Wagner and Furst exhaustively explore the inner workings and implications of AI in their new book, “AI Supremacy: Winning in the Era of Machine Learning”. Each chapter focuses on the current and future state of AI within a specific industry, country or society in general. Special emphasis is placed on how AI will shape the domestic, diplomatic and military landscapes of the US, EU and China.
Here is an interview with Daniel Wagner.
Can you briefly explain the differences between artificial intelligence, machine learning, and deep learning?
Artificial intelligence (AI) is the overarching science and engineering associated with intelligent algorithms, whether or not they learn from data. However, the definition of intelligence is subject to philosophical debate, and even the term "algorithm" can be interpreted in a wide context. This is one of the reasons there is some confusion about what is and is not AI: people use the word loosely and have their own definitions of what they believe AI to be. AI is best understood as a catch-all term that tends to imply the latest advances in intelligent algorithms, but the context in which the phrase is used determines its meaning, which can vary quite widely.
Machine learning (ML) is a subfield of AI that focuses on intelligent algorithms that can learn automatically (without being explicitly programmed) from data. There are three general categories of ML: supervised machine learning, unsupervised machine learning, and reinforcement learning.
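To make the idea of "learning from data without being explicitly programmed" concrete, here is a minimal supervised-learning sketch in pure Python (our own illustration, not taken from the book): the program is never told the rule y = 2x, but recovers the weight from labelled examples by gradient descent.

```python
# Minimal supervised machine learning: the model parameter w is learned
# from labelled (input, output) examples rather than hand-coded.
def train(examples, lr=0.01, steps=1000):
    w = 0.0  # initial guess for the model parameter
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y   # prediction error on this example
            w -= lr * error * x # gradient-descent update
    return w

data = [(1, 2), (2, 4), (3, 6)]  # labelled examples of y = 2x
w = train(data)
print(round(w, 2))               # learned weight, close to 2.0
```

This is the supervised case; unsupervised learning would receive the inputs without labels, and reinforcement learning would learn from rewards rather than labelled pairs.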
Deep learning (DL) is a subfield of ML that imitates the workings of the human brain (via neural networks) in processing data and creating patterns for use in decision-making. It is true that the way the human brain processes information was one of the main inspirations behind DL, but it only mimics the functioning of neurons. This does not mean that consciousness is being replicated, because we really do not understand all the underlying mechanics driving consciousness. Since DL is a rapidly evolving field, there are other, more general definitions of it, such as a neural network with more than two layers. The idea of layers is that information is processed by the DL algorithm at one level and then passed on to the next level, so that higher levels of abstraction and conclusions can be drawn about the data.
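The layer idea can be sketched in a few lines of pure Python (an illustrative toy of our own, with made-up weights): each layer transforms its input and hands the result to the next, so later layers operate on progressively more abstract features.

```python
import math

def dense(inputs, weights):
    # One layer: a weighted sum per unit, followed by a nonlinearity (tanh).
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

layer1 = [[0.5, -0.2], [0.1, 0.9]]   # 2 inputs -> 2 hidden units
layer2 = [[1.0, -1.0]]               # 2 hidden units -> 1 output

hidden = dense([1.0, 0.5], layer1)   # first level of abstraction
output = dense(hidden, layer2)       # higher-level conclusion
print(len(hidden), len(output))      # -> 2 1
```

A real deep network stacks many such layers and learns the weights from data instead of fixing them by hand.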
Is China’s Social Credit Score system about to usher in an irreversible Orwellian nightmare there? How likely is it to spread to other dictatorships?
The social credit system that the Chinese government is in the process of unleashing is creating an Orwellian nightmare for some of China's citizens. We say "some" because many Chinese citizens do not necessarily realize that it is being rolled out. One reason is that the government has been gradually implementing versions of what has become the social credit system over a period of years without calling it that. Another is that most Chinese citizens have become numb to the intrusive nature of the Chinese state. They have been poked and prodded in various forms for so long that they have become accustomed to, and somewhat accepting of, it. That said, the social credit system has real consequences for those who fall afoul of it; they will soon learn about the consequences of having done so, if they have not learned already.
As we note in the book, the Chinese government has shared elements of its social credit system technology with a range of states across the world. There is every reason to believe that authoritarian governments will wish to adopt the technology and use it for their own purposes. Some have already done so.
How can we stop consumer drones from being used to aid in blackmail, burglary, assassination, and terrorist attacks?
As Daniel notes in his book Virtual Terror, governments are having a difficult time keeping track of the tens of millions of drones that are in operation in societies around the world. Registering them is largely voluntary and there are too few regulations in place governing their use. Given this, there is little that can be done, at this juncture, to prevent them from being used for nefarious purposes. Moreover, drones’ use on the battlefield is transforming the way individual battles will be fought, and wars will be waged. We have a chapter in the book devoted to this subject.
Google, YouTube, Twitter and Facebook have been caught throttling/ending traffic to many progressive (TeleSur, TJ Kirk) and conservative (InfoWars, PragerU) websites and channels. Should search engines and social media platforms be regulated as public utilities, to lend 1st Amendment protections to the users of these American companies?
The current battles being waged in the courts, in legislatures, and on the battlefield of social media itself are already indicative of how the many unanswered questions associated with the rise of social media are being addressed out of necessity. It seems that no one, least of all the social media firms, wants to assume responsibility when things go wrong or uncomfortable questions must be answered. Courts and legislatures will ultimately have to find a middle-ground response to issues such as First Amendment protections, but this will likely remain a moving target for some time to come: there is no single black-or-white answer, and as each new law comes into effect its ramifications will become known, which means the laws will undoubtedly need to be modified over time.
Do you think blockchain will eventually lead to a golden era of fiscal transparency?
This is hard to say. On one hand, the rise of cryptocurrencies brought with it the promise of money outside the control of governments and large corporations. On the other, cryptocurrencies have been subject to a number of high-profile heists, and there are still fundamental issues with them, such as Bitcoin's limited throughput, which allows it to process only a handful of transactions per second. This makes some cryptocurrencies less viable for real-world transactions and everyday commerce.
The financial services industry has jumped on the blockchain bandwagon, but they have taken the open concept of some cryptocurrencies and reinvented it as distributed ledger technology (DLT). To be part of DLTs created by financial institutions, a joining member must be a financial institution. For this reason, the notion of transparency is not relevant, since the DLT will be controlled by a limited number of members and only they will determine what information is public and what is not.
The other issue with the crypto space right now is that it is filled with fraud. At the end of the day, crypto is an asset class like gold or any other precious metal. It does not actually produce anything; the only real value it has is the willingness of another person to pay more for it in the future. It is possible that a few cryptocurrencies will survive long term and become somewhat viable, but the evolution of blockchain will likely continue to move towards DLT that more people will trust. Governments are also likely to issue their own cryptocurrencies in the future, which will bring the technology into the mainstream.
Taiwan has recently started using online debate forums to help draft legislation, in a form of direct democracy. Kenya just announced that they will post presidential election results on a blockchain. How can AI and blockchain enhance democracy?
Online debate forums are obviously a good thing, because having the average person engage in political debate and being able to record and aggregate voting results will create an opportunity for more transparency. The challenge becomes how to verify the identities of the people submitting their feedback. Could an AI program be designed to submit feedback millions of times to give a false representation of the public’s concerns?
Estonia has long been revered as the world’s most advanced digital society, but researchers have pointed out serious security flaws in its electronic voting system, which could be manipulated to influence election outcomes. AI can help by putting in place controls to verify that the person providing feedback for legislation is a citizen. Online forums could force users to take a pic of their face next to their passport to verify their identity with facial recognition algorithms.
Should an international statute be passed banning scientists from installing emotions, especially pain and fear, into AI?
Perhaps, for now at least, the question should be: should scientists be barred from designing robots or other forms of AI to imitate human emotions? The short answer is that it depends. On one hand, AI imitating human emotions could be a good thing, such as when caring for the elderly or teaching a complex concept to a student. However, a risk is that when AI can imitate human emotions very well, people may believe they have gained a true friend who understands them. It is somewhat paradoxical that the rise of social media has connected more of us, yet some people still admit that they lack meaningful relationships with others.
You don’t talk much about India in your book. How far behind are they in the AI race, compared to China, the US & EU?
Surprisingly, many of the world's countries have only adopted a formal AI strategy in the last year. India is one of them; it only formally adopted an AI strategy in 2018 and lags well behind China, the EU, the US, and a variety of other countries. India has tremendous potential to meaningfully enter the race for AI supremacy and become a viable contender, but it still lacks a military AI strategy. India already contributes to advanced AI-oriented technology through its thriving software, engineering, and consulting sectors. Once it ramps up a national strategy, it should quickly become a leader in the AI arena, to the extent that it devotes sufficient resources to that strategy and swiftly and effectively implements it. That is not a guaranteed outcome, based on the country's history with some prior national initiatives. We must wait and see whether India lives up to its potential in this arena.
On page 58 you write, “Higher-paying jobs requiring creativity and problem-solving skills, often assisted by computers, have proliferated… Demand has increased for lower skilled restaurant workers, janitors, home health aides, and others providing services that cannot be automated.” How will we be able to stop this kind of income inequality?
In all likelihood, the rise of AI will, at least temporarily, increase the schism between highly paid white-collar jobs and lower-paid blue-collar jobs. At the same time, AI will, over decades, dramatically alter the jobs landscape. Entire industries will be transformed to become more efficient and cost-effective. In some cases this will result in a loss of jobs, while in others it will result in job creation. What history has shown is that, even in the face of transformational change, the job market has a way of self-correcting; overall levels of employment tend to stay more or less the same. We have no doubt that this will prove to be the case in the AI-driven era. While income inequality will remain a persistent threat, our expectation is that, two decades from now, it will be no worse than it is right now.
AI systems like COMPAS and PredPol have been exposed for being racially biased. During YouTube's "Adpocalypse", many news and opinion videos got demonetized by algorithms indiscriminately targeting keywords like "war" and "racism". How can scientists and executives prevent their biases from influencing their AI?
This will be an ongoing debate. Facebook removed a PragerU video in which a woman described the need for strong men in society and the problem with feminizing them. Ultimately, Facebook said it was a mistake and put the video back up. So the question becomes: who decides what constitutes "racist" or "hate speech" content? Legal issues seem to emerge if it can be argued that the content being communicated is calling on people to act in a violent way.
Could the political preferences of a social media company's executives overrule the common person's ability to make up their own mind? On the other hand, India has seen a string of mob killings driven by disinformation campaigns on WhatsApp, mostly among first-time smartphone users. Companies could argue that some people are not able to distinguish between real and fake videos, so content must be censored in such cases.
Ultimately, executives and scientists will need to have an open and ongoing debate about content censorship. Companies must devise a set of principles and adhere to them to the best of their ability. As AI becomes more prevalent in monitoring and censoring online content there will have to be more transparency about the process and the algorithms will need to be adjusted following a review by the company. In other words, companies cannot prevent algorithmic biases, but they can monitor them and be transparent with the public about steps to make them better over time.
Amper is an AI music composer. Heliograf has written about 1000 news blurbs for WaPo. E-sports and e-bands are starting to sell out stadiums. Are there any human careers that you see as being automation-proof?
In theory, nearly any cognitive or physical task can be automated. We do not believe that people should be too worried, at least for the time being, about the implications of doing so because the costs to automate even basic tasks to the level of human performance is extremely high, and we are a good ways away from being technically capable of automating most tasks. However, AI should spark conversations about how we want to structure our society in the future and what it means to be human because AI will improve over time and become more dominant in the economy.
In Chapter 1 you briefly mention digital amnesia (outsourcing the responsibility of memorizing stuff to one’s devices). How else do you anticipate consumer devices will change us psychologically in the next few decades?
We could see a spike in schizophrenia because of the immersive nature of virtual, augmented, and mixed reality, which will increasingly blur the lines between reality and fantasy. In the 1960s there was a surge of interest in mind-expanding drugs such as psychedelics. However, someone ingesting LSD knew there was a time limit to the effects of the drug. These technologies do not end. Slowly, the real world could become less appealing, and less real, for heavy users of extended reality technology. This could affect relationships with other humans and increase the prevalence of mental illness. Also, as discussed in the book, we are already seeing people who cannot deal with risk in the real world. There have been several cases of animal maulings, cliff falls, and car crashes among individuals in search of the perfect "selfie". This tendency to want to perfect our digital personas should be a topic of debate in schools and at the dinner table.
Ready Player One is the most recent sci-fi film positing the gradual elimination of corporeal existence through Virtual Reality. What do you think of the transcension hypothesis on Fermi’s paradox?
The idea that our consciousness can exist independently from our bodies has occurred throughout humanity’s history. It appears that our consciousness is a product of our own living bodies. No one knows if a person’s consciousness can exist after the body dies, but some have suggested that a person’s brain still functions for a few minutes after the body dies. It seems we need to worry about the impact of virtual reality on our physical bodies before it will be possible for us to transcend our bodies and exist on a digital plane. This is a great thought experiment, but there is not enough evidence to suggest that this is even remotely possible in the future.
What role will AI play in climate change?
AI will become an indispensable tool for helping to predict the impacts of climate change in the future. The field of “Climate Informatics” is already blossoming, harnessing AI to fundamentally transform weather forecasting (including the prediction of extreme events) and to improve our understanding of the effects of climate change. Much more thought and research needs to be devoted to exploring the linkages between the technology revolution and other important global trends, including demographic changes such as ageing and migration, climate change, and sustainable development, but AI should make a real difference in enhancing our general understanding of the impacts of these, and other, phenomena going forward.
Ten Ways the C-Suite Can Protect their Company against Cyberattack
Cyberattacks are among the top 10 global risks of highest concern for the next decade, with an estimated price tag of $90 trillion if cybersecurity efforts do not keep pace with technological change. While there is abundant guidance in the cybersecurity community, the application of prescribed actions continues to fall short of what is required to ensure effective defence against cyberattacks. The challenges created by accelerating technological innovation have reached new levels of complexity and scale; today, responsibility for cybersecurity in organizations is no longer the job of one Chief Security Officer alone, but involves everyone.
The Cybersecurity Guide for Leaders in Today’s Digital World was developed by the World Economic Forum Centre for Cybersecurity and several of its partners to assist the growing number of C-suite executives responsible for setting and implementing the strategy and governance of cybersecurity and resilience. The guide bridges the gap between leaders with and without technical backgrounds. Following almost one year of research, it outlines 10 tenets that describe how cyber resilience in the digital age can be formed through effective leadership and design.
“With effective cyber-risk management, business executives can achieve smarter, faster and more connected futures, driving business growth,” said Georges De Moura, Head of Industry Solutions, Centre for Cybersecurity, World Economic Forum. “From the steps necessary to think more like a business leader and develop better standards of cyber hygiene, through to the essential elements of crisis management, the report offers an excellent cybersecurity playbook for leaders in public and private sectors.”
“Practicing good cybersecurity is everyone’s responsibility, even if you don’t have the word “security” in your job title,” said Paige H. Adams, Global Chief Information Security Officer, Zurich Insurance Group. “This report provides a practical guide with ten basic tenets for business leaders to incorporate into their company’s day-to-day operations. Diligent application of these tenets and making them a part of your corporate culture will go a long way toward reducing risk and increasing cyber resilience.”
“The recommendation to foster internal and external partnerships is one of the most important, in my view,” said Sir Rob Wainwright, Senior Cyber Partner, Deloitte. “The dynamic nature of the threat, not least in terms of how it reflects the recent growth of an integrated criminal economy, calls on us to build a better global architecture of cyber cooperation. Such cooperation should include more effective platforms for information sharing within and across industries, releasing the benefits of data integration and analytics to build better levels of threat awareness and response capability for all.”
The Ten Tenets
1. Think Like a Business Leader – Cybersecurity leaders are business leaders first and foremost. They have to position themselves, teams and operations as business enablers. Transforming cybersecurity from a support function into a business-enabling function requires a broader view and a stronger communication skill set than was required previously.
2. Foster Internal and External Partnerships – Cybersecurity is a team sport. Today, information security teams need to partner with many internal groups and develop a shared vision, objectives and KPIs to ensure that timelines are met while delivering a highly secure and usable product to customers.
3. Build and Practice Strong Cyber Hygiene – Five core security principles are crucial: a clear understanding of the data supply chain, a strong patching strategy, organization-wide authentication, a secure active directory of contacts, and encrypted critical business processes.
4. Protect Access to Mission-Critical Assets – Not all user access is created equal. It is essential to have strong processes and automated systems in place to ensure appropriate access rights and approval mechanisms.
5. Protect Your Email Domain Against Phishing – Email is the most common point of entry for cyber attackers, with the median company receiving over 90% of its detected malware via this channel. The guide highlights six ways to protect employees' emails.
6. Apply a Zero-Trust Approach to Securing Your Supply Chain – The high velocity of new applications developed alongside the adoption of open source and cloud platforms is unprecedented. Security-by-design practices must be embedded in the full lifecycle of the project.
7. Prevent, Monitor and Respond to Cyber Threats – The question is not if, but when a significant breach will occur. How well a company manages this inevitability is ultimately critical. Threat intelligence teams should perform proactive hunts throughout the organization’s infrastructure and keep the detection teams up to date on the latest trends.
8. Develop and Practice a Comprehensive Crisis Management Plan – Many organizations focus primarily on how to prevent and defend while not focusing enough on institutionalizing the playbook of crisis management. The guide outlines 12 vital components any company’s crisis plan should incorporate.
9. Build a Robust Disaster Recovery Plan for Cyberattacks – A disaster recovery and continuity plan must be tailored to security incident scenarios to protect an organization from cyberattacks and to instruct on how to react in case of a data breach. Furthermore, it can reduce the amount of time it takes to identify breaches and restore critical services for the business.
10. Create a Culture of Cybersecurity – Keeping an organization secure is every employee's job. Tailoring training, incentivizing employees, building elementary security knowledge and enforcing sanctions on repeat offenders could aid the development of a culture of cybersecurity.
In the Fourth Industrial Revolution, all businesses are undergoing transformative digitalization of their industries that will open new markets. Cybersecurity leaders need to take a stronger and more strategic leadership role. Inherent to this new role is the imperative to move beyond the role of compliance monitors and enforcers.
Moving First on AI Has Competitive Advantages and Risks
Financial institutions that implement AI early have the most to gain from its use, but also face the largest risks. The often-opaque nature of AI decisions and related concerns of algorithmic bias, fiduciary duty, uncertainty, and more have left implementation of the most cutting-edge AI uses at a standstill. However, a newly released report from the World Economic Forum, Navigating Uncharted Waters, shows how financial services firms and regulators can overcome these risks.
Using AI responsibly is about more than mitigating risks; its use in financial services presents an opportunity to raise the ethical bar for the financial system as a whole. It also offers financial services a competitive edge against their peers and new market entrants.
“AI offers financial services providers the opportunity to build on the trust their customers place in them to enhance access, improve customer outcomes and bolster market efficiency,” says Matthew Blake, Head of Financial Services, World Economic Forum. “This can offer competitive advantages to individual financial firms while also improving the broader financial system if implemented appropriately.”
Across several dimensions, AI introduces new complexities to age-old challenges in the financial services industry, and the governance frameworks of the past will not adequately address these new concerns.
Explaining AI decisions
Some forms of AI are not interpretable even by their creators, posing concerns for financial institutions and regulators who are unsure how to trust solutions they cannot understand or explain. This uncertainty has left the implementation of cutting-edge AI tools at a standstill. The Forum offers a solution: evolve past “one-size-fits-all” governance ideas to specific transparency requirements that consider the AI use case in question.
For example, it is important to clearly and simply explain why a customer was rejected for a loan, which can significantly impact their life. It is less important to explain a back-office function whose only objective is to convert scans of various documents to text. For the latter, accuracy is more important than transparency, as the ability of this AI application to create harm is limited.
Beyond “explainability”, the report explores new challenges surrounding bias and fairness, systemic risk, fiduciary duty, and collusion as they relate to the use of AI.
Bias and fairness
Algorithmic bias is another top concern for financial institutions, regulators and customers surrounding the use of AI in financial services. AI's unique ability to rapidly process new and different types of data raises the concern that AI systems may develop unintended biases over time; combined with their opaque nature, such biases could remain undetected. Despite these risks, AI also presents an opportunity to decrease unfair discrimination or exclusion, for example by analyzing alternative data that can be used to assess "thin file" customers whom traditional systems cannot evaluate due to a lack of information.
The widespread adoption of AI also has the potential to alter the dynamics of the interactions between human actors and machines in the financial system, creating new sources of systemic risk. As the volume and velocity of interactions grow through automated agents, emerging risks may become increasingly difficult to detect, spreading across various financial institutions, fintechs, large technology companies, and other market participants. These new dynamics will require supervisory authorities to reinvent themselves as hubs of system-wide intelligence, using AI themselves to supervise AI systems.
As AI systems take on an expanded set of tasks, they will increasingly interact with customers. As a result, fiduciary requirements to always act in the best interests of the customer may soon arise, raising the question of whether AI systems can be held "responsible" for their actions, and if not, who should be held accountable.
Given that AI systems can act autonomously, they may plausibly learn to engage in collusion without any instruction from their human creators, and perhaps even without any explicit, trackable communication. This challenges the traditional regulatory constructs for detecting and prosecuting collusion and may require a revisiting of the existing legal frameworks.
“Using AI in financial services will require an openness to new ways of safeguarding the ecosystem, different from the tools of the past,” says Rob Galaski, Global Leader, Banking & Capital Markets, Deloitte Consulting. “To accelerate the pace of AI adoption in the industry, institutions need to take the lead in developing and proposing new frameworks that address new challenges, working with regulators along the way.”
For each of the concerns described above, the report outlines the key underlying root causes of the issue, highlights the most pressing challenges, identifies how those challenges might be addressed through new tools and governance frameworks, and describes what opportunities might be unlocked by doing so.
The report was prepared in collaboration with Deloitte and follows five previous reports on financial innovation. The World Economic Forum will continue its work in Financial Services, with a particular focus on AI’s connections to other emerging technologies in its next phase of research through mid-2020.
US Blacklist of Chinese Surveillance Companies Creates Supply Chain Confusion
The United States Department of Commerce's decision to blacklist 28 Chinese public safety organizations and commercial entities hit some of China's most dominant vendors within the security industry. Of the eight commercial entities added to the blacklist, six are among China's most successful digital forensics, facial recognition, and AI companies. However, the two surveillance manufacturers on the blacklist, Dahua and Hikvision, could have the most significant impact on the global market at large.
Putting geopolitics aside, Dahua's and Hikvision's positions within the overall global digital surveillance market make their blacklisting somewhat of a shock, with the immediate effects touching off significant questions among U.S. partners, end users, and supply chain partners.
Frost & Sullivan's research finds that Hikvision and Dahua currently rank second and third in total global sales in the $20.48 billion global surveillance market, and are fast-tracking to become the top two vendors among IP surveillance camera manufacturers. Their insurgent rise among IP surveillance camera providers came about due to both companies' aggressive growth pipelines, significant product libraries of high-quality surveillance cameras and new imaging technologies, and low-cost pricing models that provide customers with higher levels of affordability.
This is also not the first time that these two vendors have found themselves in the crosshairs of the U.S. government. In 2018, the U.S. initiated a ban on the sale and use of Hikvision and Dahua camera equipment within government-owned facilities, including the Department of Defense, military bases, and government-owned buildings. However, the vague language of the ban made it difficult for end users to determine whether they were just banned from new purchases of Dahua or Hikvision cameras or if they needed to completely rip-and-replace existing equipment with another brand. Systems integrators, distributors, and even technology partners themselves remained unsure of how they should handle the ban’s implications, only serving to sow confusion among U.S. customers.
In addition to confusion over how end users in the government space were to proceed regarding their Hikvision and Dahua equipment came the realization that both companies held significant customer share among commercial companies throughout the U.S. market—so where was the ban’s line being drawn for these entities? Were they to comply or not? If so, how? Again, these questions have remained unanswered since 2018.
Hikvision and Dahua each have built a strong presence within the U.S. market, despite the 2018 ban. Both companies are seen as regular participants in industry tradeshows and events, and remain active among industry partners throughout the surveillance ecosystem. Both companies have also attempted to work with the U.S. government to alleviate security concerns and draw clearer guidelines for their sales and distribution partners throughout the country. They even established regional operations centers and headquarters in the country.
While blacklisting does send a clearer message to end users, integrators, and distributors—for sales and usage of these companies’ technologies—remedies for future actions still remain unclear. When it comes to legacy Hikvision and Dahua cameras, the onus appears to be on end users and integrators to decide whether rip-and-replace strategies are the best way to comply with government rulings or to just leave the solutions in place and hope for the best.
As far as broader global impacts of this action, these will remain to be seen. While the 2018 ban did bring about talks of similar bans in other regions, none of these bans ever materialized. Dahua and Hikvision maintained their strong market positioning, even achieving higher-than-average growth rates in the past year. Blacklisting does send a stronger message to global regulators though, so market participants outside the U.S. will just have to adopt a wait-and-see posture to see how, if at all, they may need to prepare their own surveillance equipment supply chains for changes to come.