
Artificial Intelligence: A Blessing or a Threat for Humanity?

In August 2018, Czech Technical University in Prague simultaneously hosted several conferences on AI-related topics: human-level AI, artificial general intelligence, biologically inspired cognitive architectures, and neural-symbolic integration technology. Reports were presented by prominent experts representing global leaders in artificial intelligence: Microsoft, Facebook, DARPA, MIT and Good AI. The reports described the current status of AI developments, identified the problems facing society that have yet to be resolved, and highlighted the threats arising from the further development of this technology. In this review, we will attempt to briefly identify the main problems and threats, as well as the possible ways to counter these threats.

To begin with, let us provide definitions for some of the terms that are commonly used in conjunction with AI in various contexts: weak, or specialized, AI; autonomous AI; adaptive AI; artificial general intelligence (AGI); strong AI; human-level AI; and super-human AI.

Weak, or specialized, AI is represented by all existing solutions without exception and implies the automated solution of one specific task, be it a game of Go or face recognition with CCTV footage. Such systems are incapable of independent learning for the purpose of solving other problems: they can only be reprogrammed by humans to do so.

Autonomous AI implies a system’s ability to function for protracted periods of time without the intervention of a human operator. This could be a solar-powered UAV performing a multi-day flight from the Champs-Elysées in Paris to Moscow’s Red Square and back, independently selecting its route and recharging stops while avoiding all sorts of obstacles.

Adaptive AI implies the system’s ability to adapt to new situations and obtain knowledge that it did not possess at the time of its creation. For example, a system originally tasked with conducting conversations in Russian could independently learn new languages and apply this knowledge in conversation if it found itself in a new language environment or if it deliberately studied educational materials on these new languages.

Artificial general intelligence implies adaptability of such a high level that the corresponding system could, given the appropriate training, be used in a wide variety of activities. New knowledge could either be self-taught or learned with the help of an instructor. It is in this same sense that the notion of strong AI is often used in opposition to weak or specialized AI.

Human-level AI implies a level of adaptability comparable to that of a human being, meaning that the system is capable of mastering the same skills as a human and within comparable periods of time.

Super-human AI implies even greater adaptability and learning speeds, allowing the system to master knowledge and skills that humans never could.

Fundamental Problems Associated with Creating a Strong AI

Despite the multitude of advances in neuroscience, we still do not know exactly how natural intelligence works. For the same reason, we do not know for sure how to create artificial intelligence. There are a number of known problems that need to be resolved, as well as differing opinions as to how these problems should be prioritized. For example, Ben Goertzel, who heads OpenCog and SingularityNET, open-source international projects to create artificial intelligence, believes that all the requisite technologies for creating an artificial general intelligence have already been developed, and that the only remaining task is to combine them in a way that ensures the necessary synergy. Other experts are more sceptical, pointing out that many of the problems discussed below need to be resolved first. Expert estimates for when a strong AI may be created also vary greatly, from ten or so years to several decades from now.

On the other hand, the emergence of a strong AI is as logical within the general process of evolution as the emergence of molecules from atoms and of cells from molecules, the formation of the central nervous system from specialized cells, the emergence of social structures, the development of speech and writing and, ultimately, the rise of information technology. Valentin Turchin demonstrates the logic behind the increasing complexity of information structures and organizational mechanisms in the process of evolution. Unless humanity perishes first, this evolution is inevitable and will, in the long run, rescue humankind, as only non-biological lifeforms will be able to survive the inevitable end of the Solar System and preserve our civilization’s information code in the Universe.

It is important to realize that creating a strong AI does not necessarily require an understanding of how natural intelligence works, just as developing a rocket does not require understanding how a bird flies. Such an AI will certainly be created, sooner or later, in one way or another, and perhaps even in several different ways.

Most experts identify the following fundamental problems that need to be solved before a general or strong AI can be created:

Few-shot learning: systems need to be developed that can learn from a small amount of training material, in contrast to current deep-learning systems, which require massive amounts of specially prepared training data.

Strong generalization: creating recognition technologies capable of identifying objects in situations that differ from those in which they were encountered in the training materials.

Generative learning models: developing learning technologies in which what is memorized is not the features of the object to be recognized, but rather the principles of its formation. This would help capture the more profound characteristics of objects, providing for faster learning and stronger generalization.

Structured prediction and learning: developing learning technologies based on representing learning objects as multi-layered hierarchical structures, with lower-level elements defining higher-level ones. This could prove an alternative solution to the problems of fast learning and strong generalization.

Solving the problem of catastrophic forgetting, which affects the majority of existing systems: a system originally trained on one class of objects and then additionally trained to recognize a new class loses the ability to recognize objects of the original class (a minimal illustration of this effect follows this list).

Achieving an incremental learning ability: a system’s capacity to gradually accumulate knowledge and perfect its skills without losing previously obtained knowledge while acquiring new knowledge. With regard to systems intended for interaction in natural languages, such a system should ideally pass the so-called Baby Turing Test by demonstrating its ability to master a language gradually, from the baby level to the adult level.

Solving the consciousness problem, i.e. coming up with a proven working model of conscious behaviour that ensures effective prediction and deliberate behaviour through the formation of an “internal worldview,” which can be used to search for optimal behavioural strategies for achieving goals without actually interacting with the real world. This would significantly improve safety and the testing of hypotheses while increasing the speed and energy efficiency of such checks, thus enabling a living or artificial system to learn independently within the “virtual reality” of its own consciousness. There are two applied sides to solving the consciousness problem. On the one hand, creating conscious AI systems would increase their efficiency dramatically. On the other hand, such systems would bring additional risks and ethical problems, since they could at some point be deemed to possess a level of self-awareness comparable to that of human beings, with the ensuing legal consequences.
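
To make the catastrophic forgetting problem mentioned above more concrete, here is a minimal, self-contained sketch in Python (NumPy only, with synthetic data and a plain logistic-regression “network”; the data, learning rate and task layout are illustrative assumptions, not a description of any system discussed in this article). A single model is trained on task A, then further trained on task B alone; its accuracy on task A collapses because the same weights are simply overwritten.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(center_pos, center_neg, n=200):
    # Two Gaussian clusters form one binary classification task.
    x = np.vstack([rng.normal(center_pos, 0.5, (n, 2)),
                   rng.normal(center_neg, 0.5, (n, 2))])
    y = np.hstack([np.ones(n), np.zeros(n)])
    return x, y

def train(w, b, x, y, lr=0.5, epochs=300):
    # Plain gradient descent on the logistic (cross-entropy) loss.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        w = w - lr * x.T @ (p - y) / len(y)
        b = b - lr * np.mean(p - y)
    return w, b

def accuracy(w, b, x, y):
    return np.mean(((x @ w + b) > 0) == y)

# Task A and task B place their classes in different regions of the plane,
# so one linear model cannot keep satisfying both with the same weights.
xa, ya = make_task(center_pos=(2.0, 2.0), center_neg=(-2.0, -2.0))
xb, yb = make_task(center_pos=(-2.0, 2.0), center_neg=(2.0, -2.0))

w, b = np.zeros(2), 0.0
w, b = train(w, b, xa, ya)
print("task A accuracy after training on A:", accuracy(w, b, xa, ya))  # close to 1.0

w, b = train(w, b, xb, yb)  # continue training on task B only
print("task A accuracy after training on B:", accuracy(w, b, xa, ya))  # drops to ~chance
print("task B accuracy after training on B:", accuracy(w, b, xb, yb))  # close to 1.0
```

Incremental-learning research aims precisely at avoiding this collapse, so that knowledge of task A survives training on task B.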

Potential AI-Related Threats

Even the emergence of autonomous or adaptive AI systems, let alone general or strong AI, is associated with several threats of varying degrees of severity that are relevant today.

The first threat to humans may not necessarily be presented by a strong, general, human-level or super-human AI, as it would be enough to have an autonomous system capable of processing massive amounts of data at high speeds. Such a system could be used as the basis for so-called lethal autonomous weapons systems (LAWS), the simplest example being drone assassins (3D-printed in large batches or in small numbers).

Second, a threat could be posed by a state (a potential adversary) gaining access to weapons systems based on more adaptive, autonomous and general AI with improved reaction times and better predictive ability.

Third, a threat for the entire world would be a situation based on the previous threat, in which several states would enter a new round of the arms race, perfecting the intelligence levels of autonomous weapon systems, as Stanislaw Lem predicted several decades ago.

Fourth, a threat to any party would be presented by any intelligent system (not necessarily a combat system; it could have industrial or domestic applications too) with enough autonomy and adaptivity to be capable not only of deliberate activity, but also of autonomous conscious target-setting, which could run counter to the individual and collective goals of humans. Such a system would have far more opportunities to achieve these goals thanks to its higher operating speeds, greater information-processing performance and better predictive ability. Unfortunately, humanity has not yet fully researched, or even grasped, the scale of this particular threat.

Fifth, society is facing a threat in the form of the transition to a new level in the development of production relations in the capitalist (or totalitarian) society, in which a minority comes to control material production and excludes an overwhelming majority of the population from this sector thanks to ever-growing automation. This may result in greater social stratification, the reduced effectiveness of “social elevators” and an increase in the numbers of people made redundant, with adverse social consequences.

Finally, another potential threat to humanity in general is the increasing autonomy of global data-processing, information-distribution and decision-making systems, since the speed of information distribution within such systems, and the scale of their interactions, could result in social phenomena that cannot be predicted on the basis of prior experience or existing models. For example, the social credit system currently being introduced in China is a unique experiment of truly civilizational scale that could have unpredictable consequences.

The problems of controlling artificial intelligence systems are currently associated, among other things, with the closed nature of existing applications based on “deep neural networks.” Such applications do not make it possible to validate the correctness of decisions prior to implementation, nor do they allow the solution provided by the machine to be analysed after the fact. This is being addressed by the emerging field of explainable artificial intelligence (XAI). The effort is aided by a renewed interest in integrating the associative (neural) and symbolic (logic-based) approaches to the problem.
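
As a purely illustrative aside, one common XAI idea is to explain a single prediction of an opaque model by fitting a simple, interpretable surrogate around that input (the intuition behind LIME-style local explanations). The sketch below uses NumPy only; the stand-in “black box” function, sample count and kernel width are hypothetical choices for demonstration, not part of any system mentioned in this article.

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(x):
    # Stand-in for an opaque model: the explainer only sees its output scores.
    return 1.0 / (1.0 + np.exp(-(3.0 * x[:, 0] - 0.2 * x[:, 1] + x[:, 0] * x[:, 1])))

x0 = np.array([0.5, -1.0])                      # the instance to explain
samples = x0 + rng.normal(0.0, 0.3, (500, 2))   # perturbations around it
scores = black_box(samples)
weights = np.exp(-np.sum((samples - x0) ** 2, axis=1) / 0.1)  # closer = heavier

# Weighted least squares: locally, scores are approximated by a + b1*x1 + b2*x2,
# and the fitted b1, b2 serve as a human-readable "explanation" of the prediction.
X = np.hstack([np.ones((len(samples), 1)), samples])
W = np.diag(weights)
coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ scores)
print("local explanation (intercept, weight of x1, weight of x2):", coef)
```

The point is not the particular method but the principle: an auditable, simple model is used to account for the behaviour of an otherwise closed one.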

Ways to Counter the Threats

It appears absolutely necessary to take the following measures in order to prevent catastrophic scenarios associated with the further development and application of AI technologies.

An international ban on LAWS, as well as the development and introduction of international measures to enforce such a ban.

Governmental backing for research into the aforementioned problems (into “explainable AI” in particular), the integration of different approaches, and the study of the principles of creating target-setting mechanisms, for the purpose of developing effective programming and control tools for intelligent systems. Such programming should be based on values rather than rules, and it is targets that need to be controlled, not actions.

Democratizing access to AI technologies and methods, including by re-investing profits from the introduction of intelligent systems into the mass teaching of computing and cognitive technologies, as well as creating open-source AI solutions and devising measures to encourage existing “closed” AI systems to open their source code. For example, the Aigents project is aimed at creating AI personal agents for mass users that would operate autonomously and be immune to centralized manipulation.

Intergovernmental regulation of the openness of AI algorithms and of the operating protocols of data-processing and decision-making systems, including the possibility of independent audits by international structures, national agencies and individuals. One initiative in this vein is the SingularityNET open-source platform and ecosystem for AI applications.

First published by our partner RIAC

How Can We as Strategists Compete with Sentient Artificial Intelligence?

The universe is made up of humans, stars, galaxies, the Milky Way, black holes and other objects, all linked and connected with one another. Everything in the universe has its own level of mechanisms and complexities. Humans are very complex creatures, and man-made objects are even more complex and difficult to understand. With the passage of time, human beings have evolved and become more technologically advanced. Human inventions have reached a level of advancement that sets up a competition between machines and the humans who made them. Humans are the most intelligent mortals on earth, yet they are now being challenged by artificial intelligence, which was invented as a helping hand to increase human efficiency. It is important to ask whether human intelligence was not enough to survive in the fast-growing technological world, or whether man-made intelligence has reached such a peak that humans now find themselves in competition with machines and human intelligence is challenged by artificial intelligence. And if there is such a competition, how can strategists compete with artificial intelligence? To answer these questions, we first need to know what artificial intelligence actually is.

The term artificial intelligence was proposed by John McCarthy in 1955, and he characterized it in 1956 at the Dartmouth Conference, the first conference on artificial intelligence: “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” There are seven main features of artificial intelligence, as follows:

“Simulating higher functions of brain

Programming a computer to use general language

Arrangement of hypothetical neurons in a manner so that they can form concepts

Way to determine and measure problem complexity

Self-improvement

Abstraction: defined as the quality of dealing with ideas, not with events

Creativity and randomness”

Another definition was given by Elaine Rich, who stated that artificial intelligence is about making computers do things that are at present done by humans; in this sense, she suggested, every such computer program is an artificial intelligence system. Jack Copeland wrote that the critical elements of artificial intelligence are generalization learning, which enables the learner to perform in situations not previously encountered; reasoning, that is, drawing inferences appropriately; problem solving, meaning that, given data, the system can arrive at conclusions; and, finally, perception, which means analysing a scanned environment and exploring the features of, and relations between, objects, with self-driving cars as an example.

Artificial intelligence is now very common in developed nations, and developing nations are using it according to their resources. The question is how artificial intelligence is actually being utilized in these different fields. Its use will be elaborated with the help of phenomena and examples from related fields for better understanding.

The world is becoming more advanced and technologies are improving as well. In this situation, states become conscious of their security. States are therefore incorporating AI approaches into their defence systems, and some states are already using artificially integrated technologies. On 11 May 2017, Dan Coats, the Director of US National Intelligence, delivered testimony to the US Congress on his annual Worldwide Threat Assessment. In the publicly released document, he said that AI is advancing computational capabilities that benefit the economy, yet those advances also enable new military capabilities for adversaries. In the meantime, the US Department of Defense (DOD) is working on such systems. Project Maven, for example, also known as the Algorithmic Warfare Cross-Functional Team (AWCFT), is intended to accelerate the integration of big data, machine learning and AI into US military capabilities. While the initial focus of the AWCFT is on computer vision algorithms for object detection and classification, it will bring together all existing algorithm-based technology initiatives associated with US defence intelligence. Command, control, communications, computers, intelligence, surveillance and reconnaissance (C4ISR) systems are achieving new heights of efficiency that enable data collection and processing at unprecedented scale and speed. When the pattern-recognition algorithms being developed in China, Russia, the UK, the US and elsewhere are coupled with precision weapons systems, they will further extend the tactical advantage of unmanned aerial vehicles (UAVs) and other remotely operated platforms. China’s defence sector has made breakthroughs in UAV “swarming” technology, including a display of 1,000 EHang UAVs flying in formation at the Guangzhou air show in February 2017. Potential scenarios could include competing UAV swarms trying to jam each other’s C4ISR networks while simultaneously engaging dynamic targets.

Humans, the most intelligent creatures, created artificial intelligence technology. The technology we humans introduced is more intelligent than we are and works faster than humans, so the big question is whether humans can compete with artificial intelligence in the near future. Nowadays it seems that AI is replacing humans in every field of life, so what will the situation be in a decade or two? An alarming competition has started between humans and AI. Elon Musk of Tesla has called AI a “demon,” and the well-known physicist Stephen Hawking also stated that artificial intelligence could prove a bad omen for humanity in the future. The signs of this are clear, and we can already see humans being replaced; we are, in some ways, losing the competition. But it is also clear that a creator can be a destroyer as well. As strategists, we must therefore have counter-strategies and backup plans to withstand this competition. The edge humans have over AI is the ability to think, and it is we who build this capacity into AI-integrated technologies, so we must set the limits for it. Otherwise this hazard could become a grave threat in the future, and humanity could possibly become extinct.

What is more disruptive about AI: its dark potentials or our (anti-intellectual) ignorance?

Anis H. Bajrektarevic

Throughout most of human evolution, both progress and its horizontal transmission were extremely slow, occasional and tedious processes. Well into the classical period of Alexander the Macedonian and his glorious Alexandrian library, the speed of our knowledge transfers, however moderate, analogue and conservative, was still always surpassing the snail-like cycles of our breakthroughs.

When our sporadic breakthroughs finally turned out to be faster than the velocity of their infrequent transmission, that marked a point of departure. Simply put, our civilizations started to differentiate significantly from one another in their respective techno-agrarian, politico-military, ethno-religious, ideological and economic setups. On the eve of the grand discoveries, that very event transformed wars and famines from low-impact and local into bigger and cross-continental affairs.

Cycles of technological breakthroughs, patents and discoveries that outpaced their own transfer occurred primarily on the Old Continent. That occurrence, with all its reorganizational effects, radically reconfigured societies. It finally marked the birth of mighty European empires, their (liberal) schools and, overall, the lasting triumph of Western civilization.

Act

For the past few centuries, we lived fear but dreamt hope, all for the sake of modern times. From WWI to www. Is this modernity of the internet age, with all its suddenly revealed breakthroughs and their instant transmission, now harbouring us in a bay of fairness, harmony and overall reconciliation? Has our history ever been, and will it ever be, on holiday? Thus, has our world ever been more than an idea? Shall we stop short at the Kantian word, a moral definition of an imagined future, or continue on to the Hobbesian realities and grasp for an objective, geopolitical definition of our common tomorrow?

The Agrarian Age inevitably brought up the question of economic redistribution. The Industrial Age culminated in the question of political participation. The AI age (along with quantum physics, nanorobotics and bioinformatics) brings a new, yet underreported challenge: human (physical and mental) powers might, far and wide and rather soon, become obsolete. If and when that happens, the question of human irrelevance is the next to ask.

Why is the AI like no technology ever before? Why re-visiting and re-thinking spirituality matters …

If you believe that the above is yet another philosophical melodrama, an anemically played alarmism, mind this:

We will soon have to redefine what we consider to be life itself.

Less than a month ago (January 2020), successful trials were completed. The border between the organic and the inorganic, the intrinsic and the artificial, has been brought down forever. The AI now has it all: quantum physics (along with quantum computing), nanorobotics, bioinformatics and organic tissue tailoring. The synthesis of all this is usually referred to as xenobots (a sort of living robots): biodegradable symbiotic nanorobots that rely exclusively on evolutionary (self-navigating) algorithms.

React

Although life is to be lived forward (without looking backward), human retrospection is the biggest reservoir of insights. Of what makes us human.

Hence, what does our history of technology in relation to human development tell us so far?

Elaborating on Fukuyama’s well-known argument of ‘defensive modernization’, it is evident that throughout the entire human history the technological drive was aimed at satisfying the security (and control) objective. It was rarely (if at all) driven by a desire to gain knowledge outside of convention in order to ease human existence and to enhance human emancipation and the liberation of societies at large. Thus, unless operationalized by the system, both intellectualism (human autonomy, mastery and purpose) and technological breakthroughs were traditionally felt and perceived as a threat. As a problem, not a solution.

Ok. But what has brought us (under) the AI today?

It was our acceptance. Of course, manufactured.

All cyber-social networks and related search engines are far from what they are portrayed to be: a decentralized but unified intelligence, attracted by the gravity of quality rather than navigated by the force of a specific locality. (These networks were not introduced to promote and emancipate other cultures but to maintain and further strengthen the supremacy of the dominant one.)

In no way do they correspond with the neuroplasticity of the physics of our consciousness. They only offer an answer to our anxieties, of which the fear of free time is the largest, since free time coupled with silence is our gateway to creativity and self-reflection. In fact, the cyber-tools of these data sponges primarily serve the purposes of predictability, efficiency, calculability and control, and only then everything else, such as being user-friendly and attractive as a mass service.

To observe the new corrosive dynamics of social phenomenology between manipulative fetishization (probability) and self-trivialization (possibility), the cyber-social platforms – these dustbins of human empathy in the muddy suburbs of consciousness – are particularly interesting.

This is how the human-presence-eliminating technologies have been introduced to, and accepted by, us.

Packed

How did we reflect – in our past – on new social dynamics created by the deployment of new technologies?

The Aegean theater of Antique Greece was a place of astonishing revelations and intellectual excellence, of a remarkable density and proximity not surpassed up to our age. All we know about science, philosophy, sports, arts, culture and entertainment, the stars and the earth was postulated, explored and examined then and there. Simply, it was a time and place of the triumph of human consciousness, pure reasoning and sparkling thought. However, neither Euclid, Anaximander, Heraclitus, Hippocrates (both of Chios and of Cos), Socrates, Archimedes, Ptolemy, Democritus, Plato, Pythagoras, Diogenes, Aristotle, Empedocles, Conon, Eratosthenes nor any of dozens of other brilliant ancient Greek minds ever referred, by a single word or sentence, to something that was their everyday life, something they saw literally on every corner throughout their entire lives: the immoral, unjust, notoriously brutal and oppressive slavery system that powered the Antique state. (Slaves were not even regarded as humans, but rather as ‘phonic tools’, tools able to speak.) This myopia, this absence of critical reference to the obvious and omnipresent, is a historic message, highly disturbing, self-telling and quite a warning.

So, finally

Why is the AI like no technology ever before?

Ask Google; you can see that I am busy messaging right now!

They promised us Martian colonies; instead, we got Facebook

Dr. Andrea Galli

The advent of digitization changes the values of society, especially as an apparatus of power, not as a real benefit to humanity.

Everyone talks about digitization. When I browse the science and technology sections of newspapers, I mostly find articles on smartphones, clouds and social media. And I realize that the entertainment industry has become the engine of technological progress nowadays.

For purposes of illustration: by 2019, around US$75 billion in venture capital had been invested in California, more than half of all venture capital in the US, distributed among more than 2,300 startups. That is substantial. But if you take a closer look, the picture changes. More than half of that money goes into software development, with only about 20 percent allocated to life sciences and almost nothing to significant engineering. The buzzwords are always the same: “cloud” something, “smart” something, “AI” something, “blockchain” something. Meanwhile, the more far-fetched the claim, the higher the probability of funding, even if the real innovative benefit to humanity is negligible.

The situation is no better in other technology centers, including those in Europe. So, in the end, we have cases like Theranos, which turned out to be a fraud machine on a large scale. We have the binary-options scam startups in Tel Aviv, which plundered the savings of people from half a continent. Or Wirecard in Germany, suspected of operating one of the largest cloud platforms for money laundering.

If they are not based on a robust fraudulent scheme, the business models of such “cloud” something, “smart” something, “AI” something, “blockchain” something companies are ailing right from the start. Most users are not willing to pay money to use their platforms. That is why the tech giants have come up with an idea: they pretend to believe in the dream of free use, and users pay with their private data.

Just imagine that we are back in the year 1990, when sending letters and making phone calls were still relatively expensive. The representative of a new telecommunications company stands at your door and says: “We have a super offer for you. You will never have to pay for long-distance calls again, and we will also deliver every letter for free. But we will record everything you say or write. Furthermore, we reserve the right to analyze this information, share it with others, sell it and, if we don’t like specific content, to delete it.” It’s clear what you would have said or done to such a representative at the time.

Today, we embrace the digital monitoring of society because we see it as a new normality. The sin was committed in 2004, when Google went public after the dotcom bubble burst. Even in the 1990s, search engines and social networks were still underpinned by the best intentions. They were meant to connect people, help share knowledge, create common ground and make money. It was about indexing websites while preserving the informational self-determination of the individual. Then it became clear that little money could be earned that way. And so began the indexing, the profiling, of users, i.e., people of flesh and blood.

The new tech companies collect all the data about our searching, writing, reading, walking, breathing, eating, paying, liking, loving, disliking, laughing and purchasing behavior. This is called Big Data. They can use that information to track us and sell us things. Or to monitor our thoughts and sell us lies. Or to surveil our opinions and manipulate us. Or they can resell the data and the analyzed profiles to third parties, including governmental organizations and political parties.

Artificial Intelligence plays a dominant role in this user profiling, monitoring, and surveillance business, since it delivers the techniques for it. Some computer scientists involved in Artificial Intelligence development enthusiastically say: “When computational learning ability meets large amounts of data, the quantity should one day turn into quality.” In other words, intelligence that learns on its own is actually created. Maybe so, but we are a long way from that.

Neural networks in AI remain classification and correlation machines. They detect patterns in data, for example, faces in billions of pictures. From such patterns, findings can be derived which, in turn, can be interpreted and used by humans. Yet, first of all, this has nothing to do with intelligence in the genuine sense of the word. It has nothing to do with the ability of an organism to independently create a model of its world and to make decisions on that basis in order to adapt and thus survive. If still more computing power meets more data, we get better correlations and better pattern recognition, but not intelligence.

In 2012, the world’s fastest supercomputer was running at the Lawrence Livermore National Laboratory. It simulated a neural network with the complexity of a human brain: 530 billion neurons and 137 trillion synapses. The machine required eight megawatts of power yet was 1,500 times slower than a human brain. Consequently, it would need about 12 gigawatts (8 MW × 1,500) to simulate an average human brain in real time (let us say, one with the acumen of Homer Simpson). That is the power of about 15 to 20 nuclear reactors or 100 coal-fired power plants. Greta Thunberg will be glad to hear it! We will never, ever create artificial intelligence with the existing computer architectures.

The tech giants, from Facebook to Google, and the technological centers pursuing the buzz of “cloud” something, “smart” something, “AI” something, “blockchain” something are making our lives difficult with their practices. Silicon Valley and other comparable innovation centers promised us Martian and lunar colonies. They promised us luxurious interplanetary vessels, populated with androids to do our housework and sexy cyborgs to entertain us with brilliant conversation. Instead, we received smartphones with preinstalled Facebook apps and other similar social media platforms. And in certain cases we got industrial robots that are taking away our jobs. Or algorithms running on supercomputers that automatically invest our hard-earned pensions in the technological innovation of the “somethings”. Or computational propaganda bots that trigger chain reactions of posts on social networks by publishing messages of ideology, hate or investment advice. The list of innovations that we got is long.

The upside is this: if Facebook were to vanish from the face of the earth tomorrow, what would the consequences be for humanity? None! Except for the tears of loneliness flowing down empty screens among social media addicts. But one thing is clear: the advent of digitization changes the values of society and the quality of life as much as the advent of plastic did, and its long-term benefits are ambiguous. The responsibility for these innovations is enormous, certainly not as a “technical means”, but as an instrument of power, and as power itself. It is through the culture of digitization that the spirit of a new power will manifest itself. There is no doubt (you can already judge by the early signs today) that digitization will be authoritarian and repressive like no other culture in the world.
