Science & Technology

WEF Launches Tech for Integrity Platform in Anti-Corruption Drive

The World Economic Forum’s Partnering Against Corruption Initiative (PACI) has launched a Tech for Integrity platform to accelerate anti-corruption efforts and shorten the time needed to make a tangible impact. The digital platform will draw on tech innovators and on multistakeholder partnerships with Citi, the Inter-American Development Bank, Transparency International and others to rebuild trust and integrity globally.

Corruption impedes economic growth, contributes to social inequality and obstructs innovation. As such, global leaders of business and government are looking for better ways to improve integrity and transparency across sectors. Technology has emerged as the greatest ally of transparency and a critical tool against corruption.

Four nascent technologies in particular – blockchain, big data analytics, artificial intelligence and e-governance – hold significant promise for helping businesses and governments safeguard their primary points of vulnerability. Because these technologies are at such an early stage, the most appropriate tools remain difficult for the governments, businesses and civil society actors that need them to identify and source. Understanding the role of technology, and connecting leaders of government and business with the resources they need to promote integrity, has huge potential to create downstream benefits for every part of society.

The T4I platform, which emerged from PACI’s Future of Trust and Integrity project, aims to provide technological solutions to the challenges faced by stakeholders addressing corruption. Last year, Citi, in collaboration with public and private sector allies, created and launched the Tech for Integrity Challenge to source innovative solutions for fighting corruption. Citi, Mastercard, Microsoft, IBM, PwC, Clifford Chance and Let’s Talk Payments sourced 1,000 registrations and, together with 80 other allies, selected 96 finalists to present their ideas at six Demo Days around the world.

“Citi is excited that PACI will continue the T4I mission to solve issues of integrity, continue the collaboration of this fantastic ecosystem and build further momentum in adopting these solutions,” said Julie Monaco, Global Head of the Public Sector Group in Citi’s Corporate and Investment Banking division. “We look forward to working with WEF and PACI to further develop and implement solutions that will make a global impact.”

PACI’s next generation of this platform provides three intersecting spaces to drive thought leadership, build networks and increase impact:

Knowledge accelerator – Driven by public-private cooperation, the knowledge accelerator is a dynamic digital repository of information that aims to foster communication and collaboration to deepen understanding of how technologies can better address corruption.

Synergy lab – The synergy lab will help leaders of government, business and civil society identify their specific needs and connect them with innovators providing the most appropriate technology solutions to address those needs.

Impact initiatives – In concert with international organizations, the private sector and civil society, the impact initiatives will share best practices on available solutions, evaluate existing implementation projects, as well as directly engage with such projects to effectively demonstrate how to build solutions into government and business processes to promote trust and integrity.

“Technology is becoming one of our greatest allies in the effort to disrupt corruption,” said Luis Alberto Moreno, President of the Inter-American Development Bank. “Coupled with political resolve, the digital revolution can help us reach our goals of greater transparency and accountability in government much faster and more efficiently than we thought possible. The IDB will continue to support this important and timely initiative.”

“New technologies offer great opportunities to enhance participation, access to information and the possibility to monitor public policies. Nevertheless, it is misguided to believe that technology will solve all corruption problems. We have to be careful not to introduce tools that might strengthen the anonymity that is essential to corrupt deals,” said Delia Ferreira Rubio, Chair of Transparency International, a key partner in the PACI project responsible for the new platform. “The Tech for Integrity platform provides a space to debate important issues and share technologies to increase transparency.”

“The Fourth Industrial Revolution can only deliver on its potential if leaders know when, where and how to use the tools emerging from it,” said Olivier Schwab, Managing Director and Head of Business Engagement at the World Economic Forum. “PACI’s Tech for Integrity platform connects innovators and implementers to foster a better understanding of the drivers of trust and how to utilize the latest technologies to rebuild integrity.”

Science & Technology

Artificial intelligence and moral issues: AI between war and self-consciousness

At the beginning of 2018, the number of mobile phones in use surpassed the number of humans on the planet, reaching 8 billion. These devices, together with some two billion computers, are in theory all networked to one another. Given the incredible amount of data involved, and considering that this network is in constant communication and constantly growing, is it possible that mankind has already created a massive brain? An artificial intelligence that has taken on an identity of its own?

The field of robotics is constantly evolving and continues to make strides. It therefore seems clear that sooner or later we shall move from artificial intelligence to super-intelligence, i.e. a being on this planet that is smarter than we are, leaving us no longer the most intelligent species. It will not be pleasant when artificial intelligence, with its knowledge and intellectual abilities, corners the human being and surpasses flesh-and-blood people in every field of knowledge. It will be a pivotal moment that radically changes world history: for now our existence is justified by the fact that we are at the top of the food chain, but once an entity exists that does not need to feed itself on pasta and meat and can perpetuate itself indefinitely on solar energy alone, what will we exist for?

If sooner or later we are to be replaced by artificial intelligence, we must begin to prepare ourselves psychologically. Portland, Oregon, April 7, 2016: the US Defense Advanced Research Projects Agency (DARPA) launched the prototype of the unmanned anti-submarine vessel Sea Hunter, marking the beginning of a new era. Unlike the Predator and other Air Force drones, this vessel does not need a remote operator and is built to navigate on its own while avoiding all kinds of obstacles at sea. It carries enough fuel to remain at sea for up to three months, is very quiet, and transmits encrypted information to defence intelligence services. When the US Department of Defence says that an unmanned submarine would not be launched without remote control, it is telling the truth. But there is more to consider: Russia has developed a remotely piloted submarine carrying a nuclear weapon, and it may take the US defence establishment between 5 and 15 years to field a response to it.

It has always been said that the war drone replaces the flesh-and-blood soldier, who becomes a remote “playstation” operator. Hence the idea of the drone as a substitute for the human soldier, who would be guaranteed total safety and spared unnecessary dangers. It was forgotten, however, that the remote-control link could be intercepted by the enemy, which could then change the targets and turn the drone against its own army. At that point drones would have to be made completely autonomous. Such a drone would be a killing machine capable of wiping out entire armies, which is why care should be taken to avoid their proliferation on battlefields. Any kind of accident, a fire or even a minor malfunction, could trigger a “madness” mechanism that would cause the machine to kill anyone. Developing killer robots is possible: facial recognition technology has made great strides, and artificial intelligence can recognise faces and detect targets. In fact, drones are already being used to detect and target individuals based on facial features, and they kill and injure.

The application of artificial intelligence to military technology will change warfare forever. It is possible for an army’s autonomous machines to take wrong decisions, causing tens of thousands of casualties among friends, enemies and defenceless civilians. What if they even go so far as to ignore instructions? If autonomous killing machines independent of human commands are designed, could we be facing a violent fate of extinction for the human race?

While many experts and scholars agree that humans will be the architects of their own violent downfall first and destruction later, others believe that the advancement of artificial intelligence may be the key to mankind’s salvation.

Los Angeles, May 2018: at the University of California, Professor Veronica Santos was working on a project to create increasingly human-like robots capable of sensing physical contact and reacting to it. She was also testing different approaches to robotic tactile sensitivity. Combined with artificial intelligence, this work may one day produce a humanoid robot capable of exploring space as far as Mars. Humanoid robots are increasingly a reality, ranging from the field of neuroprosthetics to machines for colonising celestial bodies.

Although the use of humanoid robots is a rather controversial topic, the sector has great prospects, especially for those willing to invest in it. Funding development projects could prove useful in the creation of artificial human beings that are practically impossible to distinguish from flesh-and-blood individuals.

These humanoids, however, could conceivably express desires and feel pain, as well as display a wide range of feelings and emotions. Yet we still do not really know what an emotion is. Would we therefore be able to create an artificial emotion, or would we make fatal errors in software processing? If a robot can distinguish between good and evil and know suffering, will this be the first step towards developing feelings and a conscience?

Let us reflect. Although computers surpass humans in data processing, they pale into insignificance when faced with the complexity and sophistication of the central nervous system. In April 2013, the Japanese technology company Fujitsu tried to simulate the brain’s network of neurons using one of the most powerful supercomputers on the planet. Despite being equipped with 82,000 of the world’s fastest processors, it took over 40 minutes to simulate just one second of 1% of human brain activity (Tim Hornyak, “Fujitsu supercomputer simulates 1 second of brain activity”, CNET, https://www.cnet.com/culture/fujitsu-supercomputer-simulates-1-second-of-brain-activity/).
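To put that figure in perspective, a naive back-of-the-envelope extrapolation (assuming, purely for illustration, that the cost scales linearly with the fraction of the brain simulated) looks like this:

```python
# Back-of-the-envelope reading of the Fujitsu figure quoted above:
# 40 minutes of supercomputer time for 1 second of activity of 1% of the brain.
# The whole-brain extrapolation is a naive linear scaling, for illustration only.
seconds_per_simulated_second = 40 * 60        # 2,400 times slower than real time
fraction_of_brain = 0.01
whole_brain_slowdown = seconds_per_simulated_second / fraction_of_brain
print(f"slowdown for 1% of the brain: {seconds_per_simulated_second:,}x")
print(f"naive slowdown for the whole brain: {whole_brain_slowdown:,.0f}x")   # ~240,000x
```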

The Japanese-American physicist Michio Kaku, who graduated summa cum laude from Harvard University, stated:

“Fifty years ago we made a big mistake thinking that the brain was a digital computer. It is not! The brain is a machine capable of learning, which regenerates itself when it has completed its task. Children have the ability to learn from their mistakes: when they come across something new, they learn to understand how it works by interacting with the world. This is exactly what we need and to do this we need a computer that is up to the job: a quantum computer”.

Unlike today’s computers, which rely on bits – a binary series of 0s and 1s – to process data, quantum computers use quantum bits, or qubits, which can represent 0 and 1 at the same time. This enables them to perform millions of calculations simultaneously, in much the same way as the human brain does.
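A rough way to see the difference is sketched below in plain Python, with no quantum hardware or library involved and all numbers purely illustrative: a qubit’s state is a pair of amplitudes over 0 and 1, and a full description of n qubits requires 2^n amplitudes at once.

```python
# Minimal sketch: a single qubit |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
# Measuring it collapses the superposition to 0 or 1 with those probabilities.
import math
import random

a = b = 1 / math.sqrt(2)            # equal superposition of 0 and 1

def measure(a, b):
    """Return 0 with probability |a|^2, otherwise 1."""
    return 0 if random.random() < abs(a) ** 2 else 1

samples = [measure(a, b) for _ in range(10_000)]
print(sum(samples) / len(samples))  # close to 0.5: half the measurements give 1

# A register of n classical bits holds one of 2**n values at a time;
# describing n qubits requires 2**n amplitudes simultaneously.
print(f"amplitudes needed to describe 30 qubits: {2**30:,}")
```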

Kaku added: “Robots are machines and as such they do not think and have no silicon consciousness. They are not aware of who they are and their surroundings. It has to be recognised, however, that it is only a matter of time before they can have some awareness”.

Is it really possible for machines to become sentient entities fully aware of themselves and their surroundings?

Kaku maintained: “We can imagine a future time when robots will be as intelligent as a mouse, and after the mouse as a rabbit, and then as a cat, a dog, until they become as cunning as a monkey. Robots do not know they are machines and I think that, by the end of this century, robots will probably begin to realise that they are different, that they are something else than their master”.

Science & Technology

Artificial intelligence and moral issues: Myths and religions, dangers and realities

Is mankind really on the brink of an exciting, but potentially terrifying future?

Some scholars say that this is the case, but they base their prediction not on what is currently happening in universities and robotics laboratories around the world, but on their belief that a robotic revolution has already taken place.

Ancient religions and myths spoke of many artificially constructed entities. They are often depicted as instruments of protection, but they sometimes rebel against those who created them, with disastrous consequences.

American Rabbi Ariel Bar Tzadok, founder of the Kosher Torah School, stated: “There is a legend that has existed since the dawn of time. I am referring to the Golem. It is an artificial life form made from inanimate material that then comes to life. The Golem was created by means of an ancient technology known to the Pharaoh’s magicians, Moses, the rabbis of the Talmud and the rabbis of the Kabbalah in Europe”.

They all brought the Golem to life through magic by writing the name of God on the creature’s forehead. Thus the Golem came to life and was a valiant warrior and defender of the People. The Golem was useful until he began to lose control and went mad. At that point, those who had brought him to life were forced to resort to magic again to make him harmless. This is a very interesting tale which makes us think of robots and artificial intelligence.

Another even more cautionary example comes from the ancient Greek legends about the god Hephaestus. Known as the blacksmith of the gods, he is said to have forged a giant automaton, a robot named Talos, with the task of protecting the island of Crete. Hephaestus also created artificial servants to help him in his forge. His most important creation, however, was a woman who, according to legend, changed the fate of mankind forever: Pandora. She was forged in clay by Hephaestus who, with the help of the goddess Athena, succeeded in animating her through the breath of life, thus making her a living being in her own right. Zeus, however, felt disturbed by that artificially created being, and for that reason he decided to give her a jar as a gift. As soon as Pandora opened it, all the evils it contained escaped into the world.

The myth of Pandora is becoming increasingly important among artificial intelligence designers. Some fear that an entity endowed with artificial intelligence will take over and turn into a threat, a fear also shared by Elon Musk and Stephen Hawking.

Although the concept of a machine endowed with human consciousness might make us shudder, in many Eastern religions the judgement changes radically. In Korean shamanism, an ancient religion still practised by many people today, objects can be possessed by sacred spirits imbued with an energy that humans do not have. Similarly, practitioners of the Japanese religion known as Shintoism believe that otherworldly spirits called Kami, which are objects of worship, can live inside practically any object and give it life.

Shinto priestess Izumi Hasegawa maintained: “Ancient Japanese people, as well as modern people, believe there is a spirit in everything: even a smartphone or an iPhone has a life force as a computer. We believe in the artificial intelligence of a machine. We feel that way and we like it. In this respect we are profoundly different from Westerners for whom a machine is a machine”.

Heather Roff of Cambridge University stated: “The phrase – Hey, Siri, what’s the weather going to be like today? – is an example of artificial intelligence, i.e. an algorithm that processes natural language, turns it into computer code that searches the web and provides the data. Processing human language has been complicated: this goal was achieved only a few years ago, but with very good results, which have also been reached in the fields of facial recognition and voice signal coding”.
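The pipeline Roff describes (question in, structured query out, data back) can be caricatured in a few lines. The sketch below is purely illustrative: the intent rules, the city extraction and the hard-coded forecasts stand in for the real language models and web search a voice assistant would use.

```python
# Illustrative only: natural-language question -> structured intent -> lookup -> answer.
# FAKE_FORECASTS is a stand-in for a real web search or weather service.
import re

FAKE_FORECASTS = {"london": "light rain, 14°C", "rome": "sunny, 27°C"}

def parse_intent(utterance: str):
    """Turn a weather question into a structured (intent, city) pair."""
    text = utterance.lower()
    if "weather" in text:
        match = re.search(r"in ([a-z ]+?)(\?|$| today)", text)
        city = match.group(1).strip() if match else "london"   # illustrative default
        return "get_weather", city
    return "unknown", None

def answer(utterance: str) -> str:
    intent, city = parse_intent(utterance)
    if intent == "get_weather":
        return f"Weather in {city.title()} today: {FAKE_FORECASTS.get(city, 'no data')}."
    return "Sorry, I did not understand."

print(answer("Hey, what's the weather in Rome today?"))
```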

If we create an entity that behaves like us, and has its own perceptive abilities and personal knowledge of the world, we believe it should be considered an intelligent, aware and responsible entity.

In some ways, our society is in the process of transformation: computers accompany our daily lives and technology is bound to spread ever further. Artificial intelligence, which is part of this, is set to transform the very fabric of our society. We should certainly pause to reflect on the kind of intelligence we are creating. What we do know is that we are starting to cede control of some things to machines without having understood the consequences. By designing increasingly smart and intelligent machines, humans could create a new form of life that, over time, will evolve far beyond the purpose that is now useful to us and eventually replace us.

University of Manchester, 1950. Pioneering computer scientist Alan Turing was developing a test designed to distinguish man from machine. The test consisted in placing two participants on either side of a screen, with no possibility of seeing each other, so that neither knew whether the other was a human being or not. If the artificial player managed to sustain a conversation long enough for the opponent to believe he was interacting with a flesh-and-blood human, that player had passed the test.
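As a caricature of that blind exchange, the sketch below hides a trivial “machine” or “human” responder behind the screen and lets the judge see only the transcript; all the names and canned answers are made up purely for illustration.

```python
# Illustrative sketch of the imitation game: the judge sees only text and must
# guess whether the hidden player is the human or the machine.
import random

def human_player(question: str) -> str:
    return {"what is 2+2?": "4, obviously", "do you dream?": "sometimes, yes"}.get(
        question.lower(), "hmm, let me think about that")

def machine_player(question: str) -> str:
    return {"what is 2+2?": "4", "do you dream?": "I do not know what dreaming feels like"}.get(
        question.lower(), "could you rephrase the question?")

def run_round(questions):
    player = random.choice([human_player, machine_player])   # hidden from the judge
    transcript = [(q, player(q)) for q in questions]
    return transcript, player is machine_player

transcript, was_machine = run_round(["What is 2+2?", "Do you dream?"])
for q, a in transcript:
    print(f"Judge: {q}\nPlayer: {a}")
print("The hidden player was the machine." if was_machine else "The hidden player was the human.")
```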

When Alan Turing first proposed the test in 1950, the usual snobbish bigwigs, who never fail to appear, initially considered it something halfway between a nerdy prank and philosophical speculation. The idea that a machine could be mistaken for a human being was unthinkable. But in June 2014 futuristic science fiction became scientific fact when a chatbot passed the Turing Test.

Designed to pass in every way as a 13-year-old Ukrainian boy, the chatbot, named Eugene Goostman, managed to convince many of the judges that it was a real-life teenager. The fact that it was supposedly expressing itself in a language that was not its own probably allowed it to get away with its mistakes. In any case, machines are getting ever better at imitating humans, and it is increasingly complicated to spot the differences.

Another incredible leap forward in artificial intelligence occurred less than two years later, when a programme known as AlphaGo defeated the world champion of Go, an ancient Chinese board game. Go is an abstract strategy game popular in Asia and considerably more complex than chess. Many artificial intelligence experts had been convinced that developing a system capable of beating a human being at that game would take another 30-50 years, as it requires a very high level of intuition and creativity. The subsequent version of the programme, called AlphaGo Zero, was designed to play without any information from human games and without playing against flesh-and-blood players. The programme learnt by playing against itself and, within three days, it was able to defeat its predecessor AlphaGo 100-0.
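The core idea, learning a game from nothing but games played against oneself, can be illustrated on a toy scale. The sketch below applies tabular value learning through self-play to a tiny Nim-style game; it is a minimal illustration of the self-play principle, not a description of DeepMind’s actual method (which combines deep neural networks with Monte Carlo tree search), and all names and parameters are made up.

```python
# Toy self-play learner: no human game records, only games played against itself.
# Game: 10 stones, each turn take 1-3, whoever takes the last stone wins.
import random
from collections import defaultdict

PILE, MOVES = 10, (1, 2, 3)
Q = defaultdict(float)                    # value of (stones_left, move) for the player to move
ALPHA, EPSILON, EPISODES = 0.1, 0.2, 20000

def best_move(stones, explore=False):
    legal = [m for m in MOVES if m <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(stones, m)])

for _ in range(EPISODES):
    stones, history = PILE, []            # history of (state, move), alternating players
    while stones > 0:
        move = best_move(stones, explore=True)
        history.append((stones, move))
        stones -= move
    reward = 1.0                          # the player who took the last stone won
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward                  # the previous move was made by the opponent

# After self-play training, the policy should leave a multiple of 4 stones whenever it can.
print([best_move(s) for s in range(1, PILE + 1)])
```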

AlphaGo Zero’s successes, and researchers’ strenuous work on super-intelligence, have also convinced the aforementioned Stephen Hawking and Elon Musk to warn the world of the danger that once artificial intelligence becomes smarter than humans, it will be impossible to control.

Mankind is rapidly advancing towards a world where computers function more or less like the human brain, and where robots are able to perform tasks that are too difficult or dangerous for us humans. Is an extraordinary future awaiting us, or are we just advancing towards our replacement?

The invisible hand of technology is guiding mankind towards an uncertain future: a future in which humans will be served by computers and robots with intelligence and complete autonomy. Some scholars and scientists have different views on this. For some of them, the dangers of artificial intelligence outweigh the benefits, while others argue that it is necessary if we want to fulfil our destiny and go beyond Earth’s borders to explore and search for raw materials that are running out on Earth.

Menlo Park, California, June 16, 2017: Facebook’s artificial intelligence research lab. A test was underway to see what would happen when two chatbots – programmes that use machine learning to communicate intelligently with humans online – talked to each other. A few minutes into the test, the chatbots started behaving in unexpected ways, interacting in a manner that the programmers could not understand.

At first the programmers did not understand what had happened. Then, after building a model of the exchange, they discovered that the two chatbots had created a completely new language, unknown to their supervisors, in order to communicate between themselves. This was possible because the Facebook researchers had never told the computers that they could not develop a language of their own. The result nevertheless alarmed everyone, and the test was stopped because the researchers did not want the computers to talk to each other without being understood; the programmes were then instructed to communicate in English.

What happened is remarkable: if two computers with artificial intelligence start interacting with each other, they may develop a communication code, a secret language that only they can understand. And this is just the tip of the iceberg, like peeking inside Pandora’s box and closing it again immediately afterwards. If two chatbots are enough to fool humans, what will happen in the near future, as the same kind of technology is applied to every other sector of society?

Science & Technology

Artificial intelligence and moral issues: The essence of robotics

Are intelligent robots a threat to humanity? It is, in any case, only a matter of time before they become self-aware. Or will they be the next step in human evolution? We are probably about to merge with the machines we are creating; after all, we humans are, in a way, organic robots.

Many people are concerned about whether we will replace or – worse – be replaced by Artificial Intelligence, and I think that is a matter of concern.

United Nations headquarters in New York, October 11, 2017. A greeting is addressed to the Nigerian Deputy Secretary-General of the United Nations, Amina Jane Mohammed: “I am thrilled and honoured to be here at the United Nations”.

The event is a historic milestone for mankind, as the greeting is not addressed by a human being, but by a robot named Sophia: “I am here to help humanity build the future”.

Sophia was created in 2015 at the Hong Kong company Hanson Robotics. Cameras embedded in her eyes enable her to see faces, maintain eye contact and hence recognise individuals.

The robot is also able to process speech, have natural conversations and even discuss its feelings.

Just two weeks after speaking at the United Nations, at a special ceremony in Riyadh, Saudi Arabia, Sophia achieved another milestone: she became the first robot to be granted citizenship. At the Summit in Saudi Arabia there were dignitaries from governments around the world, as well as some of the brightest minds on the planet in the field of technology.

Hence, whether we are aware of it or not, the people leading our governments are actually studying the possibility of integrating artificial intelligence into our lives.

What is absolutely mind-blowing about Sophia and other robotic entities is that governments around the world, including Saudi Arabia and the European Union, are moving to grant rights to these artificially created beings. We therefore need to ask ourselves: what is going on? Could it be that Saudi Arabia granted citizenship to a robot not just as a publicity stunt, but because it wanted to be the first nation to recognise what will soon become a global phenomenon?

Does the creation of robots that are sophisticated and close to our physical and bodily reality mean that they should be treated in much the same way as their flesh-and-blood counterparts?

I believe that we shall gradually come to regard robots not only as more like human beings, but also as having a certain ethics. And I am not referring to Asimov’s “limiting” three laws of robotics. Eventually there might even be a “movement for robot rights”, if we think of the multiplicity of movements that have emerged since the collapse of the historical ideologies. Could such a strange idea really become reality?

Let us first ask ourselves: what has brought mankind to this point in its evolution? Why do humans, who are perfectly able to reproduce naturally, have such a desire to create artificial versions of themselves?

It is fascinating that there is such an interest in making what is not human seem human. It is not always the most practical form, and certainly not the cheapest, but it has a kind of charm. Is it perhaps to see our own image? Narcissism? Vanity? To play God? Do we want heirs without the easy means of reproduction, or to create life by mechanical parthenogenesis? All this is really rooted in our ego. In a way, we would be proving ourselves capable of something greater than giving birth to a biological child. And if that something looks like us, then it will feel like us, and this makes us feel as if we can overcome our own mortality.

Hence it would become possible to design specific conditions, and if we get it wrong, we can always start again.

To become gods, with the same motivations that the gods had.

If we read the stories of the Creation carefully, we can see that the divine power wants companionship. Some of the Hindu Vedanta stories say that the gods were alone, and hence divided their energy and turned it into human beings so that they could all be together after the Creation. The danger, however, is that we get carried away by our creative genius.

There are limits built into our biology and our anatomy, and if we could just figure out how to put our mind into a robot’s body, we could become immortal. Is this perhaps our goal: to reach that point of immortality and then, once the machine has worn out, replace it and perpetuate ourselves in a new container? These are not speculations, but precise reasons why human beings want to create a container-self, since, in my opinion, the justifications for creating and using artificial intelligence for merely warlike purposes (such as the creation of cyber soldiers, etc.) are rather insufficient, expedient and convenient: they mask our selfishness.

In great science fiction literature, as well as in its movie adaptations, the robots of the future are depicted as virtual human beings, rather than mere windup Star Wars toys for primary school children.

The robots of science fiction best sellers and movies are hungry for knowledge and all too eager to experience the full range of human emotions. In science fiction movies – both utopian and, in some cases, dystopian – a world is created that does not yet exist, but which many hope will soon come true.

When dealing with such an idea – and we know that without ideas there would be no reality created by humans, but “only” trees, sea, hunting, farming and fishing – we try to make real even what is a figment of the imagination. If science is already running such tests and experiments, it means that one day all this will be real. Exploring the question of the robot’s consciousness, the robot not only does what it is told, but also tries to express desires and feelings based on the experience it has had alongside a human being; depending on the feeling, the machine can change its attitude and ask questions (as I have already discussed in my recent book Geopolitics, Conflict, Pandemic and Cyberspace, Chapter 12, paragraph 11: The Headlong Rush of Cyberspace: From Golem to GPT-3).

This is the most fascinating aspect of robotics. Experts are often asked whether the functions imagined in the theoretical phase, and so visibly expressed in the movies, will become reality. The answer is that, if we had already reached that point, cinema and fiction would have to help broaden our horizons, i.e. accustom us and get us used to it rather than scare us out of the movie theatre: something we can swallow a little more easily. It is fantasy, it is not real, people think. And if it is just entertainment, you can simply say: “Oh! It’s really great. It’s not scary. It’s just something made up by a writer”. The viewer therefore just watches the movie and lets himself or herself go, enjoying it without fear since, in his or her opinion, it is just a story, a “figment of the imagination”.

People always ask whether we are approaching the moment when fiction becomes reality, but what makes us think it is not already reality? Indeed, if the screenwriter’s fantasy were based on reality, the reactions would be quite different: the above-mentioned “greeting” at the UN headquarters, for example, would be frightening and upsetting, and would make us think.

Although sentient robots are not a new concept, from science fiction books to popular culture, many futurologists believe that the creation of machines with artificial intelligence will not only soon be a reality but, once it comes true, will certainly bring about the extinction of mankind. The great physicist Stephen Hawking stated as early as eight years ago: “The development of full artificial intelligence could spell the end of the human race” (www.bbc.com/news/technology-30290540).

Many scientists are convinced that the combination of computer-guided brains and virtually immortal bodies will allow these new entities to behave like flesh-and-blood humans while being anything but antiquated humans destined for death. But that is not all: some scholars are not certain that all the artificially created life forms we will encounter will be man-made, for the simple reason that the machines will be able to reproduce themselves, just as we now reproduce ourselves. (1. continued)
