
Science & Technology

The Ethical and Legal Issues of Artificial Intelligence


Ethics and law are inextricably linked in modern society, and many legal decisions arise from the interpretation of various ethical issues. Artificial intelligence adds a new dimension to these questions. Systems that use artificial intelligence technologies are becoming increasingly autonomous in terms of the complexity of the tasks they can perform, their potential impact on the world and the diminishing ability of humans to understand, predict and control their functioning. Most people underestimate the real level of automation of these systems, which have the ability to learn from their own experience and perform actions beyond the scope of those intended by their creators. This causes a number of ethical and legal difficulties that we will touch upon in this article.

Ethics and Artificial Intelligence

There is a well-known thought experiment in ethics called the trolley problem. The experiment raises a number of important ethical issues that are directly related to artificial intelligence. Imagine a runaway trolley going down the railway lines. There are five people tied to the track ahead. You are standing next to a lever. If you pull it, the trolley will switch to a different set of tracks. However, there is another person tied to that set of tracks. Do you pull the lever or not?


There is no clear-cut answer to this question. What is more, there are numerous situations in which such a decision may have to be made [1]. And different social groups tend to give different answers. For example, Buddhist monks are overwhelmingly willing to sacrifice the life of one person in order to save five, even if presented with a more complicated variation of the trolley problem.

As for artificial intelligence, such a situation could arise if, for example, a self-driving vehicle is travelling along a road and an accident is unavoidable. The question thus arises as to whose lives should take priority: those of the passengers, the pedestrians, or neither. The Massachusetts Institute of Technology has created a special website that deals with this very issue: users can test various scenarios for themselves and decide which courses of action would be the most justifiable.
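The dilemma can be made concrete with a toy sketch (purely illustrative: the policy names, actions and harm counts below are invented, and no real manufacturer's logic is modelled). Whatever stance a manufacturer or regulator takes, it must ultimately be written down as an explicit rule:

```python
def choose(outcomes, policy):
    """Pick an action under a given (hypothetical) collision policy.

    outcomes maps each action to a tuple:
    (passengers harmed, pedestrians harmed).
    """
    if policy == "protect_passengers":
        # Minimise harm to passengers first, pedestrians second
        # (tuples compare lexicographically).
        return min(outcomes, key=lambda a: outcomes[a])
    if policy == "minimise_total_harm":
        # Utilitarian: minimise the total number of people harmed.
        return min(outcomes, key=lambda a: sum(outcomes[a]))
    raise ValueError("no policy chosen -- but the car must still act")

# An unavoidable-accident scenario with two possible actions:
scenario = {"stay_course": (0, 2), "swerve": (1, 0)}
choose(scenario, "protect_passengers")   # -> "stay_course"
choose(scenario, "minimise_total_harm")  # -> "swerve"
```

The point is not the toy policies themselves but that, once coded, the criteria are explicit and auditable, which is precisely what makes such a choice open to legal challenge.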

Other questions also arise in this case: What actions can be allowed from the legal point of view? What should serve as the basis for such decisions? Who should ultimately be held responsible? This problem has already been addressed by companies and regulators. Representatives of Mercedes, for example, have said outright that their cars will prioritize the lives of passengers. The Federal Ministry of Transport and Digital Infrastructure of Germany responded immediately, anticipating future regulation by stating that making such a choice based on a set of criteria would be illegal, and that the car manufacturer would be held responsible for any injury or loss of life.

Other countries may go a different route. Take the Chinese Social Credit System, for example, which rates citizens on how law-abiding and how useful to society they are. Those with low ratings face sanctions. What is stopping the Chinese government from introducing a law that forces manufacturers of self-driving vehicles to sacrifice the lives of lower-rated citizens in the event of an unavoidable accident? Face recognition technologies and access to the relevant databases make it perfectly possible to identify potential victims and compare their social credit ratings.

The Main Problems Facing the Law

The legal problems run even deeper, especially in the case of robots. A system that learns from the information it receives from the outside world can act in ways that its creators could not have predicted [2], and predictability is crucial to modern legal approaches. What is more, such systems can operate independently of their creators or operators, which complicates the task of determining responsibility. In short, they combine unpredictability with the ability to act autonomously while not themselves being subject to legal responsibility [3].

There are numerous options in terms of regulation, including regulation based on existing norms and standards. For example, technologies that use artificial intelligence can be regulated as objects of copyright or as property. Difficulties arise here, however, if we take into account the ability of such technologies to act autonomously, against the will of their creators, owners or proprietors. In this regard, it is possible to apply the rules that govern a special kind of property, namely animals, since the latter are also capable of autonomous actions. In Russian law, the general rules of ownership are applied to animals (Article 137 of the Civil Code of the Russian Federation); the issue of responsibility, therefore, comes under Article 1064 of the Civil Code of the Russian Federation: injury inflicted on the person or property of an individual shall be subject to full compensation by the person who inflicted the damage.

Proposals on applying the law on animals have been made [4], although they are somewhat limited. First, the application of legislation by analogy is unacceptable within the framework of criminal law. Second, these laws were created primarily for household pets, which we can reasonably expect not to cause harm under normal circumstances. There have been calls in more developed legal systems to apply rules similar to those that regulate the keeping of wild animals, since the latter are more stringent [5]. The question arises here, however, of where to draw the line, given the specific features of artificial intelligence mentioned above. Moreover, stringent rules may actually slow down the introduction of artificial intelligence technologies, due to the unexpected liability risks for creators and inventors.

Another widespread suggestion is to apply norms similar to those that regulate the activities of legal entities [6]. Since a legal entity is an artificially constructed subject of the law [7], robots can be given similar status. The law can be flexible enough to grant rights to just about anyone, and it can likewise restrict them. Historically, for example, slaves had virtually no rights and were effectively property. The opposite situation can also be observed, in which objects that show no outward signs of agency are vested with rights. Even today, there are examples of unusual objects being recognized as legal entities, in developed and developing countries alike. In 2017, a law was passed in New Zealand recognizing the Whanganui River as a legal entity, with all the rights, powers and obligations of a legal person. The law thus transformed the river from a possession or piece of property into a legal entity, expanding the boundaries of what can and cannot be considered property. In 2000, the Supreme Court of India recognized the main sacred text of the Sikhs, the Guru Granth Sahib, as a legal entity.

Even if we do not consider the most extreme cases and cite ordinary companies as an example, we can say that some legal systems make legal entities liable under civil and, in certain cases, criminal law [8]. Without determining whether a company (or state) can have free will or intent, or whether they can act deliberately or knowingly, they can be recognized as legally responsible for certain actions. In the same way, it is not necessary to ascribe intent or free will to robots to recognize them as responsible for their actions.

The analogy with legal entities, however, is problematic, as the concept of the legal entity exists in order to carry out justice in a speedy and effective manner. But the actions of legal entities always go back to those of a single person or group of people, even if it is impossible to determine exactly who they are [9]. In other words, the legal responsibility of companies and similar entities is linked to the actions performed by their employees or representatives. What is more, legal entities are only deemed criminally liable if an individual who performed the illegal action on behalf of the legal entity can be identified [10]. The actions of artificial intelligence-based systems will not necessarily be traceable to the actions of an individual.

Finally, legal norms on sources of increased danger can be applied to artificial intelligence-based systems. In accordance with Paragraph 1 of Article 1079 of the Civil Code of the Russian Federation, legal entities and individuals whose activities are associated with increased danger for the surrounding population (the use of transport vehicles, mechanisms, etc.) shall be obliged to redress the injury inflicted by the source of increased danger, unless they prove that the injury was inflicted as a result of force majeure or through the intent of the injured person. The problem is identifying which artificial intelligence systems can be deemed sources of increased danger. The issue is similar to the one mentioned above regarding domestic and wild animals.

National and International Regulation

Many countries are actively creating the legal conditions for the development of technologies that use artificial intelligence. For example, the “Intelligent Robot Development and Dissemination Promotion Law” has been in place in South Korea since 2008. The law is aimed at improving the quality of life and developing the economy through the creation and promotion of a strategy for the sustainable development of the smart robot industry. Every five years, the government works out a basic plan to ensure that these goals are achieved.

I would like to pay particular attention here to two recent examples: France, which has declared its ambitions to become a European and world leader in artificial intelligence; and the European Union, which has put forward advanced rules for the regulation of smart robots.

France

In late March 2018, President of France Emmanuel Macron presented the country’s new national artificial intelligence strategy, which involves investing 1.5 billion euros over the next five years to support research and innovation in the field. The strategy is based on the recommendations of a report prepared under the supervision of the French mathematician and National Assembly deputy Cédric Villani. The decision was made to aim the strategy at four specific sectors: healthcare; transport; the environment and environmental protection; and security. The reasoning is to focus France’s comparative advantages and competencies in artificial intelligence on sectors where its companies can play a key role at the global level, and where these technologies matter for the public interest.

Seven key proposals are given, one of which is of particular interest for the purposes of this article: making artificial intelligence more open. It is true that the algorithms used in artificial intelligence are opaque and, in most cases, trade secrets. Algorithms can also be biased: in the process of self-learning, they can absorb and adopt the stereotypes that exist in society, or that are transferred to them by developers, and make decisions based on them. There is already legal precedent for this. A defendant in the United States received a lengthy prison sentence partly on the basis of information obtained from an algorithm predicting the likelihood of repeat offences. The defendant’s appeal against the use of the algorithm in sentencing was rejected because the criteria used to evaluate the possibility of repeat offences were a trade secret and were therefore not disclosed. The French strategy proposes developing transparent algorithms that can be tested and verified, defining the ethical responsibilities of those working in artificial intelligence, creating an ethics advisory committee, and so on.
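How an algorithm absorbs societal bias during learning can be shown with a minimal sketch (the data, group labels and outcomes are invented; this does not model any real sentencing or scoring system). A learner that merely summarises historically skewed decisions reproduces the skew as policy:

```python
from collections import Counter

def train(decisions):
    """'Learn' the majority historical outcome for each group."""
    counts = {}
    for group, outcome in decisions:
        counts.setdefault(group, Counter())[outcome] += 1
    # The model is simply: repeat whatever was most common per group.
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

# Invented history in which group "B" was denied far more often:
history = ([("A", "approve")] * 8 + [("A", "deny")] * 2
           + [("B", "approve")] * 3 + [("B", "deny")] * 7)

model = train(history)
model["A"]  # -> "approve"
model["B"]  # -> "deny": the historical skew has become the rule
```

A transparent, testable algorithm, as the French strategy proposes, at least allows such a skew to be detected; a trade-secret one does not.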

European Union

The creation of the resolution on the Civil Law Rules on Robotics marked the first step towards the regulation of artificial intelligence in the European Union. A working group on legal questions related to the development of robotics and artificial intelligence in the European Union was established back in 2015. The resolution is not a binding document, but it does give a number of recommendations to the European Commission on possible actions in the area of artificial intelligence, not only with regard to civil law, but also to the ethical aspects of robotics.

The resolution defines a “smart robot” as “one which has autonomy through the use of sensors and/or interconnectivity with the environment, which has at least a minor physical support, which adapts its behaviour and actions to the environment and which cannot be defined as having ‘life’ in the biological sense.” The proposal is made to “introduce a system for registering advanced robots that would be managed by an EU Agency for Robotics and Artificial Intelligence.” As regards liability for damage caused by robots, two options are suggested: “either strict liability (no fault required) or on a risk-management approach (liability of a person who was able to minimise the risks).” Liability, according to the resolution, “should be proportionate to the actual level of instructions given to the robot and to its degree of autonomy. Rules on liability could be complemented by a compulsory insurance scheme for robot users, and a compensation fund to pay out compensation in case no insurance policy covered the risk.”

The resolution proposes two codes of conduct for dealing with ethical issues: a Code of Ethical Conduct for Robotics Engineers and a Code for Research Ethics Committees. The first code proposes four ethical principles in robotics engineering: 1) beneficence (robots should act in the best interests of humans); 2) non-maleficence (robots should not harm humans); 3) autonomy (human interaction with robots should be voluntary); and 4) justice (the benefits of robotics should be distributed fairly).

The examples provided in this article thus demonstrate, among other things, how social values influence attitudes towards artificial intelligence and its legal implementation. Our attitude to autonomous systems (whether they are robots or something else), and our reinterpretation of their role in society and their place among us, can therefore have a transformational effect. Legal personality reflects what is important for society and allows the decision to be made as to whether “something” is a valuable and reasonable object for the purposes of possessing rights and obligations.

Due to the specific features of artificial intelligence, suggestions have been put forward regarding the direct responsibility of certain systems [11]. According to this line of thought, there are no fundamental reasons why autonomous systems should not be legally liable for their actions. The question remains, however, about the necessity or desirability of introducing this kind of liability (at least at the present stage). It is also related to the ethical issues mentioned above. Perhaps making programmers or users of autonomous systems liable for the actions of those systems would be more effective. But this could slow down innovation. This is why we need to continue to search for the perfect balance.

In order to find this balance, we need to address a number of issues. For example: What goals are we pursuing in the development of artificial intelligence? And how effective will it be? The answers to these questions will help us to prevent situations like the one that arose in Russia in the 17th century, when an animal (specifically a goat) was exiled to Siberia for its actions [12].

First published at our partner RIAC

  1. See, for example, D. Edmonds, Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us About Right and Wrong, Princeton University Press, 2013.
  2. Asaro, P., “From Mechanisms of Adaptation to Intelligence Amplifiers: The Philosophy of W. Ross Ashby,” in Wheeler, M., Husbands, P., and Holland, O. (eds.), The Mechanical Mind in History, Cambridge, MA: MIT Press, pp. 149–184.
  3. Asaro, P. The Liability Problem for Autonomous Artificial Agents // AAAI Symposium on Ethical and Moral Considerations in Non-Human Agents, Stanford University, Stanford, CA, March 21–23, 2016, p. 191.
  4. Arkhipov, V., Naumov, V. On Certain Issues Regarding the Theoretical Grounds for Developing Legislation on Robotics: Aspects of Will and Legal Personality // Zakon. 2017, No. 5, p. 167.
  5. Asaro, P. The Liability Problem for Autonomous Artificial Agents, p. 193.
  6. Arkhipov, V., Naumov, V. Op. cit., p. 164.
  7. See, for example, Winkler, A. We the Corporations: How American Businesses Won Their Civil Rights. Liveright, 2018. See a description here: https://www.nytimes.com/2018/03/05/books/review/adam-winkler-we-the-corporations.html
  8. In countries that use the Anglo-Saxon legal system, in the European Union and in some Middle Eastern countries. This kind of liability also exists in certain former Soviet countries: Georgia, Kazakhstan, Moldova and Ukraine. It does not exist in Russia, although it is under discussion.
  9. Brożek, B., Jakubiec, M. On the Legal Responsibility of Autonomous Machines // Artificial Intelligence and Law. 2017, No. 25(3), pp. 293–304.
  10. Khanna, V.S. Corporate Criminal Liability: What Purpose Does It Serve? // Harvard Law Review. 1996, No. 109, pp. 1477–1534.
  11. Hage, J. Theoretical Foundations for the Responsibility of Autonomous Agents // Artificial Intelligence and Law. 2017, No. 25(3), pp. 255–271.
  12. Pagallo, U., The Laws of Robots: Crimes, Contracts, and Torts. Springer, 2013, p. 36.


Organisations that embed cybersecurity into their business strategy outperform their peers

MD Staff


Organisations that take a business-driven cybersecurity approach to their digital initiatives achieve better outcomes and outperform their peers, according to PwC’s May 2019 Digital Trust Insights Survey.

The global survey of more than 3,000 executives and IT professionals worldwide found that the top 25% of all respondents – market leaders known as “trailblazers” – are not only leading the way on cybersecurity but also delivering more value and better business outcomes.

Among respondents who say growing revenue is the top value sought from digital transformation efforts, nearly nine in 10 trailblazers say they are getting a payoff that meets or exceeds their expectations (compared to 66% of the other respondents).

Trailblazers are also significantly more optimistic about the potential growth in revenue and profit margin for their companies, with 57% expecting revenue to grow by 5% or more, and 53% expecting profit margin to grow by 5% or more.

The survey revealed key demographic information about trailblazers. Many are large companies; 38% of respondents from companies worth at least US$1 billion are trailblazers. The financial services (FS) industry and the technology, media, and telecommunications (TMT) sector are particularly well represented in the leader group. Thirty-three percent of FS respondents and 30% of TMT respondents are trailblazers, compared to roughly a quarter of the survey base in other industries.

Geographically, just 21% of EMEA (Europe, the Middle East and Africa) respondents are trailblazers, compared to 30% in the Americas, and 30% in Asia Pacific.

The leading behaviours that set trailblazers apart from their corporate peers include aligning their business and cybersecurity strategies, taking a risk-based approach, and coordinating the teams that manage risk. Key findings from PwC’s Digital Trust Insights survey illustrate the edge that trailblazers maintain in all three areas:

Connected on strategy: 65% of trailblazers strongly agree their cybersecurity team is embedded in the business, conversant in the organisation’s business strategy and has a cybersecurity strategy that supports business imperatives (vs. 15% of others)

Connected on a risk-based approach: 89% of trailblazers say their cybersecurity teams are consistently involved in managing the risks inherent in the organisation’s business transformation or digital initiatives (vs. 41% of others)

Coordinated in execution: 77% of trailblazers strongly agree their cybersecurity team has sufficient interaction with senior leaders to develop an understanding of the company’s risk appetite around core business practices (vs. 22% of others)

“By focusing on building digital trust, trailblazers are driving more proactive, pre-emptive and responsive actions to embed these strategies into the business, as opposed to their peers, who primarily look to minimise the operational impacts of cyber threats in a reactive manner,” comments TR Kane, PwC US Strategy, Transformation & Risk Leader.

More than eight in 10 trailblazers say they have anticipated a new cyber risk to digital initiatives and managed it before it affected their partners or customers (compared to six in 10 of others).

“Organisations that take a proactive approach to cybersecurity and embed it into every corporate action will be best placed to deliver the advantages of digital transformation, manage related risks and build trust,” adds Grant Waterfall, EMEA Cybersecurity and Privacy Leader, PwC UK.

“Our research highlights the need for organisations to embed their cybersecurity teams within the business to support strategic goals. It’s not just about protecting assets – it’s about being a strategic partner in the organisation,” adds Paul O’Rourke, Asia Pacific Cybersecurity and Privacy Leader, PwC Australia.


Business in Need of Cyber Rules

Anastasia Tolstukhina


For more than 20 years, countries have been struggling to introduce a set of rules of conduct and liability requirements for digital space users. Progress in designing a code of cyber conduct is all the more relevant since digitalization is sweeping the planet at breakneck speed, creating new risks along with new opportunities. Businesses that are confronted with new challenges and threats in the digital space are putting forward their own initiatives, thereby pressing governments to speed up the process of adopting an international cyber code.

Why is the business community interested in setting rules in the cyber environment? There are many reasons for this.

Firstly, the quantity and quality of hacker attacks on the private sector increase every year. Hackers target enterprises of every size, from small businesses to technological giants. When it was hit by the NotPetya virus, the world’s largest container carrier, Maersk, sustained $300 million in damage and had to spend nearly $1 billion on recovery. In total, according to Sberbank’s estimates, the damage to the global economy from hacker attacks could reach about $2.5 trillion in 2019, and as much as $8–10 trillion by 2022.

Secondly, many technology-oriented companies, facing a lack of trust on the part of government agencies, experience severe difficulties in promoting their business projects abroad. At present, the UK, Norway, Poland and other countries are debating whether Huawei should be allowed to build fifth-generation (5G) mobile communication networks, as the company is suspected of stealing intellectual property and of espionage. The US, Australia and New Zealand have already banned the use of 5G equipment from Huawei.

Not only Chinese companies face distrust. Google, Apple, Microsoft, Kaspersky Lab, and many others are often accused of illegally spying on people.

Thirdly, IT companies are forced to pay huge sums to protect their customers against hacker attacks and guarantee information security. Microsoft allocates more than $1 billion for this purpose yearly.

In the absence of a political solution to ensure international information security, private companies, keen to safeguard themselves and their customers, have chosen to negotiate with each other on information security cooperation and to launch their own initiatives. A business track on information security is thus coming into existence, running parallel to the intergovernmental one.

In February 2017, Microsoft’s President Brad Smith launched the Digital Geneva Convention initiative. The Convention is expected to oblige governments not to carry out cyber attacks against private sector companies or the critical infrastructure of other states, and not to use hacker attacks to steal intellectual property.

Overall, the document formulates six basic principles of international cybersecurity:

  1. No targeting of tech companies, private sector, or critical infrastructure.
  2. Assist private sector efforts to detect, contain, respond to, and recover from events.
  3. Report vulnerabilities to vendors rather than stockpile, sell, or exploit them.
  4. Exercise restraint in developing cyber weapons and ensure that any developed are limited, precise, and not reusable.
  5. Commit to non-proliferation activities for cyber weapons.
  6. Limit offensive operations to avoid a mass event.

However, while the Digital Geneva Convention remains on paper, 34 technology companies, including Microsoft, signed the Cybersecurity Tech Accord in April 2018 without waiting for decisions at the government level. The largest-ever group of companies thus committed itself to protecting customers around the world from cybercriminals.

Cybersecurity Tech Accord members have called for a ban on any agreements on non-disclosure of vulnerabilities between governments and contractors, brokers, or cybersecurity experts; they also call for more funding for vulnerability detection and research.

In addition, signatories of the agreement have come up with a series of recommendations to strengthen confidence-building measures, based on proposals of the UN and the OSCE.

Such measures include:

-Develop shared positions and interpretations of key cybersecurity issues and concepts, which will facilitate productive dialogue and enhance mutual understanding of cyberspace and its characteristics.

-Encourage governments to develop and engage in dialogue around cyber warfare doctrines.

-Develop a list of facilities that are off-limits for cyber-attacks, such as nuclear power plants, air traffic control systems, banking sectors, and so forth.

-Establish mechanisms and channels of communication to respond to requests for assistance from another state whose critical infrastructure is subject to malicious ICT acts (e.g. by organizing tabletop exercises).

By now, Cybersecurity Tech Accord has been signed by 90 companies, including Microsoft, Facebook, Cisco, Panasonic, Dell, Hitachi, and others.

Another initiative was presented in 2018 by Siemens, which came up with the Charter of Trust. The Charter, which was signed by 16 companies, including IBM, AIRBUS, NXP, and Total, urges companies to set up strict rules and standards to foster trust in ICT and contribute to further development of digitalization.

Facebook has become part of the process too. In late March 2019, Mark Zuckerberg, the founder and CEO of Facebook, urged governments to become more actively involved in regulating the Internet. In particular, Zuckerberg spoke in favor of introducing new standards for the Internet and social networks. Such standards would be useful for guaranteeing the protection of personal data, preventing attempts to influence elections or disseminate unwanted information, and solving the problem of data portability.

Another initiative worth mentioning is the creation in 2014 of the Industrial Internet Consortium (IIC), founded on the initiative of AT&T, Cisco, GE, IBM, and Intel. This is a non-profit, open-membership group that seeks to remove barriers between different technologies in order to maximize access to big data and promote the integration of the physical and digital environments.

Some initiatives are coming from the Russian private sector. In particular, since 2017, Norilsk Nickel has been active on the international scene promoting its Information Security Charter for critical industrial facilities. The Charter’s main provisions include condemning the use of ICT for criminal, terrorist or military purposes; supporting efforts to create warning and detection systems and to assist in the aftermath of network attacks; and sharing best practices in information security.

In turn, Sberbank has launched an initiative to hold the world’s largest International Cybersecurity Congress. Last year, such a congress took place with the participation of 681 companies from 51 countries. The second such Congress is scheduled for this June. The Forum serves as an inter-sectoral platform that promotes global dialogue on the most pressing issues of ensuring information security in the context of globalization and digitalization.

What most business initiatives have in common is a call for developing confidence-building measures and rules of conduct in the digital space. The business community also recognizes the need to adjust international law to the new realities of the digital economy.

Private sector initiatives can readily be aligned with initiatives put forward by countries within the framework of the UN. After all, by and large, governments pursue the same goals as business in this area. The use of ICT for peaceful purposes, confidence-building measures, the sharing of information about vulnerabilities: all of this matters both to business and to most states.

Fortunately, the global discussion under the aegis of the UN on issues related to International Information Security is getting back on track after a pause of about one year. From now on, it will be attended by representatives of the private sector. According to the resolution (A/RES/73/27), the mandate of the future Open-Ended Working Group (OEWG) allows for the possibility of holding inter-session consultative meetings with representatives of businesses, non-governmental organizations and the scientific community to exchange opinions on issues within the group’s mandate. The first inter-sessional meeting with representatives of global business is scheduled for November 2019.

In conclusion, we would like to remark that the issue of information security is dynamic and, for this reason, can be adequately addressed only through close cooperation between governments and technology companies, since it is the latter that keep pace with the development of technologies and drive the digital economy. Governments should keep a close eye on the initiatives of non-state actors and put the most useful proposals on the agenda for discussion at international forums. Moreover, once adopted and approved at the government level, these standards and regulations should have legal force rather than being merely recommendatory; this is the only way to guarantee order in the cyber environment.

First published in our partner RIAC


Technology for Social Good in India

MD Staff


From using drones to plan water supply schemes in hard-to-reach locations, to deploying satellite imagery to enhance land usage, to using mobile phones to track children’s health, technology is changing the way we live. The World Bank is supporting several interventions where new-age technology is being used for social good, giving policymakers a new tool to improve governance and the quality of our lives.

Making farmers resilient

Digital applications are helping farmers in Bihar and Madhya Pradesh make faster and better decisions on crop planning based on weather conditions, soil and other indicators.

The $12.67 million Sustainable Livelihoods and Adaptation to Climate Change project, which started in 2015, has so far empowered more than 8,000 farmers to adopt climate-resilient practices.

Prioritizing interventions

Satellite images taken from a height of 900 km in Karnataka capture crucial data such as land use and land cover, groundwater prospects, and soil characteristics. When this data is combined with rainfall patterns and literacy rates, it helps experts and communities prioritize action plans, such as those for soil and water conservation.

Geographic information system (GIS) technology can also map nutrient deficiencies in the soil, which helps with crop planning.

The Karnataka Watershed Development Project, known locally as Sujala, covered over half a million hectares of land in seven predominantly rain-fed districts in Karnataka between 2001 and 2009 and was the first to deploy the use of satellite remote sensing and GIS mapping effectively over a large area.

Supplying Water in Challenging Terrain

Shimla city in Himachal Pradesh gets water only once every two days for a few hours, while bulk water has to be pumped up over 1,400 meters, creating a high cost of service.

To tackle this, drones have been used to capture high-resolution images of the high-altitude, challenging topography under the World Bank's Shimla Water Supply and Sewerage Service Delivery Reform Project. This, along with GIS technologies, has helped the state government prepare a 24×7 water supply model for the city that addresses issues such as pressure management, transmission and distribution networks, and the identification of illegal connections.

Tracking health

All across India, approximately 150,000 Anganwadi workers are using smartphones to track growth and nutrition in children. Photos of the hot lunch served to the children at health and nutrition centers, for example, can now easily be shared with block, district and state-level officials.

“It’s easier to work with mobiles than registers,” said an Anganwadi worker in Madhya Pradesh.

The World Bank has so far invested about $306 million in nutrition through the ICDS Systems Strengthening and Nutrition Improvement Project.

In Chhattisgarh, a mobile-based application called Nutri-Click provides real-time, need-based, one-on-one counseling on appropriate nutrition and care practices to pregnant women, mothers of young children, their caregivers and family members.

The program has so far helped over 4,000 pregnant and lactating women.

Digitizing Medical Records

Doctors in 36 public hospitals in Tamil Nadu can now access, collect and analyze critical health data for quick and timely interventions with the click of a button. The system also helps with retrieval of manual records as well as maintenance and management of medical equipment, making the entire process transparent and convenient.

The $110.3 million Tamil Nadu Health Systems Project was active in five of Tamil Nadu's districts. A second phase will now aim to cover another 222 hospitals across the state's remaining 25 districts.

e-Governance

In 164 municipalities in Karnataka, property owners are now able to calculate their property taxes online; 10 million birth and death records are now online and searchable; and over 390,000 citizen complaints were lodged over 10 months—98 percent of which were redressed.

Through the Karnataka Municipal Reforms Project, municipal revenues have increased while the interface between citizens and local administrations has vastly improved.

Vocational Training

The World Bank's Vocational Training Improvement Project has helped digitize activities such as admissions, examination management and certification in Industrial Training Institutes (ITIs) under the National Council of Vocational Training.

The portal provides detailed records from more than 13,000 public and private ITIs across the country, including data on courses offered, admissions, examinations and placements.

So far, more than 150,000 e-certificates have been issued to past trainees, and over 2 million certified trainees have received online certificates, saving time and effort.

World Bank



Copyright © 2019 Modern Diplomacy