The session “Open Data and AI”, organized on 5 March 2019 within the framework of “Principles for AI: Towards a Humanistic Approach?”, requested UNESCO to continue leveraging its convening power to raise awareness around artificial intelligence and big data, support the development of inclusive Open Data policy, and support both upstream and downstream capacity enhancement.
The workshop noted that data is an essential element in the development of artificial intelligence. The availability of large amounts of user data through mobile phone services and the Internet of Things, among other sources, has led to a variety of AI applications and services. However, many challenges remain, encompassing issues of access, privacy, discrimination and openness. Several of these challenges fall within UNESCO’s mandate of building inclusive knowledge societies for peace and sustainable development.
Ms Dorothy Gordon, Chair of the Information for All Programme at UNESCO, pointed out that “despite the fact that we have a huge interest from many donors, we do not seem to have done very much systematically to prepare African countries to have useful data … [and] in a searchable format that can be combined with other sources to … yield something [beneficial]”. She stressed the need to bridge gaps in the availability of legacy data, the setting of policy standards, and the capabilities of people to work with local data sets.
Ms Constance Bommelaer, Senior Director of Global Internet Policy and International Organizations at the Internet Society, underlined ‘data commons’ as an interesting solution to explore, but one that needs a nuanced discussion around ownership and privacy. She highlighted the need to challenge existing notions of competition and called for a “reconsideration of market values and monopolies”. Stressing the importance of access, she shared the findings of a joint study by ISOC and UNESCO showing that a combination of local language content and better access policies yields immediate economic benefits at the local level.
As a government representative, Ms Veronika Bošković Pohar, Deputy Permanent Delegate of the Republic of Slovenia to UNESCO, discussed ‘regulatory sandboxes’ as a means of providing a controlled environment for AI. She hoped that Slovenia’s proposed Category 2 Centre on Artificial Intelligence would be able to inform decision-making, provide insights into the interface between technology and society, and create mechanisms for continuous monitoring and reporting to reduce the risks AI poses to vulnerable groups.
Speaking as a panelist representing a knowledge organization, Prof. Maria Fasli, UNESCO Chair in Analytics and Big Data at the University of Essex, noted the general lack of understanding of AI and big data and expressed concern over the difficulty the academic community faces in accessing data collected by large technology firms for research purposes. She further highlighted the need for high-quality, representative data to ensure that algorithms are not biased.
Drawing on Netexplo’s experience in tracking innovation trends across the world, Mr Marcus Goddard, Vice President of Intelligence at Netexplo Observatory, underlined that “access to data is a necessary but not sufficient condition for innovation”. Pointing to general trends in openness, he noted that openness is not Silicon Valley’s top priority and that convenience seems to be the norm when new products and services are launched. He highlighted that even as data is being used in smart cities to improve access and sustainability, it is also increasing the threat of surveillance.
Mr Philippe Petitpont, Co-founder of Newsbridge, a Paris-based AI and media startup, presented the scale of the data problem facing the media today. He remarked that media companies gather 30 million hours of video content every year, a figure that does not include social media videos. Extracting useful insights from these videos is a cumbersome task, albeit one that can be performed by AI; Newsbridge leverages AI to help journalists process large amounts of data at lower cost.
The session brought the viewpoints of multiple stakeholders to the discussion table, and the key concerns raised included:
- Urgent need to increase awareness around artificial intelligence and big data;
- Developing strategies to strengthen access to data for training machine learning algorithms;
- Supporting both upstream and downstream capacity enhancement to leverage data for benefit;
- Involving private sector actors in the discussion around access to data and data monopolies; and
- Creating systems for addressing discrimination and biases originating through data and algorithms.
The panel members congratulated UNESCO for facilitating important discussions around issues of rights, openness, access and multistakeholder participation in the governance of data, and hoped to engage with the Organization in further developing work on Open Data and AI.
US Blacklist of Chinese Surveillance Companies Creates Supply Chain Confusion
The United States Department of Commerce’s decision to blacklist 28 Chinese public safety organizations and commercial entities hit some of China’s most dominant vendors in the security industry. Of the eight commercial entities added to the blacklist, six are among China’s most successful digital forensics, facial recognition, and AI companies. However, it is the blacklisting of the two surveillance manufacturers on the list—Dahua and Hikvision—that could have a significant impact on the global market at large.
Putting geopolitics aside, Dahua’s and Hikvision’s positions within the overall global digital surveillance market make their blacklisting somewhat of a shock, with the immediate effects touching off significant questions among U.S. partners, end users, and supply chain partners.
Frost & Sullivan’s research finds that Hikvision and Dahua currently rank second and third in total global sales in the $20.48 billion global surveillance market, and are on a fast track to become the top two vendors among IP surveillance camera manufacturers. Their rapid rise among IP surveillance camera providers came about thanks to both companies’ aggressive growth pipelines, significant product libraries of high-quality surveillance cameras and new imaging technologies, and low-cost pricing models that give customers greater affordability.
This is also not the first time that these two vendors have found themselves in the crosshairs of the U.S. government. In 2018, the U.S. initiated a ban on the sale and use of Hikvision and Dahua camera equipment within government-owned facilities, including the Department of Defense, military bases, and government-owned buildings. However, the vague language of the ban made it difficult for end users to determine whether they were just banned from new purchases of Dahua or Hikvision cameras or if they needed to completely rip-and-replace existing equipment with another brand. Systems integrators, distributors, and even technology partners themselves remained unsure of how they should handle the ban’s implications, only serving to sow confusion among U.S. customers.
In addition to confusion over how end users in the government space were to proceed regarding their Hikvision and Dahua equipment came the realization that both companies held significant customer share among commercial companies throughout the U.S. market—so where was the ban’s line being drawn for these entities? Were they to comply or not? If so, how? Again, these questions have remained unanswered since 2018.
Hikvision and Dahua each have built a strong presence within the U.S. market, despite the 2018 ban. Both companies are seen as regular participants in industry tradeshows and events, and remain active among industry partners throughout the surveillance ecosystem. Both companies have also attempted to work with the U.S. government to alleviate security concerns and draw clearer guidelines for their sales and distribution partners throughout the country. They even established regional operations centers and headquarters in the country.
While blacklisting does send a clearer message to end users, integrators, and distributors regarding sales and usage of these companies’ technologies, remedies for future actions remain unclear. When it comes to legacy Hikvision and Dahua cameras, the onus appears to be on end users and integrators to decide whether rip-and-replace strategies are the best way to comply with government rulings, or whether to leave the solutions in place and hope for the best.
The broader global impacts of this action remain to be seen. While the 2018 ban did prompt talk of similar bans in other regions, none ever materialized. Dahua and Hikvision maintained their strong market positioning, even achieving higher-than-average growth rates in the past year. Blacklisting does send a stronger message to global regulators, though, so market participants outside the U.S. will have to adopt a wait-and-see posture regarding how, if at all, they may need to prepare their own surveillance equipment supply chains for changes to come.
After Google’s new set of community standards: What next?
Weeks after Google’s community standards guidelines made headlines, the Digital Industry Group Inc. (an Australia-based NGO) rejected proposals from the regulator in the southern hemisphere. The group claimed that regulating “fake news” would turn the Australian Competition and Consumer Commission into a moral police institution. In late August, Google itself forbade its employees from disseminating inaccurate information or airing internal debates. From the outset, the picture is a bit confusing. After the events in Australia, Google’s latest act of disciplinary intrusion seems shaped by certain interests or interest groups.
A year earlier, Google was shaken by claims that it had protected top-level executives accused of sexual misconduct; the issue took a serious turn and nearly disrupted company operations. If anything, Google’s course since the turmoil of 2018 clearly suggests a desperate need within the hierarchy to curb actions that could damage the interests of its many stakeholders. There is no comprehensive evidence to suggest that Google had a view on how the regulations were proposed in Australia. After all, until proven otherwise, all whistleblowing social media posts and comments are, at some point in time, “fake”. The global giant has decided to discontinue all forms of unjustifiable freedom inside its premises, yet it profits by providing the platform for activism and all forms of censure. The Digital Industry Group wants the freedom to encourage digital creative content, but Google’s need to publish a community guideline looks more like a defensive shield against uncertainties.
In its statement, the disciplinary clause notably details the actions that will be taken against staff who circulate information on Google’s internal message boards. In 2017, female employees at Google were subjected to discrimination based on the gendering of working positions. Kevin Cernekee, an ex-employee who was fired in 2018, confirmed that staff bullying was at the core of such messaging platforms. Growing incidents inside Google and its recent community stance only fuel assumptions about the ghost surrounding the internet giant’s reputation. Consequently, from the consumer’s point of view, an unstable organization of such global stature is an alarm.
The dissidents at Google are not entirely to blame. As many would argue, the very foundation of the company was based on the value of expression at work. The openness built into Google’s interface is another example of what it stands for, at least in the eyes of consumers. Stakeholders would not wish for internal turmoil; it would run counter to the enormous trust invested in the workings of the company. If Google can backtrack on its core values under pressure from higher forces, consumers cannot expect anything different. Google is not merely a search engine; for almost half of internet users, it is almost everything.
“Be responsible, Be helpful, Be thoughtful”. These phrases are the opening remarks of the newly engineered community guidelines. As the document claims, three principles govern the core values at Google. Upon closer inspection, it also sounds as if those values are based only on what the company expects from the people working for it. A global company that can resort to disciplining its staff via written texts can also trim the rights of its far-reaching consumer groups. It might only be the beginning, but the tail is on fire.
How to Design Responsible Technology
Biased algorithms and non-inclusive data sets are contributing to a growing ‘techlash’ around the world. Today, the World Economic Forum, the international organisation for public-private cooperation, released a new approach to help governments and businesses counter these growing societal risks.
The Responsible Use of Technology report provides a step-by-step framework for companies and governments to pinpoint where and how they can integrate ethics and human rights-based approaches into innovation. Key questions and actions guide organizations through each phase of a technology’s development process and highlight what can be done, and when, to help organizations mitigate unethical practices. Notably, the framework can be applied to technology in the ‘final’ use and application phase, empowering users to play an active role in advocating for policies, laws and regulations that address societal risks.
The guide was co-designed by industry leaders from civil society, international organizations and businesses including BSR, the Markkula Centre for Applied Ethics, the United Nations Office of the High Commissioner for Human Rights, Microsoft, Uber, Salesforce, IDEO, Deloitte, Omidyar Network and Workday. The team examined national technology strategies, international business programmes and ethical task forces from around the world, combining lessons learned with local expertise to develop a guide that would be inclusive across different cultures.
“Numerous governments and large technology companies around the world have announced strategies for managing emerging technologies,” said Pablo Quintanilla, Fellow at the World Economic Forum, and Director in the Office of Innovation, Salesforce. “This project presents an opportunity for companies, national governments, civil society organizations, and consumers to teach and to learn from each other how to better build and deploy ethically sound technology. Having an inclusive vision requires collaboration across all global stakeholders.”
“We need to apply ethics and human rights-based approaches to every phase in the lifecycle of technology – from design and development by technology companies through to the end use and application by companies across a range of industries,” said Hannah Darnton, Programme Manager, BSR. “Through this paper, we hope to advance the conversation of distributed responsibility and appropriate action across the whole value chain of actors.”
“Here, we can draw from lessons learned from companies’ efforts to implement ‘privacy and security by design’,” said Sabrina Ross, Global Head of Marketplace Policy, Uber. “Operationalizing responsible design requires leveraging a shared framework and building it into the right parts of each company’s process, culture and commitments. At Uber, we’ve baked five principles into our product development process so that our marketplace design remains consistent with and accountable to these principles.”
This report is part of the World Economic Forum’s Responsible Development, Deployment and Use of Technology project. It is the first in a series tackling the topic of technology governance. It will help inform the key themes at the Forum’s Global Technology Governance Summit in San Francisco in April 2020. The project team will work across industries to produce a more detailed suite of implementation tools for organizations to help companies promote and train their own ‘ethical champions’. The steering committee now in place will codesign the next steps with the project team, building on the input already received from global stakeholders in Africa, Asia, Europe, North America and South America.
The Centre for the Fourth Industrial Revolution Network brings together more than 100 governments, businesses, start-ups, international organizations, members of civil society and world-renowned experts to co-design and pilot innovative approaches to the policy and governance of technology. Teams in Colombia, China, India, Israel, Japan, the UAE and the US are creating human-centred and agile policies to be piloted by policy-makers and legislators, shaping the future of emerging technology in ways that maximize their benefits and minimize their risks. More than 40 projects are in progress across six areas: artificial intelligence, autonomous mobility, blockchain, data policy, drones and the internet of things.
The Network helped Rwanda write the world’s first agile aviation regulation for drones and is scaling this up throughout Africa and Asia. It also developed actionable governance toolkits for corporate executives on blockchain and artificial intelligence, co-designed the first-ever Industrial IoT (IIoT) Safety and Security Protocol and created a personal data policy framework with the UAE.