Artificial Intelligence and Disinformation: Examining challenges and solutions

A panel of experts offered deep insight into how artificial intelligence is changing the way information is produced, disseminated and consumed, reshaping the communications landscape.

They gathered on 5 March for a dedicated workshop on the subject, held in the framework of Mobile Learning Week at UNESCO Headquarters in Paris.

Guy Berger, Director for Freedom of Expression and Media Development at UNESCO, opened the discussion by describing the problem of disinformation, drawing on definitions from the Council of Europe and the European Union.

This perspective sees disinformation as content that is deliberately fabricated, neither true nor verifiable, and produced with the intent to make a profit and/or to push a particular ideological or political agenda.

Through social media algorithms, micro-targeted persuasion, the dissemination of ‘deep fakes’ and other AI-generated content, and automated trolling, artificial intelligence plays a crucial role in the rapid spread of disinformation.

However, said Berger, AI can also be part of the solution, as the multistakeholder panel illustrated while exploring current problems and possible ways of facing them. Storyzy, for instance, is a tech start-up created precisely to address these emerging challenges: it uses AI to classify online sources according to the probability that they will spread disinformation, as explained by its co-founder and Marketing Director, Pierre-Albert Ruquier.
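Storyzy’s actual pipeline was not detailed at the workshop. Purely as an illustration of what such source classification might look like in principle, the minimal sketch below trains a generic text classifier on toy article snippets and averages its scores over a source’s output; the data, labels and model choice are all assumptions, not Storyzy’s method.

```python
# Hypothetical sketch (not Storyzy's actual pipeline): score a source by the
# probability that the text it publishes resembles known disinformation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: snippets labelled 1 if they came from a source known to
# spread disinformation, 0 if from a reliable source. A real system would use
# large curated corpora and many more signals than text alone.
articles = [
    "SHOCKING: miracle cure the government is hiding from you",
    "Officials confirmed the budget figures at a press briefing today",
    "Secret plot revealed: share this before it gets deleted",
    "The study, published in a peer-reviewed journal, found modest effects",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(articles, labels)

# Score a new source by averaging probabilities over its recent articles.
new_articles = ["You won't believe what they are hiding about this cure"]
probs = model.predict_proba(new_articles)[:, 1]
print(f"Estimated probability of spreading disinformation: {probs.mean():.2f}")
```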

Yet tackling the spread of disinformation by means of artificial intelligence raises important issues, in the view of the panelists. Wafa Ben-Hassine, a Policy Counsel at the NGO Access Now, noted that while hate speech constituting incitement to violence should be banned according to international standards, disinformation is not always illegal. AI-based responses to such content, however, often lead to situations in which speech protected by international law ends up being taken down by a machine, she warned.

Divina Frau-Meigs, UNESCO Chair for “Savoir-Devenir in sustainable digital development: mastering information cultures”, highlighted how the spread of disinformation poses risks to democracy, particularly to the integrity of elections.

Marc-Antoine Dilhac, Philosophy professor at the University of Montreal and Canada Research Chair in Public Ethics, echoed this concern, calling attention to how, by giving away their data, individuals are feeding algorithms that serve business interests and targeted political advertising.   

Elodie Vialle, Head of the Journalism and Technology Desk at Reporters Without Borders, explained how “Journalists, and mostly female journalists, are being increasingly harassed online, through disinformation campaigns amplified by bots”. While recognizing that AI can create opportunities for journalists, for example by facilitating fact-checking, she pointed out that false information spreads six times faster than true information. In this situation, journalism needs to be reinforced and supported, she stressed.

For his part, Cordel Green, Executive Director at the Broadcasting Commission of Jamaica, identified the current disruption in the media ecosystem as the source of many challenges, including for regulatory bodies. A very small group of tech companies control social networks and have become content creators and aggregators. Meanwhile, audiences are shifting online, and traditional media are becoming unprofitable and seeing their capacities diminished. “We are losing fact-checking gatekeepers”, he cautioned.

The audience raised points of its own: How can context be factored in so that AI can discern metaphors, irony and jokes? How can journalists survive in the new digital age? How can we tackle the effects of AI on human rights, including its amplification of existing inequalities and biases?

Recognising that the spread of disinformation is a multifaceted problem, the panel offered several pragmatic solutions.

Ms Ben-Hassine insisted on holistically tackling the challenges posed by social media platforms’ business models and data collection practices. She advocated appropriate legal frameworks regulating competition and data protection, as well as transparency in electoral advertising.

Both Professor Dilhac and Mr Ruquier explained that automated analysis of what constitutes disinformation would need to work in tandem with human moderators. AI is best used to find trolls and fraudulent accounts and flag them for human moderators, who make the final decision, they said. Mr Green took a similar view, stating that journalists should look at AI not as the enemy, but as a tool.
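As a minimal sketch of this human-in-the-loop principle, and assuming simple illustrative signals and thresholds that were not part of the discussion, automated detection might only queue suspicious accounts for a moderator rather than act on its own:

```python
# Minimal sketch of human-in-the-loop moderation: the model flags, a human decides.
# The signals, weights and threshold below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    posts_per_hour: float        # very high rates suggest automation
    account_age_days: int        # very new accounts are a weak warning sign
    duplicate_post_ratio: float  # share of near-identical posts

def bot_score(acc: Account) -> float:
    """Combine simple signals into a 0-1 suspicion score (toy heuristic)."""
    score = 0.4 * acc.duplicate_post_ratio
    if acc.posts_per_hour > 30:
        score += 0.4
    if acc.account_age_days < 7:
        score += 0.2
    return min(score, 1.0)

REVIEW_THRESHOLD = 0.6  # above this, flag for a human; never auto-suspend

def triage(accounts: list[Account]) -> list[Account]:
    """Return accounts a human moderator should review; takes no action itself."""
    return [a for a in accounts if bot_score(a) >= REVIEW_THRESHOLD]

flagged = triage([
    Account("@example_bot", posts_per_hour=120, account_age_days=2, duplicate_post_ratio=0.9),
    Account("@example_user", posts_per_hour=1, account_age_days=900, duplicate_post_ratio=0.05),
])
for acc in flagged:
    print(f"Queue for human review: {acc.handle} (score={bot_score(acc):.2f})")
```

The key design choice, reflecting the panelists’ point, is that the automated step never suspends an account itself; it only narrows the queue that human moderators must examine.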

Panelists supported multistakeholder responses such as the Journalism Trust Initiative, referred to by Ms Vialle, which brings together media managers, editors, publishers and regulators. Self-regulation is the best way to protect journalists and freedom of expression against the threat of false information, she noted. Similarly, social media platforms should face up to their responsibilities while avoiding the privatization of censorship, she argued.

Speakers all agreed on the importance of further research, as well as of empowering users through Media and Information Literacy. Ms Frau-Meigs highlighted relevant ongoing efforts, such as those promoted by the Global Alliance for Partnerships on MIL, the Council of Europe, and the High Level Expert Group set up by the European Commission to counter online disinformation.

UNESCO