AI’s Dark Side: Misinformation and Disinformation

Philosophers and social scientists consider information – not matter and energy – the building block of human existence. It establishes human networks and enables mass cooperation. It equips people to perceive reality, allows political systems to flourish, integrates social order, enables economic activity, and advances research and development for continuous growth. Throughout history, the means of accumulating, analyzing, and disseminating information have evolved with the available technologies. From the printing press to the age of AI, each technological advance has brought both innovation and profound challenges to the information landscape. In the AI-driven digital age, the flow of information has become faster and more accessible than ever. Yet alongside its immense potential, AI has amplified misinformation and disinformation, with overarching implications for human affairs.

Misinformation is an unintentional error that occurs while accumulating, analyzing, and disseminating information. Disinformation, conversely, is a deliberate lie or distortion spread knowingly in pursuit of a vested interest. AI-driven information platforms such as Facebook and YouTube amplify both, because sensational and violent content attracts ‘user engagement’ on their platforms. The World Economic Forum’s Global Risks Report 2024 identifies misinformation and disinformation as among the most severe threats of the coming years, warning that they produce intense polarization, a weakening appetite for tolerance, erosion of trust, diversion of state machinery, and the potential rise of domestic propaganda and censorship.

Empirical examples abound. In August 2017, the Rohingya crisis in Myanmar was widely believed to have been fueled by Facebook’s failure to moderate hate speech and fake news, which the platform allowed to circulate for the sake of user engagement; the crisis resulted in a mass exodus of some 700,000 Rohingya to Bangladesh. In June 2024 in Quetta, a schoolteacher attempted suicide, but a Facebook post falsely portrayed him as a hero injured while saving a child from a train. Moved by his apparently heroic role, the Chief Minister of Balochistan rewarded him with free healthcare and employment for his family. Days later, however, it emerged that there had been no child and that the man had attempted suicide. A single Facebook post had diverted the entire state machinery.

There are also deepfake tools, used for face swapping and voice mimicry, that are widely deployed during election campaigns to lead voters astray. These tools fuel populist narratives, undermine public trust in elections, and challenge the legitimacy of existing institutions. There is also the prospect that authoritarian rulers will use such tools to consolidate their rule. During the ongoing US presidential election campaign, deepfakes of Joe Biden circulated in which he urged the residents of New Hampshire not to vote in the primary and to save their votes for the November election. The robocall used artificial intelligence to mimic his voice in order to dissuade Democratic voters from casting ballots, and it reached nearly 5,000 voters just before the state primary. In response, at least 20 US states have passed regulations against deepfakes ahead of the elections. This year, people in some 70 countries are heading to the polls, and the world may well see the largest AI-driven deception in the history of humanity thus far.

Hence, the application of AI in the information domain brings both remarkable opportunities and genuine risks. To better harness AI-led innovation, relevant stakeholders and authorities must take proactive steps. First, a global, legally binding instrument is essential, one that acknowledges both the innovations and the threats of AI. Second, that instrument must oblige nation-states and non-state actors, especially tech giants, to collaborate swiftly on the regulation of AI. It should include improving AI algorithms for content moderation, addressing algorithmic biases, encouraging collaboration between tech companies and national governments, and strictly regulating the use of deepfakes and other AI manipulations. Third, AI developers must design and implement robust safeguards, including transparency measures and accountability frameworks. Finally, it is equally important that strict regulation and censorship aimed at controlling the flood of AI-generated misinformation and disinformation not compromise free speech and people’s voices.

Shah Meer
Shah Meer is an Assistant Research Fellow at Balochistan Think Tank Network.