Restrictions on internet usage, AI-facilitated disinformation, and escalating mass arrests have intensified in anticipation of important elections across Asia. These attacks undermine freedom of expression precisely when elections are meant to combat authoritarianism and restore, fortify, and promote democratic governance within the region.
The public discourse surrounding artificial intelligence encompasses two contrasting perspectives. On one hand, there are high expectations for the optimization of socially relevant processes. On the other hand, there are numerous concerns, including loss of control, surveillance, dependency, and discrimination.
This dichotomy is also observed in the relationship between democracy and AI. In many instances, AI systems are primarily perceived as a threat to democracy. The main concern revolves around the potential manipulation of voters’ will, including the influence of adaptive social bots, individual voter manipulation and data misuse, as well as interference in elections by external intelligence services. However, there are also recurring aspirations for AI to contribute to social well-being.
Over 50 countries, including major democracies and politically sensitive regions like Taiwan, are expected to hold national elections in 2024. Seven of the world's most populous nations, Bangladesh, India, Indonesia, Mexico, Pakistan, Russia, and the United States, will have a significant impact on global politics as a third of the world's population heads to the polls.
Elections tend to be emotionally charged and can fuel tribal dynamics, making misinformation particularly damaging. If social media misinformation is comparable to yelling “fire” in a crowded theater, election misinformation can be likened to doing so during a horror movie, when everyone is already on edge.
In September 2023, Bangladesh enacted the Cyber Security Act, which critics argued was merely a superficial renaming of the Digital Security Act, a repressive law frequently used to detain critics, journalists, and opposition members.
In August, Pakistan's parliament was dissolved, but not before it approved amendments to existing laws on blasphemy, national security, and data protection, granting authorities extensive powers of censorship.
In December last year, Indonesia amended the Electronic Information and Transactions Law but retained its criminal defamation provisions. The revised law now criminalizes the dissemination of “false statements” that cause “public unrest,” language the International Commission of Jurists has criticized as vague and overly broad.
These legislative measures have allowed ruling parties to exert greater control over news and information, raising concerns of increased censorship. In Pakistan, social media platforms were rendered inaccessible during the main opposition party's virtual rally in December and its fundraising event in January. The Pakistan Press Foundation warns that such recurring shutdowns could set a troubling precedent, particularly with the country's general election approaching.
Limiting press freedom and blocking online platforms undermine the integrity of the electoral process. The restriction of social media in Pakistan did more than hamper the opposition; it also hindered the public's access to accurate and trustworthy information. If this trend continues, it could deprive voters of the opportunity to learn about important election-related issues. And if Pakistani media face state repercussions for reporting on topics deemed threats to national security and social harmony, politically sensitive subjects, such as religious discrimination, gender inequality, the disenfranchisement of transgender voters, corruption, and military interference in politics, may be suppressed.
Since the introduction of generative AI tools, the authenticity of online content has become a topic of extensive discussion. The emergence of deepfakes, including the composite image used for this episode, which combines AI-generated and real photos, has raised significant concerns about their potential impact on democracy. Deepfakes can deceive people by presenting false information, posing a serious threat to the integrity of democratic processes.
Last year, videos circulated on social media purportedly showing American newscasters endorsing Chinese Communist Party viewpoints on topics such as U.S. gun violence. The newscasters were not real people; they were created using AI technology developed by a British company called Synthesia. Deepfakes can also be microtargeted, tailored to the specific interests and preferences of individual viewers, and advances in generative AI have made creating them easier and more cost-effective. Synthesia even claims on its website that its process is as simple as writing an email.
This suggests that a concern first raised in 2016, the potential of supercharged, data-driven microtargeting as exemplified by Cambridge Analytica, may now be facilitated by AI. For the first time, a study suggests that AI can make propaganda more persuasive to individuals.
The lack of AI governance mechanisms has contributed to democratic backsliding in countries around the world, and one need not look far to see AI's potential to distort political conversation: a viral deepfake video depicted Ukrainian President Volodymyr Zelenskyy surrendering to Russia, while pro-China bots shared videos featuring AI-generated news anchors from a fictitious outlet called “Wolf News,” promoting narratives favorable to China's government and critical of the United States.
This represents the first known case of a state-aligned campaign using AI-generated videos to create fictional individuals, and incidents like these could proliferate in 2024. The “liar's dividend” refers to the phenomenon whereby the mere existence of generative AI fosters doubt and suspicion, allowing even genuine content to be dismissed as fake. The impact of this dividend is expected to grow significantly, and the rise of AI-generated content could accelerate the erosion of trust across the broader ecosystem of election information.
In today's fraught geopolitical environment, as the schism between democracy and autocracy deepens, the stakes for election misinformation have never been higher. The erosion of trust in the integrity of elections undermines confidence in the democratic process itself. This challenge extends beyond national borders, affecting trust and confidence in information, elections, and democratic governance on a global scale.