Why is it hard to conquer disinformation?

We live in a world without borders, where people can travel far and wide with little more than a fingerprint. What was once taught and debated as globalization has become the new normal. We live in a world where everything is digitized, and immense information advantages translate into information superiority. Disinformation is one of the key challenges facing any society, ranging from simple matters such as telling a fake from a fact to judging whether information is exaggerated or irrelevant. It has become a topic that merits serious discussion because of the complexities associated with it.

What is disinformation? Hameleers (2023) defines disinformation as “the intentional creation and dissemination of false and/or deceptive information” (e.g., Bennett & Livingston, 2018; Dan et al., 2021; Freelon & Wells, 2020; Hancock & Bailenson, 2021). Because disinformation entails a targeted attempt to deceive and persuade recipients, it is important to focus on intentions. The MIT Libraries research guide on disinformation (n.d.) describes it as “false information deliberately and often covertly spread (as by the planting of rumors) to influence public opinion or obscure the truth.” What is crystal clear from these definitions is that disinformation carries “ill intent”: the mental element of knowing the information to be untrue. That deliberateness is what distinguishes it from misinformation.

Disinformation occurs in many settings, “through a complex interaction of social media, online news sites, traditional media, and offline spaces,” as Stepney and Lally (2024) note. Examples include the infodemic during the Covid-19 pandemic and more recent episodes: “As Ghana approaches its presidential election on December 7, researchers have uncovered a network of 171 bot accounts on X that use ChatGPT to write posts favorable to the incumbent political party, the New Patriotic Party (NPP)” (Haskins, 2024). Similarly, Antoniuk (2023) reports that a Russia-linked influence campaign spreads disinformation and propaganda in the U.S., Germany, and Ukraine through a vast network of social media accounts and fake websites.

The grim reality is that society now faces the task of controlling and preventing disinformation, a problem that has worsened, especially in digital realms. The widespread use of technology and the availability of digital equipment have made information easy to access. Access to information is not in itself detrimental; however, when information is utilised for an ill purpose or confidential information is divulged, the resultant effects can be disastrous.

Disinformation has severe consequences, including the erosion of trust in public institutions, media, and governments, leading to widespread scepticism about legitimate information. It also polarizes societies by exploiting existing divisions, particularly during critical events like elections or pandemics, causing public harm. Disinformation further poses significant security risks, with foreign actors destabilizing societies and interfering in domestic affairs, and creates economic disruption by damaging reputations, influencing markets, and fostering scams. Finally, it threatens human rights by complicating the balance between combating harmful content and upholding freedoms like expression and access to information.

To counter disinformation, governments, the private sector, and media companies engage in a range of measures and initiatives to prevent and mitigate it. Yet disinformation seems to take novel forms and escape regulations and other hurdles. According to the United Nations (n.d.), “The General Assembly and the Human Rights Council have both called for responses to the spread of disinformation to promote and protect and not to infringe on individuals’ freedom of expression and freedom to seek, receive, and impart information, as established by Article 19 of the Universal Declaration of Human Rights and Article 19 (1) of the International Covenant on Civil and Political Rights.” Countries also have local legislation intended to prevent disinformation, and tech companies engage in fact-checking and verification. Even so, it is imperative to investigate the challenges associated with preventing disinformation.

A major hurdle in preventing disinformation is that it can be spread without revealing one’s identity. Anonymity, pseudonyms, and other fictitious identities create a lack of accountability, making it easier for malicious actors to produce and distribute misleading or harmful content without facing consequences. Social bots are also a common tool for amplifying disinformation. Hajli et al. (2021) note that “AI-powered social bots can sense, think, and act on social media platforms in ways similar to humans. The challenge is that social bots can perform many harmful actions, such as providing wrong information to people, escalating arguments, perpetrating scams, and exploiting the stock market.” With the development of AI compounding the advantages of anonymity, disinformation becomes increasingly difficult to combat.

One of the most crucial factors complicating the prevention of disinformation is proving malicious intent. Pielemeier (2020) cites a range of authors to show why this is so difficult. Determining a speaker’s intent is notoriously difficult, and doubly so in online contexts, where nuance, jargon, slang, and the use of different languages proliferate. This challenge is compounded by the fact that disinformation, by definition, often must also have the potential to cause “public harm.” That implication of seriousness and scale suggests that, in many instances, a large number of individuals have spread the disinformation, even though they may not share the same intent to deceive. In other words, even if the intent of the original author can be established, it may still be nearly impossible to prove the intent of others who subsequently shared the content. For this reason, some efforts to address disinformation have emphasized “traceability,” the ability to identify where information originated and how it has since spread, in a manner that laws addressing hate speech and terrorist incitement have not.

Another obstacle to tackling disinformation is the lack or inadequacy of media literacy. Many people lack the skills to evaluate information critically, especially in a digital landscape where sources can appear authoritative even when they are not.

As discussed, conquering disinformation remains an uphill battle due to its multifaceted nature and the evolving digital landscape, especially with AI. The anonymity offered by online platforms, the proliferation of social bots, the difficulty of proving malicious intent, and widespread media illiteracy all present significant challenges. Disinformation has far-reaching consequences, eroding trust in institutions, polarizing societies, and threatening security and human rights. While efforts by governments, the private sector, and international organizations have made progress, these measures often fall short as malicious actors continue to adapt and exploit technological advancements.

Therefore, to effectively combat disinformation, a multi-pronged strategy is essential. Strengthening media literacy should be a priority, empowering individuals to critically evaluate information and identify credible sources. Governments, tech companies, and civil society must collaborate to develop and enforce ethical standards for combating disinformation while respecting freedom of expression. Advancements in technology should be leveraged responsibly, using AI to detect and mitigate disinformation while ensuring transparency and accountability. International cooperation is crucial to address cross-border disinformation, alongside legislative reforms that balance the need for regulation with safeguarding human rights. Finally, fostering research and innovation will provide deeper insights into disinformation trends and enable the development of proactive, adaptive solutions.

Charani Patabendige
Charani LCM Patabendige is a Research Assistant and an Acting Research Analyst at the Institute of National Security Studies (INSS), the premier think tank on national security established under the Ministry of Defence. The opinions expressed are her own and do not necessarily reflect those of the institute or the Ministry of Defence.