In today’s fast-paced digital world, disinformation has become a major challenge. From political interference to health crises, false information shapes how people think, act, and even how political systems function. A clear example is the COVID-19 pandemic, when rumors about vaccines and the virus spread rapidly on social media, fueling vaccine hesitancy and eroding trust in science. Some governments, in turn, exploited the resulting confusion and divisions to manipulate public opinion and sow chaos. These examples show how disinformation, when deployed deliberately, can serve political and ideological agendas.
What makes disinformation so powerful is its ability to exploit the way humans think. We are all prone to cognitive biases, such as confirmation bias—the tendency to seek out information that supports what we already believe—which helps false claims spread. Our attraction to emotionally charged stories further entrenches these falsehoods, and fear, especially in times of uncertainty, impairs our judgment. Research in neuroscience shows that these cognitive mechanisms, combined with social media platforms designed to keep us engaged, create fertile ground for disinformation to flourish.
The Neuroscience Behind Disinformation
Contribution of Cognitive Biases: One of the biggest cognitive biases driving disinformation is confirmation bias. This is when people look for information that supports their existing beliefs and ignore anything that contradicts them. Social media makes this worse by pushing content based on what we have engaged with before, creating echo chambers where we are exposed mainly to information that reinforces what we already think. Studies on political ads show that people tend to focus on familiar narratives, even if they are not entirely accurate, because of their personal ideologies. Eye-tracking research also shows that we spend more time engaging with content that fits our beliefs, which helps fake information spread more easily through visual cues.
Another bias that feeds false information is the bandwagon effect, where people believe something simply because others do. Though less studied than confirmation bias, it is visible in how people interact with trending false information online. The backfire effect—where correcting false information can make people cling to their false beliefs even more strongly—also keeps falsehoods alive. This happens because people tend to stick to sources that match their identity and avoid those that challenge their views.
Selective attention is also a major factor in the spread of false information. People often focus on emotional or personally relevant stories while ignoring objective facts. This was especially clear during the COVID-19 pandemic, when fear made people more vulnerable to misleading information. Studies show that when emotions run high, the capacity for critical thinking drops, making it easier to accept and share false information. Fear narrows our ability to analyze information, which is precisely what is needed to separate fact from fiction.
The amygdala, the brain region that processes fear and emotional responses, plays a key role here. It makes people more likely to believe and spread fear-based false content. The pandemic, for example, showed how fear-driven thinking can polarize people and impair their ability to make critical judgments. Not only do people accept misleading information, but they often share it too, trying to resolve their uncertainty quickly without checking its accuracy.
The Role of Dopamine and Reward Systems: False information thrives because it taps into the brain’s reward system, triggering dopamine responses when we engage with it. Social media platforms amplify this by offering instant feedback in the form of likes, shares, and comments, creating a cycle in which we are rewarded for consuming and spreading false claims. The more often we hear a lie, the more familiar it feels, making it seem true (the “illusion of truth” effect). This is even more potent when the information aligns with our political beliefs or ideology, reinforcing echo chambers where we only encounter ideas that confirm our views.
Emotion also plays a huge role as disinformation is often designed to evoke strong emotional reactions, which override our logical thinking. We are more likely to believe things that make us feel something intense. The emotional impact can even create false memories, making us more certain of fake events, especially when they support our biases. Fear, in particular, is a powerful tool—our brains react strongly to fear-based content, which makes it both more believable and more shareable. In personal messaging apps, this dynamic is even more potent, as we share emotionally charged content within smaller, ideologically similar groups, further reinforcing false beliefs.
The Role of Digital Platforms in Amplifying Deceptive Narratives
Algorithmic Bias and Echo Chambers: Social media platforms run on algorithms that prioritize content based on user behavior, such as likes, comments, and shares. This engagement-driven approach rewards sensational, emotionally charged, or polarizing content because it sparks more interaction. As a result, users are mostly exposed to content that aligns with their existing beliefs, creating “filter bubbles” or echo chambers. These spaces limit exposure to opposing viewpoints and fact-checked information, deepening ideological divides and fostering a cycle of false information. The viral nature of social media amplifies the spread of deliberately false content: high-engagement posts, often sensationalized or misleading, gain traction quickly and reach large audiences before they can be critically evaluated. The algorithms reinforce this by showing users more of what they have interacted with before, narrowing their worldview and isolating them from diverse perspectives.
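To make this dynamic concrete, the short sketch below simulates a purely engagement-driven ranker. It is a deliberately simplified illustration, not any platform’s actual algorithm: the `engagement` scores, `topic` labels, and `user_history` structure are assumptions made for the example. Once a user has interacted with one kind of content, belief-confirming posts crowd out everything else.

```python
from collections import Counter

def engagement_driven_feed(posts, user_history, top_k=5):
    """Rank posts purely by predicted engagement.

    Hypothetical sketch: `posts` is a list of dicts with a 'topic' and a
    baseline 'engagement' score; `user_history` lists the topics the user
    has interacted with before. Past interactions boost a post's score,
    so the feed narrows toward familiar, belief-confirming topics.
    """
    topic_counts = Counter(user_history)

    def score(post):
        # Sensational content starts with a higher baseline engagement score,
        # and anything matching past interactions is boosted further.
        return post["engagement"] * (1 + topic_counts[post["topic"]])

    return sorted(posts, key=score, reverse=True)[:top_k]

# Toy feedback loop: one prior click on a partisan rumor already skews the feed.
posts = [
    {"id": 1, "topic": "partisan_rumor", "engagement": 0.9},
    {"id": 2, "topic": "fact_check", "engagement": 0.3},
    {"id": 3, "topic": "partisan_rumor", "engagement": 0.8},
    {"id": 4, "topic": "neutral_news", "engagement": 0.4},
]
history = ["partisan_rumor"]
print([p["id"] for p in engagement_driven_feed(posts, history, top_k=2)])  # [1, 3]
```

In this toy setup the fact-checking post never reaches the top of the feed once the history is skewed; an accuracy-aware variant of this ranking is sketched in the counter-strategies section below.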
Short video platforms, in particular, intensify this effect. Their rapid content distribution and engagement-driven algorithms often amplify misleading content. Research shows that the echo chamber effect is strong on these platforms, with users clustering into groups where false information spreads quickly, especially in politically motivated contexts.
Furthermore, the lack of editorial oversight on social media means that falsehoods can circulate freely, making it hard for users to differentiate between reliable and unreliable sources. Combined with the lack of transparency in how feeds are curated, users have little control over the content they encounter. This environment allows misleading information to flourish, especially when coordinated disinformation campaigns target users with politically charged narratives, reinforcing existing biases and making it harder to break free from the cycle of disinformation.
AI, Deepfakes, and Information Laundering: The rise of deepfake videos and AI-generated fake news has drastically reshaped the disinformation landscape. These technologies, powered by advanced AI models, can produce highly realistic yet entirely fabricated audio-visual content. Deepfakes, which use AI to manipulate video and audio, have become increasingly convincing, making it harder for audiences to distinguish what is real from what is not. The impact is far-reaching, as these manipulated videos can deceive the public, influence political conversations, and erode trust in institutions. Because visuals are often seen as more credible than text, deepfakes are especially powerful tools for spreading falsehoods.
This surge in AI-driven disinformation also ties into a larger strategy called information laundering. This involves spreading false narratives through seemingly trustworthy sources—like fake think tanks or biased media outlets—to give them an air of legitimacy. Research shows that state-backed media organizations often amplify these false stories, mixing them with selective facts to hide their origins and maintain deniability. The combination of accessible deepfake technology and information laundering allows malicious actors to present false content as credible, subtly shaping public perception worldwide. As AI-generated media improves, the ability of those behind these disinformation campaigns to manipulate political and social realities only grows stronger.
State-Sponsored Disinformation
State actors often leverage disinformation to shape political, social, and ideological landscapes, manipulating public opinion and influencing global narratives. By exploiting cognitive biases and amplifying partisan divides, these actors turn fake information into a powerful tool for advancing political or security goals. In today’s digital world, state actors can target specific populations with surgical precision, capitalizing on the way people process and believe information.
The effect of disinformation grows when it taps into existing social and political divisions. State actors often know exactly who to target—groups more likely to fall for narratives that play into their existing biases. For example, a carefully crafted false story can inflame political tensions or shift public perception of leaders and policies. The power of disinformation lies in its ability to resonate with a target audience’s fears and values, making it feel authentic and trustworthy to consumers.
In the digital age, state actors amplify these tactics through fake social media accounts, bots, and AI-generated content. Bots and trolls mimic genuine human interaction to create the illusion of widespread grassroots support. AI technologies, like deepfakes or fabricated news stories, add a layer of apparent authenticity, blurring the line between fact and fiction. These tools make it easier for disinformation to spread, often deceiving even the most cautious and discerning audiences.
Fear and uncertainty are key emotional triggers in these campaigns. By stirring up anxiety or anger, disinformation can create a false sense of crisis, dividing societies further. In some cases, fabricated events—like the 2020 Saudi-backed campaign about a coup in Qatar—play into existing national or regional tensions, creating confusion and eroding trust in political institutions. When these fake stories make their way into mainstream media, they further solidify their impact, spreading the disinformation across multiple platforms.
Moreover, the creation of fake journalists, media outlets, and personas adds another layer of deception. By constructing seemingly legitimate narratives through fake accounts or AI-generated images, states can manipulate public perception more subtly. One such example is the Internet Research Agency’s creation of PeaceData, a fake publication that used AI-generated images and real freelance writers. These strategies make it harder to distinguish real news from manipulated content, blurring the line between genuine journalism and disinformation.
Counter-Strategies: A Neuroscience-Based Approach to Building Resilience
a. Cognitive Immunity and Critical Thinking
The Role of Media Literacy in Reducing Susceptibility to Misleading Content
Cognitive immunity is key to protecting people from false and misleading content, helping them recognize and resist harmful ideas, especially those circulating on social media and digital platforms. The idea behind cognitive immunology (CI) parallels how the body’s immune system defends against harmful pathogens: just as the immune system filters out biological threats, cognitive immunity helps filter out false or damaging information. This analogy helps explain why interventions aimed at building cognitive immunity matter so much in a world saturated with misleading news and divisive ideologies.
To strengthen cognitive immunity, a number of interventions have focused on improving media literacy and critical thinking skills. One example is the Ethos BT program during Colombia’s 2022 presidential election, which addressed psychological factors linked to false information, such as trust, perceived bias, and discomfort with uncertainty. These interventions used behavioral insights to help people think more critically, ultimately reducing their vulnerability to fake news. The study found that video-based interventions were particularly effective at making people less trusting of fake news.
In addition, some programs targeting adolescents, like the pro-social fake news approach, use cognitive dissonance and pro-social values to create long-term change. These programs encourage young people to become experts who teach their family members how to spot fake news, leveraging social good to boost critical thinking. The success of these programs relies on students’ engagement and motivation, showing that the more motivated a person is to think critically, the better they can identify false information.
The need for critical thinking in education is clear, with calls for integrating cognitive immunology-based lessons into school curricula and journalism programs. By teaching students how to evaluate information critically and recognize their own biases, these programs can help create a generation that is better equipped to resist fake information. Encouraging environments that promote reflection, open dialogue, and community engagement can further enhance cognitive immunity, which is essential for tackling the spread of harmful false ideologies and news.
Encouraging Intellectual Humility: Teaching People to Question
Intellectual humility is crucial for encouraging critical thinking and strengthening cognitive immunity. It helps people recognize the limits of their own knowledge and beliefs, which is essential for questioning biases and being open to opposing views. Individuals who practice intellectual humility are more willing to engage with conflicting evidence and rethink their positions based on new information. By acknowledging that their beliefs are fallible, these individuals develop a deeper understanding of different perspectives and cultivate a more open-minded approach.
Critical thinking thrives when people embrace intellectual humility because it encourages them to focus on the quality of evidence and consider alternative viewpoints. This leads to more thoughtful and informed decision-making, as intellectually humble individuals are more likely to consult diverse sources and reflect on ideas that challenge their own.
Mindfulness practices also play a key role in fostering intellectual humility. By helping people become more aware of their thoughts and judgments, mindfulness reduces impulsive reactions and makes individuals more receptive to new perspectives. Engaging in mindfulness encourages curiosity and an understanding of how interconnected we all are, allowing for a more flexible, open-minded approach to problem-solving.
Moreover, strategies like metacognitive training and promoting a growth mindset can further develop intellectual humility. These strategies help individuals question their assumptions and gain a more balanced view of their abilities. As people become more aware of their cognitive biases and knowledge limits, they’re better equipped to engage in critical thinking and make decisions that reflect a broader, more informed understanding of the world.
b. Algorithmic and Policy Interventions
Redesign Recommendation Systems to Ensure Accuracy
Digital platforms can redesign recommendation systems to prioritize accuracy over engagement by adjusting their algorithms to reduce the spread of false or misleading information. Currently, algorithms are geared toward maximizing user interaction, but shifting the focus to content quality and factual accuracy could help limit disinformation. However, this approach may face resistance due to platforms’ reliance on engagement to drive revenue. One possible solution is to offer users the option to opt into algorithms that prioritize accuracy, though it’s unclear how many would choose this. Additionally, improving content moderation tools to flag or downrank misleading content could be more effective, as research shows that content labeling helps reduce the spread of false information. Clearer, more assertive labeling techniques could also counter misleading claims more successfully.
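As a rough sketch of what prioritizing accuracy over engagement could look like in practice, the following illustrative ranking function blends a predicted engagement score with a content-credibility signal and downranks, rather than removes, items flagged by moderation. The `credibility` field, `accuracy_weight` parameter, and `flag_penalty` factor are assumptions for the example, not a description of any platform’s real system.

```python
def accuracy_aware_rank(posts, accuracy_weight=0.6, flag_penalty=0.2):
    """Blend engagement with credibility instead of ranking on engagement alone.

    Hypothetical sketch: each post carries a predicted 'engagement' score, a
    'credibility' score (e.g. from fact-checking or source-quality signals),
    and a moderation 'flagged' flag. Raising `accuracy_weight` shifts the feed
    toward higher-quality content; flagged items are downranked, not removed.
    """
    def score(post):
        base = ((1 - accuracy_weight) * post["engagement"]
                + accuracy_weight * post["credibility"])
        return base * (flag_penalty if post.get("flagged") else 1.0)

    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "viral_rumor", "engagement": 0.95, "credibility": 0.10, "flagged": True},
    {"id": "fact_check", "engagement": 0.30, "credibility": 0.90, "flagged": False},
    {"id": "local_news", "engagement": 0.50, "credibility": 0.70, "flagged": False},
]
print([p["id"] for p in accuracy_aware_rank(posts)])
# ['fact_check', 'local_news', 'viral_rumor']: the flagged rumor sinks to the bottom.
```

An opt-in “accuracy mode” of the kind mentioned above could amount to little more than exposing a parameter like `accuracy_weight` as a user-facing setting, although the revenue trade-off for platforms would remain.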
Regulatory Approaches to Curb State-Sponsored Information Operations
To combat state-sponsored disinformation campaigns, regulatory efforts could focus on increasing transparency around how recommendation systems function. For example, platforms could be required to audit and assess their recommender algorithms, looking specifically at their potential role in spreading disinformation. The EU’s Digital Services Act (DSA), adopted in 2022, already includes provisions for auditing these systems to ensure they comply with regulations. Such audits could focus on ensuring algorithms are not amplifying harmful or misleading content. Additionally, platforms might be required to provide clearer explanations of how their algorithms work, making the entire process more transparent and accountable to users.
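One way such an audit could be operationalized, at least conceptually, is as an amplification ratio: how much more often flagged or low-credibility content appears in recommended feeds than in the underlying content pool. The sketch below illustrates this idea under simplified assumptions; it is not a methodology prescribed by the DSA, and the function and variable names are invented for the example.

```python
def amplification_ratio(recommended_ids, corpus_ids, flagged_ids):
    """Compare the prevalence of flagged content in recommendations vs. the corpus.

    A ratio well above 1.0 suggests the recommender is amplifying flagged
    content relative to a neutral baseline; a ratio near or below 1.0 suggests
    it is not. This is a toy audit metric, not the DSA's prescribed method.
    """
    flagged = set(flagged_ids)
    rec_rate = sum(i in flagged for i in recommended_ids) / len(recommended_ids)
    base_rate = sum(i in flagged for i in corpus_ids) / len(corpus_ids)
    return float("inf") if base_rate == 0 else rec_rate / base_rate

# Toy example: 2 of 4 recommended items are flagged, versus 2 of 10 in the corpus.
ratio = amplification_ratio(
    recommended_ids=[1, 2, 3, 4],
    corpus_ids=list(range(1, 11)),
    flagged_ids=[2, 4],
)
print(round(ratio, 2))  # 2.5: flagged content is over-represented in the feed
```

Regulators or independent auditors could track a metric of this kind over time and across user segments, which would also give concrete substance to the transparency requirements discussed above.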
In conclusion, fake information isn’t just a byproduct of digital connectivity; it thrives on deep-rooted cognitive vulnerabilities in human psychology. People are predisposed to believe and share such content, and the brain’s reward system often reinforces engagement with misleading material, which worsens the problem. Emotional triggers can impair rational thinking, making individuals more vulnerable. Social media algorithms capitalize on these cognitive biases by prioritizing engagement over accuracy, creating echo chambers where false information spreads more easily. The rise of AI-generated disinformation has further complicated efforts to distinguish truth from falsehood, allowing cognitive weaknesses to be exploited on an unprecedented scale.
As AI technology advances, cognitive warfare will become more sophisticated, with state and non-state actors using it to manipulate public opinion more precisely. AI’s ability to create hyper-realistic content will challenge traditional fact-checking methods, making it harder to tell what is real. Disinformation campaigns will not only target individuals but also erode trust in political institutions, fueling polarization and weakening political systems. In this shifting landscape, building cognitive resilience is crucial—both individual awareness and systemic safeguards are needed to combat disinformation.