When Seeing Isn’t Believing: Netanyahu’s ‘Death’ and AI-Driven Misinformation

Recent rumours of Binyamin Netanyahu’s death garnered millions of impressions within hours, going viral and infecting the public zeitgeist. The rumours outpaced official statements and reputable journalism. Notably, online debate centred not on the facts of the case but on whether the circulating videos were AI-generated – a worrying shift in public concern from what is true to what is even real. The hoax embodies the paradox of the moment: genuine video evidence was dismissed as deepfake, demonstrating a world in which truth must compete with the mere possibility of fabrication.

The modern information ecosystem is increasingly defined by generative AI tools, deepfake technologies accessible to non-experts, and the integration of both into conflict propaganda. Platforms designed to prioritise speed and virality in order to sustain engagement magnify false claims in the initial information vacuum. Misinformation and disinformation are both profitable to those spreading them and increasingly legitimised instruments of information warfare. This erodes public belief that traditionally reliable institutions can be trusted to provide accurate information.

The recent Netanyahu case exemplifies the structural weaknesses in modern media ecosystems and the crisis of trust stemming from the political uses of ambiguity. This crisis is reshaping democratic governance (which is increasingly interwoven with advanced technology), destabilising geopolitical information, and undermining the public’s ability to interpret reality.

How Misinformation about Political Leaders Destabilises Politics

False claims about a leader’s death or incapacitation can instantaneously produce political uncertainty, which in turn can alter markets, diplomatic calculations, military postures and domestic legitimacy. Modern misinformation spreads faster than institutions can respond, creating temporary ‘truth vacuums’ in which rumour operates as de facto reality.

The rumours of Netanyahu’s death originated in Telegram posts, spread to Twitter, and then reached TikTok via miscaptioned videos. They were amplified by anti-Netanyahu domestic opponents, pro-Iranian and anti-Israeli accounts, and foreign state-aligned bot networks exploiting instability. Domestically, this can produce public anxiety (fuelled by ambiguous government statements), political exploitation by opposition actors, and questions over continuity of command during an active security crisis. Internationally, allies must seek confirmation before issuing statements, and their delays can be read by a fast-moving internet as confirmation of the rumour. Adversaries can also use technologically amplified doubt to calculate whether leadership chaos offers strategic opportunities.

There is no lack of historical precedent for misinformation as a political tool: from the 1960s to the 1990s, the CIA and MI6 manipulated rumours surrounding Fidel Castro’s well-being; since 2014 there has been periodic speculation about Kim Jong-un’s death or disappearance; and the Arab Spring saw plentiful misinformation about Mubarak’s health. Information has always been a political resource, but advanced technologies put this tool into everyone’s hands, opening a chasm in the political zeitgeist from which the truth is hard to recover. Misinformation is an old tactic, but AI changes the velocity, credibility and scale of false narratives, and weaponised ambiguity gives non-state actors unfettered strategic influence.

AI-Driven Propaganda in Information Warfare

AI has arguably industrialised propaganda. What previously required intelligence agencies and state resources can now be done by individuals or small groups at minimal cost – transforming the information battlefield. Deepfake technology – built on GANs, diffusion models and multimodal models – is increasingly difficult to detect, and visual ‘analysis’ is deployed to claim that real videos are fake, and vice versa. Its weaponisation has been seen in many recent conflicts: the fake 2022 Zelensky surrender video (an attempt to demoralise the Ukrainian population), the fake Pentagon explosion image that briefly moved US stock markets (2023), and Iranian and Russian bot networks producing AI-generated battlefield imagery in Syria, Ukraine and the Israel–Hamas conflict.

Algorithmic amplification prioritises emotionally charged content, and the lack of pre-publication verification or friction enables mass spread before detection teams can respond. Platform responses (labelling, demotion, detection tools) remain inconsistent and reactive, with Twitter’s policies increasingly allowing misinformation to spread without proper mitigation. This recalls Baudrillard’s notions of simulacra and hyperreality: we can no longer tell truth from imitation, and the algorithms we interact with increasingly show us more illusion than reality.

In terms of policy, the EU AI Act imposes transparency requirements on synthetic media – but deepfake producers can evade regulation simply by operating outside Europe. Properly preventing misinformation would require more invasive regulation, which inevitably raises questions of freedom of expression and freedom from censorship. Rapid innovation outpaces regulatory frameworks, and there is no globally enforced agreement on watermarking or provenance tracking.

Consequences for Democratic Processes and Public Discourse

AI misinformation degrades democratic deliberation by eliminating shared reality, reducing confidence in political processes, and making corrective institutions appear partisan or unreliable. Authentic video evidence is dismissed as ‘fake’ (allowing political actors to evade accountability), while genuinely fake videos are accepted as evidence in the public domain. In Netanyahu’s case, political opponents and conspiracy communities questioned genuine proof-of-life clips by pointing to minor visual details. Fabricated scandals are also often timed for maximum political disruption, for example during elections. Voter confusion and cynicism can suppress turnout, giving fringe views more electoral sway.

When individuals cannot trust any information, they disengage: the notion that nothing can be truly trusted becomes the default stance. In today’s highly polarised political environments, the median voter opts out of the information battlefield, while the more radical accept only material (real or fake) that aligns with their views. Diaspora communities and highly online demographics – emboldened by identity politics and culture wars – then amplify these competing narratives at scale, and international audiences receive swathes of contradictory ‘evidence’, making the political sphere increasingly unintelligible. Democratic alliances require shared intelligence assessments, and misinformation disrupts this coordination of international responses.

How Repeated Exposure Weakens Confidence in Mainstream Media

Misinformation that persists over time does cumulative damage: repeated false claims build a meta-narrative that nothing in the media is trustworthy. Aside from causing disengagement, this can push citizens towards fringe sources. The algorithms from which most of us now get our news emphasise volume, speed and repetition, not internal consistency. The Netanyahu rumours showed how dozens of minor posts create an overall impression of general uncertainty.

Repetition increases the believability of any information, regardless of its source, owing to frequency and confirmation biases. Individuals also tend to accept information that fits their identity or political ideology, so perceptions of the real world become increasingly divided along ideological lines, and selective exposure reinforces pre-existing distrust. Corrections can even entrench the original false beliefs among strongly committed partisans. Historical failures – WMD reporting on Iraq, missteps during Covid-19 – are used rhetorically to justify rejecting mainstream journalism altogether.

Audiences increasingly rely on sources outside the mainstream, turning to influencers, Telegram channels, Discord servers and niche outlets. Verification norms differ drastically across platforms, so one outlet will assert a story’s truth while another vehemently denies it. These knowledge silos are hard to dismantle: untruth piles on top of untruth, and the web thickens.

Conclusion

The Netanyahu misinformation episode demonstrates the interplay of political destabilisation, AI-powered propaganda infrastructure, eroding democratic deliberation and long-term media distrust. These are not isolated phenomena but elements of a cumulative informational crisis. Netanyahu’s rumoured death encapsulates the broader epistemic breakdown: genuine evidence is distrusted, fabricated evidence is widely believed, and geopolitical tensions are intensified.

Looking to the future, broad responses are needed to arrest this process. Media literacy education must focus on deepfake awareness and verification practices. Platforms must be held accountable for what they host, perhaps through mandatory provenance tools or more robust detection pipelines. International regulatory coordination must also improve, with common standards for synthetic media disclosure. Journalism, too, has a duty to act, showing its methods, sources and verification to blunt the virality of misinformation.

Overall, democracy depends – at least in part – on the presence of widely accepted facts. When AI-driven misinformation erodes trust in shared facts and shared reality, it erodes not only media credibility but also the public’s ability to coexist and govern themselves effectively.

Lexy Reid
Studying Politics and International Relations at UCL, and hoping to complete a master’s in political literature. My interests lie in development studies and neo-colonialism.