Intelligence

Conspiracy Theories, Fake News and Disinformation: Why There’s So Much of It and What We Can Do About it

In March 2019, under the aegis of the United States Department of State, a group of researchers released a report called “Weapons of Mass Distraction: Foreign State-Sponsored Disinformation in the Digital Age.” The report focuses mostly on foreign states’ propaganda, disinformation and fake news. With the upcoming US elections in mind, it offers practical recommendations for policymakers and stakeholders.

The report begins with a horrific story broadcast on the Russian state-owned “Channel One” in 2014. The story claimed that Ukrainian soldiers had crucified a child in front of its mother’s eyes. The story was later proved to be fake: there was neither a killed child nor a shocked mother. Still, it went viral, reaching a much broader audience on social media than it did on television.

The authors refer to that story as “an example of Kremlin-backed disinformation campaign” and go on to state that “in subsequent years, similar tactics would again be unleashed by the Kremlin on other foreign adversaries, including the United States during the lead-up to the 2016 presidential election.”

Undoubtedly, the fake story did a lot of damage to the reputation of Channel One and other state-funded media. It is clear why the authors begin with that story: it was poorly done, obviously faked and quickly exposed. Yet it showed how effective and powerful social media can be, despite all of the reputational risks. The report also highlights an important point, namely that “the use of modern-day disinformation does not start and end with Russia. A growing number of states, in the pursuit of geopolitical ends, are leveraging digital tools and social media networks to spread narratives, distortions, and falsehoods to shape public perceptions and undermine trust in the truth.” Much research on propaganda and fake news treats Russia as solely responsible for disinformation. This report, on the other hand, addresses propaganda and disinformation as a comprehensive problem.

In the introduction, the authors claim that the disinformation problem rests on two major factors: the impact of the technology giants and the psychology of how people consume information on the Internet. Technology giants have transformed how disinformation and propaganda spread, and the proliferation of social media platforms has made the information ecosystem vulnerable to foreign, state-sponsored actors. “The intent [of bad foreign actors] is to manipulate popular opinion to sway policy or inhibit action by creating division and blurring the truth among the target population.”

Another important aspect of disinformation highlighted in the report is the abuse of fundamental human biases and behaviour. The report states that “people are not rational consumers of information. They seek swift, reassuring answers and messages that give them a sense of identity and belonging.” The claim is supported by research showing that, on average, a false story reaches 1,500 people six times more quickly than a factual account. Indeed, conspiracy stories have become commonplace, and they have spread even more widely during the current pandemic: 5G towers, Bill Gates and “evil Chinese scientists” who supposedly invented the coronavirus have all become scapegoats, and many more paranoid conspiracy stories circulate on the Internet.

What is the solution? The authors do not blame any single country, the tech giants or human behaviour. On the contrary, they suggest that the solution must be comprehensive: “the problem of disinformation is therefore not one that can be solved through any single solution, whether psychological or technological. An effective response to this challenge requires understanding the converging factors of technology, media, and human behaviours.”

Define the Problem First

What is the difference between fake news and disinformation? How does disinformation differ from misinformation? It is rather rare for a report to dedicate a whole chapter to terminology, and “Weapons of Mass Distraction” definitely provides readers with a solid theoretical background. The authors admit that there are many definitions and that it is difficult to ascribe exact parameters to disinformation. However, the report states that “misinformation is generally understood as the inadvertent sharing of false information that is not intended to cause harm, just as disinformation is widely defined as the purposeful dissemination of false information.”

Psychological Factors

As mentioned at the beginning, the authors do not attach labels or focus on only one side of the problem. A considerable part of the report is dedicated to the psychological factors behind disinformation. That section helps readers understand the behavioural patterns of how humans consume information, why it is easy to fall for a conspiracy theory, and how this knowledge can be used to prevent the spread of disinformation.

The findings are surprising. Several cognitive biases make it easy for disinformation to flourish, and the bad news is that there is little we can do about them.

First of all, confirmation bias and selective exposure lead people to prefer information that confirms their preexisting beliefs and make such information more persuasive. Moreover, confirmation bias and selective exposure work together with another bias, naïve realism, which “leads individuals to believe that their perception of reality is the only accurate view and that those who disagree are simply uninformed or irrational.”

In practice, these cognitive biases are widely exploited by the tech giants. That does not mean there is a conspiracy behind it; it means that it is easy for big tech companies to sell their products using so-called “filter bubbles.” Such a bubble is an algorithm that selectively guesses what information a user would like to see based on data about the user, such as location, past click behaviour and search history. Filter bubbles work particularly well on websites like YouTube. A Wall Street Journal investigation found that YouTube’s recommendations often lead users to channels that feature conspiracy theories, partisan viewpoints and misleading videos, even when those users haven’t shown interest in such content.
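To make the mechanism concrete, here is a minimal Python sketch of a filter-bubble-style recommender. It is a hypothetical illustration, not any platform’s actual algorithm: it simply ranks candidate items by how closely their topics match what the user has already clicked, so content that confirms existing interests keeps resurfacing.

```python
from collections import Counter

def build_interest_profile(click_history):
    """Count how often each topic appears in the user's past clicks."""
    return Counter(topic for item in click_history for topic in item["topics"])

def rank_for_user(candidates, click_history):
    """Order candidate items so that those matching past interests come first.

    This is the essence of a "filter bubble": nothing here measures accuracy
    or viewpoint diversity, only similarity to what the user already engaged with.
    """
    profile = build_interest_profile(click_history)

    def score(item):
        return sum(profile.get(topic, 0) for topic in item["topics"])

    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    # Hypothetical click history and candidate pool, for illustration only.
    history = [
        {"title": "Vaccine conspiracy exposed?", "topics": ["conspiracy", "health"]},
        {"title": "5G and secret experiments", "topics": ["conspiracy", "tech"]},
    ]
    candidates = [
        {"title": "Peer-reviewed study on vaccine safety", "topics": ["health", "science"]},
        {"title": "New 5G cover-up claims", "topics": ["conspiracy", "tech"]},
        {"title": "Local election results", "topics": ["politics"]},
    ]
    for item in rank_for_user(candidates, history):
        print(item["title"])
```

Run on this toy data, the sketch pushes the conspiracy-flavoured item to the top simply because it overlaps most with past clicks, which is exactly the feedback loop the report describes.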

These days, the most popular way to counter misinformation is fact-checking and debunking false information. In the report, the researchers present evidence that the methods we are used to employing may not be that effective: “Their analysis determined that users are more active in sharing unverified rumours than they are in later sharing that these rumours were either debunked or verified. The veracity of information, therefore, appears to matter little. A related study found that even after individuals were informed that a story had been misrepresented, more than a third still shared the story.”

Another research finding is that “participants who perceived the media and the word “news” negatively were less likely than others to identify a fake headline and less able to distinguish news from opinion or advertising.” There is an obvious reason for that: a lack of trust. The latest research shows that the public has low trust in journalists as a source of information about the coronavirus. Additionally, according to the American Press Institute, only 43 per cent of people said they could easily distinguish factual news from opinion in online-only news or social media. Thus, the majority of people can hardly distinguish news from opinion at a time when trust in journalism is at a historic low. It is therefore no surprise that people perceive the news so negatively.

This has implications for news validation, which, the report states, can differ from country to country: “Tagging social media posts as “verified” may work well in environments where trust in news media is relatively high (such as Spain or Germany), but this approach may be counterproductive in countries where trust in news media is much lower (like Greece).”

A broad body of research also reveals the following essential findings. First, increasing online communities’ exposure to different viewpoints is rather counterproductive: the research presented in the report found that conservatives become more conservative and liberals become more liberal.

Second, the phenomenon called belief perseverance, which is the inability of people to change their minds even after being shown new information, means that facts can matter little in the face of strong social and emotional dynamics.

Third, developing critical thinking skills and increasing media literacy may also be counterproductive or of minimal use. Research shows that “many consumers of disinformation already perceive themselves as critical thinkers who are challenging the status quo.” Moreover, even debunking false messages may not be that effective: showing corrective information did not always reduce participants’ belief in misinformation. Besides, when “consumers of fake news were presented with a fact-check, they almost never read it.”

What can be done? The authors provide the reader with a roadmap for countering misleading information, although, according to the report, the roadmap, which is also grounded in research, may have very limited use.

The main idea is to be proactive. While debunking false messages, developing critical thinking and other such tools have minimal potential, some psychological interventions can help build resilience against disinformation. The authors compare disinformation and misinformation to a disease and propose a kind of vaccine that builds resilience to the virus. This strategy means that people should be warned “that they may be exposed to information that challenges their beliefs, before presenting a weakened example of the (mis)information and refuting it.”

Another aspect of the roadmap is showing different perspectives, “which allows people to understand and overcome the cognitive biases that may render them adversarial toward opposing ideas.” According to the authors, this approach should focus less on the content of one’s thoughts and more on their structure. The fact that certain factors can make humans susceptible to disinformation can also be used as part of the solution.

What About the Tech Giants?

The authors acknowledge that social media platforms should play a central role in neutralizing online disinformation. Although the tech giants have demonstrated a willingness to address disinformation, their incentives do not always prioritize limiting it. On the contrary, their business models align their incentives with spreading more of it. “Users are more likely to click on or share sensational and inaccurate content; increasing clicks and shares translates into greater advertising revenue. The short-term incentives, therefore, are for the platforms to increase, rather than decrease, the amount of disinformation their users see.”

The technological section of the report is split into three parts dedicated to three tech companies — Facebook, Twitter and Google. While the report focuses on what companies have already done to counter disinformation, we will highlight only the recommendations and challenges that still remain.

Despite all the measures Facebook has implemented in recent years, the platform still remains vulnerable to disinformation. The main vulnerability lies in its messaging apps: WhatsApp was a major source of disinformation during the Rohingya crisis in 2018 and during the Brazilian presidential elections that same year. The second vulnerability lies in third-party fact-checking services staffed by human operators, who struggle to handle the volume of content: “fake news can easily go viral in the time between its creation and when fact-checkers are able to manually dispute the content and adjust its news feed ranking.”

Despite its vulnerabilities, including a colossal bot network, Twitter has become more effective at countering the threat by using technologies such as AI. The question of how proactive the company will be in countering the threat remains open. Still, according to the report, Twitter now follows best practices.

With its video-sharing platform YouTube and its advertising network, Google might be the most vulnerable of the three. YouTube, with its personalized recommendation algorithm (filter bubbles), has faced strong criticism for reinforcing viewers’ belief that a conspiracy is, in fact, real. However, YouTube announced in 2019 that it would adjust its algorithms to reduce recommendations of misleading content.

However, it is not just the tech giants who should take responsibility for disinformation. According to the report, it is countries that should bear the ultimate responsibility for “defending their nations against this kind of disinformation.” Yet, since the platforms remain in private hands, what can governments do?

For example, they could play a more significant role in regulating social media companies. According to the report, this does not mean total control of those companies. However, the authors admit that such a solution carries risks of restricting freedom of speech and enabling outright censorship, and there is no easy and straightforward way to resolve this tension.

What can we do about it? According to the report, technology will change, but the problem will not be solved within the next decade; we will have to learn how to live with disinformation. At the same time, public policies should focus on mitigating its disastrous consequences while maintaining civil liberties, freedom of expression and privacy.

The report provides readers with quite a balanced approach to the problem. While other research projects attach labels to countries or technologies, the authors of “Weapons of Mass Distraction” admit the solution will not be easy. It is a complex problem that will require a complex solution.

From our partner RIAC

Intelligence

An Underdeveloped Discipline: Open-Source Intelligence and How It Can Better Assist the U.S. Intelligence Community

Open-Source Intelligence (OSINT) is defined by noted intelligence specialists Mark Lowenthal and Robert M. Clark as being, “information that is publicly available to anyone through legal means, including request, observation, or purchase, that is subsequently acquired, vetted, and analyzed in order to fulfill an intelligence requirement”. The U.S. Naval War College further defines OSINT as coming from, “print or electronic form including radio, television, newspapers, journals, the internet, and videos, graphics, and drawings”. Basically, OSINT is the collection of information from a variety of public sources, including social media profiles and accounts, television broadcasts, and internet searches.

Historically, OSINT has been utilized by the U.S. since the 1940s, when the United States created the Foreign Broadcast Information Service (FBIS), which until the 1990s had the sole goal of “primarily monitoring and translating foreign-press sources” and which contributed significantly during the dissolution of the Soviet Union. It was also during this time that the FBIS transformed itself from a purely translation-and-interpretation agency into one that could adequately utilize the advances made by “personal computing, large-capacity digital storage, capable search engines, and broadband communication networks”. In 2005, the FBIS was placed under the Office of the Director of National Intelligence (ODNI) and renamed the Open Source Center, with control being given to the CIA.

OSINT complements the other intelligence disciplines very well. Because OSINT draws on public data (as opposed to information gleaned from interrogations, interviews with defectors or captured enemies, or from clandestine wiretaps and electronic intrusions), it allows policymakers and intelligence analysts to see the wider picture behind the information gleaned. In Lowenthal’s own book, he mentions how policymakers (including the Assistant Secretary of Defense and one of the former Directors of National Intelligence (DNI)) enjoyed looking at OSINT first and using it as a “starting point… [to fill] the outer edges of the jigsaw puzzle”.

Given the 21st century and the public’s increased reliance upon technology, there are also times when information can only be gleaned through open-source intelligence methods. Because “Terrorist movements rely essentially on the use of open sources… to recruit and provide virtual training and conduct their operations using encryption techniques… OSINT can be valuable [in] providing fast coordination among officials at all levels without clearances”. With such information, intelligence agencies could outright avoid an attack or, at a minimum, prepare a defense or place forces and units on high alert for an imminent one.

In a King’s College London research paper discussing OSINT’s potential for the 21st century, the author notes, “OSINT sharing among intelligence services, non-government organizations and international organizations could shape timely and comprehensive responses [to international crises or regime changes in rogue states like Darfur or Burma],” as well as providing further information on a country’s new government or personnel in power. This was exemplified best during the rise of Kim Jong-Un in North Korea, during the 2011 Arab Spring, and after the 2010 earthquake that rocked Haiti. However, this does not mean that OSINT is superior to other disciplines such as SIGINT and HUMINT, as it is subject to limitations as well. According to the Federation of American Scientists, “Open source intelligence does have limitations. Often articles in military or scientific journals represent a theoretical or desired capability rather than an actual capability. Censorship may also limit the publication of key data needed to arrive at a full understanding of an adversary’s actions, or the press may be used as part of a conscious deception effort”.

There is also a limit to the effectiveness of OSINT within the U.S. Intelligence Community (IC), not because it is technically limited, but because of the IC’s reluctance to see OSINT as a full-fledged discipline. Robert Ashley and Neil Wiley, the former Director of the Defense Intelligence Agency (DIA) and a former Principal Executive within the ODNI respectively, covered this in a July article for DefenseOne, stating “…the production of OSINT is not regarded as a unique intelligence discipline but as research incident to all-source analysis or as a media production service… OSINT, on the other hand, remains a distributed activity that functions more like a collection of cottage industries. While OSINT has pockets of excellence, intelligence community OSINT production is largely initiative based, minimally integrated, and has little in the way of common guidance, standards, and tradecraft… The intelligence community must make OSINT a true intelligence discipline on par with the traditional functional disciplines, replete with leadership and authority that enables the OSINT enterprise to govern itself and establish a brand that instills faith and trust in open source information”. This apprehensiveness within the IC towards OSINT capabilities has been well documented by other journalists.

Some contributors, including one writing for The Hill, have commented that “the use of artificial intelligence and rapid data analytics can mitigate these risks by tipping expert analysts on changes in key information, enabling the rapid identification of apparent “outliers” and pattern anomalies. Such human-machine teaming exploits the strengths of both and offers a path to understanding and even protocols for how trusted open-source intelligence can be created by employing traditional tradecraft of verifying and validating sourcing prior to making the intelligence insights available for broad consumption”. Many knowledgeable and experienced people within the Intelligence Community, whether from the uniformed intelligence services or the civilian foreign intelligence agencies, recognize the need for better OSINT capabilities overall and have suggested ways in which potential security risks or flaws can be avoided in making this discipline an even more effective piece of the intelligence-gathering framework.
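As a rough illustration of the human-machine teaming described above, the short Python sketch below (a hypothetical example under simple assumptions, not any agency’s actual tooling) flags statistically unusual spikes in a daily count of open-source mentions, so that an analyst is “tipped” to review only the outlier days rather than the whole stream.

```python
from statistics import mean, stdev

def flag_outlier_days(daily_counts, threshold=3.0):
    """Return indices of days whose mention count deviates sharply from the mean.

    A z-score above `threshold` marks the day as an anomaly worth an analyst's
    attention; routine days are left to automated collection and archiving.
    """
    mu = mean(daily_counts)
    sigma = stdev(daily_counts)
    if sigma == 0:
        return []  # no variation at all, nothing to flag
    return [
        day for day, count in enumerate(daily_counts)
        if abs(count - mu) / sigma > threshold
    ]

if __name__ == "__main__":
    # Hypothetical daily counts of open-source mentions of a topic of interest.
    mentions = [12, 15, 11, 14, 13, 12, 95, 14, 13, 12]
    print("Days flagged for analyst review:", flag_outlier_days(mentions, threshold=2.5))
```

The point of the sketch is the division of labour: the machine narrows a large stream of data down to a handful of anomalies, while verification, sourcing and interpretation remain with the human analyst, as the contributors quoted above suggest.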

OSINT is incredibly useful for gathering information that cannot always be obtained through more traditional espionage methods (e.g., HUMINT, SIGINT). The discipline allows information on previously unknown players or new and developing events to become known, lets policymakers be briefed more competently on a topic, and gives analysts and operators a preliminary understanding of a region, its culture, its politics, and the current nature of a developing or changing state. However, the greatest hurdle to making better use of OSINT is changing the culture and the way the discipline is currently seen by the U.S. Intelligence Community. This remains the biggest obstacle to effectively coordinating and utilizing the discipline within the various national security organizations.

Intelligence

Online Radicalization in India

Radicalization is a gradual process of developing extremist beliefs, emotions, and behaviours at the individual, group or mass-public level. Besides being driven by varied groups, it enjoys covert and even overt patronage from some states. To elicit change in the target group’s behaviour, beliefs, ideology, and willingness, even the use of violent means is justified. Despite recording a decline in terror casualties, the 2019 edition of the Global Terrorism Index notes an increase in the number of terrorism-affected countries. With the internet assuming a pivotal role in simplifying and revolutionizing communication, the change in people’s lives is evident. Notably, of the 84% of the EU’s population that uses the internet daily, 81% access it from home (Eurostat, 2012; RAND paper, p. xi). This signifies important changes in society, and since extremist elements are an integral part of that society, the internet’s role as a tool of radicalization cannot be gainsaid. With physical and geographical barriers removed, radicalized groups are using advances in digital technology to propagate their ideologies, solicit funding, collect information, plan and coordinate terror attacks, establish inter- and intra-group communication networks, recruit, train, and run media propaganda to attain global attention.

Indian Context

In recent times, India has witnessed exponential growth in radicalization-linked incidents, which apparently belies the official figure of approximately 80-100 cases. The radicalization threat to India comes not only from homegrown groups but also from cross-border groups in Pakistan and Afghanistan as well as global groups like IS. Significantly, Indian radicalized groups are exploiting domestic grievances, and their success can to a large extent be attributed to support from the Pakistani state and jihadist groups in Pakistan and Bangladesh. The Gulf employment boom has also facilitated the radicalization, including online radicalization, of Indian Muslims. A close look at the modus operandi of recent attacks reveals the involvement of local or ‘homegrown’ terrorists. In 2016, AQIS formed ‘Ansar Ghazwat-ul-Hind’ in Kashmir with a media wing, ‘al-Hurr’.

IS announced its foray into Kashmir in 2016 as part of its Khorasan branch. In December 2017, IS used the hashtag ‘Wilayat Kashmir’ on its Telegram channel, through which Kashmiri militants declared their allegiance to IS. IS’ online English magazine ‘Dabiq’ (Jan. 2016) claimed that fighters were being trained in Bangladesh and Pakistan for attacks into India from the western and eastern borders. Though cases of ISIS influence in India remain isolated, the trend is on the rise. Presently, ISIS and its offshoots are using online methods to spread their bases across 12 Indian states. Apart from southern states like Telangana, Kerala, Andhra Pradesh, Karnataka, and Tamil Nadu, where the Iraq- and Syria-based terrorist outfit penetrated years ago, investigating agencies have found its links in states like Maharashtra, West Bengal, Rajasthan, Bihar, Uttar Pradesh, Madhya Pradesh, and Jammu and Kashmir as well. The Sunni jihadist group is now “most active” in these states.

Underestimating the Indian Threat

Significantly, downplaying the radicalization issue, a section of the intelligentsia, citing the small number of Indian Muslims who have joined al-Qaeda and the Taliban in Afghanistan or Islamic State (IS) in Iraq, Syria and the Middle East, argues that the Indian Muslim community does not support radicalism-linked violence, unlike communities in regional Muslim countries such as Pakistan, Afghanistan, Bangladesh and the Maldives. They underscore the negligible number of Indian Muslims outside J&K who support separatist movements. Additionally, al-Qaeda and IS, which follow the ‘Salafi-Wahhabi’ ideological movement, vehemently oppose the ‘Hanafi school’ of Sunni Islam followed by most Indian Muslims. Moreover, even those Indian Muslims who follow the Sunni Ahl-e-Hadith (the broader ideology from which the Salafi-Wahhabi movement emanates) practise a moderate version. This doctrinal difference has led to the failure of the Wahhabi groups’ online propaganda.

Radicalisation Strategies/Methods: Indian vs Global Players

India is already confronting online radicalization by global jihadist organisations, including al-Qaeda in the Indian Subcontinent (AQIS), formed in September 2014, and Islamic State (IS). However, several indigenous and regional groups such as the Indian Mujahideen (IM), JeM, LeT and the Taliban, along with online vernacular publications, including Pakistan’s Urdu newspaper ‘Al-Qalam’, also play a role in online radicalisation.

Indian jihadist groups use a variety of social media apps, each best suited to their goals. Separatists and extremists in Kashmir, for coordination and communication, simply create WhatsApp groups and communicate the date, time and place for carrying out mass protests or stone pelting. Pakistan-based terror groups, rather than relying on online religious instruction, consider it mandatory that a radical follow a revered religious cleric; they select recruits manually to verify their backgrounds rather than through online correspondence, and only after induction do they communicate with a recruit online. IS, by contrast, in the backdrop of its recent defeats and unlike the Kashmiri separatist groups and Pakistan-based jihadist mercenaries, runs its global movement entirely online through magazines and pamphlets. Al-Qaeda’s YouTube channels ‘Ansar AQIS’ and ‘Al Firdaws’, which once had over 25,000 subscribers, are now banned. Its online magazine ‘Nawai Afghan’ and its statements are issued in Urdu, English, Arabic, Bangla and Tamil. Its blocked Twitter accounts, ‘Ansarul Islam’ and ‘Abna_ul_Islam_media’, had a following of over 1,300, while its Telegram accounts are believed to have over 500 members.

Adoption of Online Platforms and Technology

Initially, the Kashmir-based ‘Jaish-e-Mohammad’ (JeM) distributed audio cassettes of Masood Azhar’s speeches across India, but it moved to internet platforms in 2003-04 and started circulating downloadable material through anonymous links and emails. Subsequently, it started its weekly e-newspaper, Al-Qalam, followed by a chat group on Yahoo. Importantly, following increased international pressure on the Pakistani government after 26/11 to act against terrorist groups, JeM gradually shifted from mainstream online platforms to social media sites, blogs and forums.

The Indian Mujahideen’s splinter group ‘Ansar-ul-Tawhid’, the first terror group officially affiliated with ISIS, tried to maintain a presence on Skype, WeChat and JustPaste. IS and its affiliates have emerged as the most tech-savvy of the jihadist groups, taking several measures to generate new accounts after repeated suspensions by platforms and governments. An account called ‘Baqiya Shoutout’ was one such measure: rather than simply opening a new account, it stressed re-establishing the network of followers through a ‘reverse shout-out’.

Pakistan-backed terrorist groups in India are becoming increasingly technology-savvy. For instance, before carrying out the 2008 terrorist attacks in Mumbai, LeT used Google Earth to study the targeted locations.

IS members have followed strict security measures, such as switching off Global Positioning System (GPS) location services and using virtual private networks (VPNs), to maintain anonymity. Earlier, they would download Hola VPN or a similar programme on a mobile device or web browser to select an Internet Protocol (IP) address in a country outside the US and bypass email or phone verification.

Rise of radicalization in southern India

The southern states of India have witnessed a rise in radicalization activity over the past one to two years. A substantial share of the Indian diaspora in the Gulf countries comes from Kerala and Tamil Nadu. Several Indian Muslims in the Gulf have fallen prey to radicalization through exposure to ultra-conservative forms of Islam, or their remittances have been misused to spread radical ideas. One Shafi Armar alias Yusuf-al-Hindi, from Karnataka, emerged as the main online IS recruiter for India. The trend is evident in the number of raids and arrests made in the region, particularly after the Easter bomb attacks (21 April 2019) in Sri Lanka. The perpetrators were suspected to have been indoctrinated, radicalised and trained in Tamil Nadu. Further investigation revealed that the mastermind of the attacks, Zahran Hashim, had travelled to India and maintained virtual links with radicalised youth in South India. Importantly, IS, while claiming responsibility for the attacks, issued statements not only in English and Arabic but also in South Indian languages, namely Malayalam and Tamil. This demonstrated the presence of individuals fluent in South Indian languages within IS-linked groups in the region. Similarly, AQIS’ South Indian affiliate, the ‘Base Movement’, issued several threatening letters to media publications for allegedly insulting Islam.

IS is trying to recruit people in rural India by circulating online material in vernacular languages. It distributes material in numerous languages, including Malayalam and Tamil, which al-Qaeda had previously ignored in favour of Urdu. In their propaganda, IS-linked Keralite followers cite radical pro-Hindutva organisations such as the Rashtriya Swayamsevak Sangh (RSS) and other right-wing Hindu organisations to motivate youth to join IS. Similarly, anti-Muslim incidents such as the demolition of the Babri Masjid in 1992 are still used to fuel their propaganda, and IS sympathisers also invoke opposition to Hindu deities to gather support.

Radicalization: Similarities and Distinctions in North and South

Despite some similarities, the radicalisation process in J&K differs somewhat from that in the states of Kerala, Karnataka, Tamil Nadu, Andhra Pradesh, Maharashtra, Telangana and Gujarat. Both regions have witnessed a planned radicalization process conducted through the internet and social media to propagate extremist ideologies and subvert vulnerable youth. Both have faced the hard-line Salafi/Wahhabi ideology propagated by extremist Islamic clerics and madrasas engaged in manipulating the religion of Islam. In this context it can be claimed that terror activities in India draw on the cooperation of elements from both regions, despite their distinct means and objectives. Elements from both regions to an extent sympathise with the cause of bringing India under Sharia law, so the possibility of cooperation between such elements, particularly in the facilitation of logistics, ammunition and other requisite equipment, cannot be ruled out.

It is pertinent to note that while radicalisation in Jammu and Kashmir is directly linked to the proxy war sponsored by the Pakistani state, the growth of radicalisation in West and South India owes its roots to the spread of IS ideology, the promotion of Sharia rule and the establishment of a Caliphate. Precisely for this reason, while radicalised local Kashmiris unite to join Pakistan-backed terror groups to fight for ‘Azadi’ or other fabricated local causes, radicalised locals in the south remain largely isolated cases.

Impact of Radicalisation

The impact of global jihad on radicalization is quite visible in West and South India. The majority of radicalised people arrested in West and South India were in fact preparing to join IS in Syria and Iraq. They included a group of 22 people from a Kerala family who travelled to Afghanistan via Iran in June 2016. Their obvious motivation was to migrate from Dar-ul-Harb (house of war) to Dar-ul-Islam (house of peace/Islam).

Comparing the ground impact of radicalization in terms of the number of local militants in J&K and of IS sympathisers in West and South India, it becomes clear that radicalisation has spread further in J&K, owing to Pakistan-sponsored logistical and financial support. Significantly, despite India hosting the world’s third largest Muslim population, the number of Indian sympathisers of terror outfits, particularly in West and South India, is very small compared to Western countries. The main reasons for this include religious and cultural pluralism; a tradition of moderate Islamic belief systems; progressive educational and economic standards; and equal socio-economic and political safeguards for Indian Muslims in the Indian Constitution.

Challenges Ahead

Apart from varied challenges, including Pakistan-sponsored anti-India activities and regional, local and political pressures, the media wings of global jihadi outfits continue to pose further challenges to Indian security agencies. While IS, through its media wing ‘Al Isabah’, has been circulating Abu Bakr al-Baghdadi’s speeches and videos on social media sites after translating them into Urdu, Hindi and Tamil for Indian youth (Rajkumar 2015), AQIS has been using its own media wing for the same purpose through its offshoots in India. Some of the challenges include the following:

Islam/Cleric Factor – Clerics continue to play a crucial role in influencing the minds of Muslim youth by exploiting the religion of Islam. A majority of the 127 IS sympathizers recently arrested across India revealed that they had been following the speeches of the controversial Indian preacher Zakir Naik of the Islamic Research Foundation (IRF). Naik has taken refuge in Malaysia because of warrants issued against him by the National Investigation Agency (NIA) for alleged money laundering and for inciting extremism through hate speech. A perpetrator of the Dhaka bomb blasts of July 2016, which killed several people, confessed that he was influenced by Naik’s messages. Earlier, the IRF had organised ‘peace conferences’ in Mumbai between 2007 and 2011 at which Naik attempted to convert people and incite terrorist acts. Thus, clerics and preachers who subvert Muslim minds towards extremism remain a challenge for India.

Propaganda Machinery – Uploading photographs of young militants flaunting Kalashnikov rifles became a popular means of declaring youth intent against government forces. Their “us versus them” narrative is clearly communicated, creating a groundswell of support for terrorism. In the second edition (March 2020) of its propaganda magazine ‘Sawt al-Hind’ (Voice of Hind/India), IS, citing an old propaganda message from Abu Hamza al-Kashmiri alias Abdul Rehman, a Kashmiri IS terrorist who died in 2018, called upon Taliban fighters, whom it branded apostates, to defect to IS. In the first edition (Feb. 2020), the magazine eulogized Huzaifa al-Bakistani (killed in 2019) and asked Indian Muslims to rally to IS in the name of Islam in the aftermath of the 2020 Delhi riots. Meanwhile, a Muslim couple arrested by Delhi Police for inciting anti-CAA (Citizenship Amendment Act) protests was found to be very active on social media, calling on Indian Muslims to unite against the Indian government over the CAA legislation. During the 2017 Kashmir unrest, the National Investigation Agency (NIA) identified 79 WhatsApp groups (with administrators based in Pakistan), comprising 6,386 phone numbers, used to crowdsource boys for stone pelting. Of these, around 1,000 numbers were found to be active in Pakistan and the Gulf nations, while the remaining 5,386 were active in the Kashmir Valley.

Deep fakes/Fake news – Another challenge for India is the spread of misinformation and disinformation through deepfakes originating in Pakistan. Deepfakes manipulating the speeches of local political leaders have been used to a large extent to spread hatred among youth and society.

India’s Counter Measures

To prevent youth from straying towards extremism, India’s Ministry of Home Affairs has established a Counter-Terrorism and Counter-Radicalisation Division (CT-CR) to help states, security agencies and communities.

Various states, including Kerala, Maharashtra and Telangana, have set up their own de-radicalisation programmes. While in Maharashtra family and community play an important role, in Kerala clerics counter the poisoned minds of youth with a new narrative. The Indian armed forces employ a holistic community-outreach programme covering healthcare, engagement with clergy and financial stability. The Kerala state police’s ‘Operation Pigeon’ succeeded, through social media monitoring, in thwarting the radicalization of 350 youths exposed to the propaganda of organizations such as Islamic State, the Indian Mujahideen (IM) and Lashkar-e-Taiba (LeT). In Telangana, local officers such as Rema Rajeshwari have developed outreach programmes to fight the menace of fake news in around 400 villages of the state.

In Kashmir, the government resorts to internet curfews to control e-jihad. While the state-owned BSNL network, used by the administration and security forces, remains operational, 3G and 4G networks and social media apps are suspended during these curfews.

Prognosis

India certainly needs a strong national counter-radicalisation policy that factors in a broader range of drivers than jobs, poverty or education, because radicalization has in fact affected even well-educated, rich and prosperous families. Rather than focusing only on IS returnees from abroad, the policy must also address those who never travelled abroad but remain a potential threat due to their vulnerability to radicalization.

Of course, India would be better served if deepfakes, fake news and online propaganda were effectively countered digitally as well as through social-awareness measures and on-the-ground action by government agencies. It is imperative that the major stakeholders, i.e. the government, educational institutions, civil society organisations, the media and intellectuals, play a proactive role in pushing a counter-narrative among youth and society. The focus should be on prevention rather than merely trying to control the radicalisation narrative of vested interests.

Intelligence

Is Deterrence in Cyberspace Possible?

In 1996, soon after the Internet opened to the public, only some 16 million people, a small fraction of the world’s population, were connected. Gradually the Internet grew, and with more users it contributed an estimated $4 trillion to the global economy in 2016 (Nye, 2016). Today, high-speed Internet, cutting-edge technologies and gadgets, and increasing cross-border Internet data traffic are considered elements of globalization. Deterrence may seem a traditional, even obsolete strategy, but developed countries rely on the cyberspace domain to keep pace with global digitization. No matter how advanced they are, vulnerabilities remain; these are modern problems for a modern world. Such reliance on the Internet also threatens to inflame the dynamics of international insecurity. To explore the topic, one must first understand what cyberspace and deterrence are. According to the Oxford dictionary:

“Cyberspace is the internet considered as an imaginary space without a physical location in which communication over computer networks takes place” (Oxford University Press).

For readers to understand the term ‘deterrence’, the Collins dictionary explains it best:

“Deterrence is the prevention of something, especially war or crime, by having something such as weapons or punishment to use as a threat, e.g. nuclear weapons” (Deterrence Definition and Meaning | Collins English Dictionary).

The purpose of citing these definitions is to make it easier to discern and distinguish between deterrence in International Relations (IR) and in International Cyber Security (ICS). Deterrence in cyberspace is different from, and more difficult than, deterrence during the Cold War, when the topic was important for both politicians and academics. The context in both dimensions (IR and ICS) is similar: the aim is to prevent something from happening. Cyberspace deterrence refers to preventing crime, and I agree that deterrence is possible in cyberspace. Fischer (2019) cites Quinlan (2004) to argue that no state is undeterrable.

To begin with, cyber threats loom over different sectors, including espionage, disruption of democratic processes, sabotage of the political arena, and war, while international law remains unclear about which category these activities fall into. I would support my claim that deterrence is possible in cyberspace with the network attacks listed by the Pentagon (Fung, 2013). Millions of cyber-attacks are reported daily; the Pentagon has reported some 10 million cyberspace intrusions, most of which are disruptive, costly, and annoying. At times the severity rises to a level that is considered a threat to national security, so professional strategic assistance is needed to deal with it[1]. Past events reveal a perpetual threat with the ability to disrupt societies, economies, and government functioning.

Notable cyberspace attacks, and the deterrence picture they reveal, have been publicized as follows (Fung, 2013):

  1. Estonia’s internet services suffered continuous disruption for several weeks after a dispute with Russia in 2007.
  2. Georgian defense communications were interrupted in 2008 after the Russian invasion of Georgia.
  3. More than 1,000 centrifuges in Iran were destroyed by the Stuxnet virus in 2010; the attacks were attributed to Israel and the United States.
  4. In response to the Stuxnet attacks, Iran launched retaliatory attacks on U.S. financial institutions in 2012 and 2013.
  5. Similarly, in 2012 some 30,000 computers at the Saudi Aramco corporation were destroyed by a virus called Shamoon; Iran was held responsible for these attacks.
  6. North Korea was accused of penetrating South Korean data and machines in 2014, interrupting their networks.
  7. The hybrid war between Russia and Ukraine in 2015 included a cyberattack that left parts of Ukraine without electricity for almost six hours.
  8. In the most critical scandal, which is still in the limelight, WikiLeaks released distressing and humiliating emails obtained by Russian intelligence at the time of the 2016 U.S. presidential campaign.

While such incidents may be considered failures of deterrence, this does not mean that deterrence is impossible; every system has flaws that are exposed at some point. In most of these cases the attacks, although framed as threats to national security, were relatively minor in their actual effect on it. Nye (2016: 51) discusses the audiences for whom attribution matters if deterrence is to work: (i) intelligence agencies must provide the strongest safeguards against escalation by third parties, and governments must be able to count on their intelligence agencies’ sources; (ii) the deterring party should not be taken lightly, given the lingering loopholes and flaws in systems noted above, so governments should not disregard their intelligence; and (iii) it is ultimately a political matter whether international and domestic audiences need to be persuaded, and how much information should be disclosed.

The mechanisms that are used, and that help, against adversary actions in cyberspace are as follows (Fischer, 2019):

  1. Deterrence by denial means that the adversary’s actions are denied their intended effect: they fail to achieve their goals and objectives. It works more by blunting a cyberattack than by retaliating against it.
  2. The threat of punishment promises severe outcomes, in the form of penalties and high costs inflicted on the attacker, that would outweigh the anticipated benefits of an attack.
  3. Deterrence by entanglement works on the principle of shared, interconnected, and interdependent vulnerabilities. The purpose of entanglement is to encourage and reassure behaviour befitting a responsible state with mutual interests.
  4. Normative taboos rely on strong values and norms, whereby an aggressor’s reputation and soft image in the eyes of the international community are at stake (this mechanism also involves rational factors, since hard power is typically used against weaker states). This kind of deterrence can work even without any credible resilience.

Evidently, these mechanisms of deterrence are also effective in the cyber realm, and they provide a comprehensive picture of how deterrence in cyberspace is possible. The four mechanisms (denial, punishment, entanglement, and normative taboos) are all feasible ways to apply deterrence in the cyber world, and among the many security strategies available, cyber deterrence built on these four approaches could be a versatile option. In conclusion, as long as the world keeps advancing technologically, cyberspace intrusions will not stop, and neither will the debate about deterrence in the digital world.


[1] An updated list of cyberspace intrusions from 2003 till 2021 is available at (Center for Strategic and International Studies, 2021).
