Authors: Hannes Grassegger and Mikael Krogerus
“The Aegean theater of Ancient Greece was a place of astonishing revelations and intellectual excellence – a remarkable density and proximity of genius, not surpassed up to our age. Much of what we know about science, philosophy, sports, arts, culture and entertainment, the stars and the earth was postulated, explored and examined then and there.
Simply, it was a time and place of the triumph of human consciousness, of pure reasoning and sparkling thought. However, neither Euclid, Anaximander, Heraclitus, Hippocrates (both of Chios and of Cos), Socrates, Archimedes, Ptolemy, Democritus, Plato, Pythagoras, Diogenes, Aristotle, Empedocles, Conon, Eratosthenes nor any of the dozens of other brilliant ancient Greek minds ever referred, by so much as a single sentence, to something that was their everyday life, something they saw literally on every corner throughout their lives: the immoral, unjust, notoriously brutal and oppressive system of slavery that powered the Antique state. (Slaves were not even regarded as humans, but rather as ‘tools able to speak’.) This myopia, this absence of critical reflection on the obvious and omnipresent, is a historic message – highly disturbing, self-telling and quite a warning,” notes prof. Anis H. Bajrektarevic in his 2013 book ‘Is There Life After Facebook? – Geopolitics of Technology’.
Indeed, why do we constantly ignore the massive and sustained harvesting of our personal data – from social networks, medical records, payment cards, the internet and smartphones – as well as its commercialization and monetization for dubious ends and disturbing futures?
Professor Bajrektarevic predicts and warns: “If humans hardly ever question fetishisation of their own McFB way of life, or oppose the (self-) trivialization, why then is the subsequent brutalization a surprise to them?”
Thus, should we really be surprised by the Brexit vote, by the results of the US elections, or by the forthcoming massive wins of right-wing parties all over Europe? “Putin is behind it!” – how easy, and how misleading, a self-denial.
Here is a story based on facts – for those of us interested in really grasping this Matrix world, the Iron Cage we constructed ourselves.
On November 9 at around 8:30 a.m., Michal Kosinski woke up in the Hotel Sunnehus in Zurich. The 34-year-old researcher had come to give a lecture at the Swiss Federal Institute of Technology (ETH) about the dangers of Big Data and the digital revolution. Kosinski gives regular lectures on this topic all over the world. He is a leading expert in psychometrics, a data-driven sub-branch of psychology. When he turned on the TV that morning, he saw that the bombshell had exploded: contrary to forecasts by all leading statisticians, Donald J. Trump had been elected president of the United States.
For a long time, Kosinski watched the Trump victory celebrations and the results coming in from each state. He had a hunch that the outcome of the election might have something to do with his research. Finally, he took a deep breath and turned off the TV.
On the same day, a then little-known British company based in London sent out a press release: “We are thrilled that our revolutionary approach to data-driven communication has played such an integral part in President-elect Trump’s extraordinary win,” Alexander James Ashburner Nix was quoted as saying. Nix is British, 41 years old, and CEO of Cambridge Analytica. He is always immaculately turned out in tailor-made suits and designer glasses, with his wavy blonde hair combed back from his forehead. His company wasn’t just integral to Trump’s online campaign, but to the UK’s Brexit campaign as well.
Of these three players—reflective Kosinski, carefully groomed Nix and grinning Trump—one of them enabled the digital revolution, one of them executed it and one of them benefited from it.
How dangerous is big data?
Anyone who has not spent the last five years living on another planet will be familiar with the term Big Data. Big Data means, in essence, that everything we do, both on and offline, leaves digital traces. Every purchase we make with our cards, every search we type into Google, every movement we make when our mobile phone is in our pocket, every “like” is stored. Especially every “like.” For a long time, it was not entirely clear what use this data could have—except, perhaps, that we might find ads for high blood pressure remedies just after we’ve Googled “reduce blood pressure.”
On November 9, it became clear that maybe much more is possible. The company behind Trump’s online campaign—the same company that had worked for Leave.EU in the very early stages of its “Brexit” campaign—was a Big Data company: Cambridge Analytica.
To understand the outcome of the election—and how political communication might work in the future—we need to begin with a strange incident at Cambridge University in 2014, at Kosinski’s Psychometrics Center.
Psychometrics, sometimes also called psychographics, focuses on measuring psychological traits, such as personality. In the 1980s, two teams of psychologists developed a model that sought to assess human beings based on five personality traits, known as the “Big Five.” These are: openness (how open are you to new experiences?), conscientiousness (how much of a perfectionist are you?), extroversion (how sociable are you?), agreeableness (how considerate and cooperative are you?) and neuroticism (are you easily upset?). Based on these dimensions—also known as OCEAN, an acronym for openness, conscientiousness, extroversion, agreeableness, neuroticism—we can make a relatively accurate assessment of the kind of person in front of us, including their needs and fears, and how they are likely to behave. The “Big Five” has become the standard technique of psychometrics. But for a long time, the problem with this approach was data collection, because it involved filling out a complicated, highly personal questionnaire. Then came the Internet. And Facebook. And Kosinski.
Michal Kosinski was a student in Warsaw when his life took a new direction in 2008. He was accepted by Cambridge University to do his PhD at the Psychometrics Centre, one of the oldest institutions of this kind worldwide. Kosinski joined fellow student David Stillwell (now a lecturer at Judge Business School at the University of Cambridge) about a year after Stillwell had launched a little Facebook application in the days when the platform had not yet become the behemoth it is today. Their MyPersonality app enabled users to fill out different psychometric questionnaires, including a handful of psychological questions from the Big Five personality questionnaire (“I panic easily,” “I contradict others”). Based on the evaluation, users received a “personality profile”—individual Big Five values—and could opt-in to share their Facebook profile data with the researchers.
Kosinski had expected a few dozen college friends to fill in the questionnaire, but before long, hundreds, thousands, then millions of people had revealed their innermost convictions. Suddenly, the two doctoral candidates owned the largest dataset combining psychometric scores with Facebook profiles ever to be collected.
The approach that Kosinski and his colleagues developed over the next few years was actually quite simple. First, they provided test subjects with a questionnaire in the form of an online quiz. From their responses, the psychologists calculated the personal Big Five values of respondents. Kosinski’s team then compared the results with all sorts of other online data from the subjects: what they “liked,” shared or posted on Facebook, or what gender, age, place of residence they specified, for example. This enabled the researchers to connect the dots and make correlations.
Remarkably reliable deductions could be drawn from simple online actions. For example, men who “liked” the cosmetics brand MAC were slightly more likely to be gay; one of the best indicators for heterosexuality was “liking” Wu-Tang Clan. Followers of Lady Gaga were most probably extroverts, while those who “liked” philosophy tended to be introverts. While each piece of such information is too weak to produce a reliable prediction, when tens, hundreds, or thousands of individual data points are combined, the resulting predictions become really accurate.
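As a rough illustration of how many individually weak signals can combine into a confident prediction, consider a minimal sketch. The pages, the weights, and the simple summed-log-odds model are all invented for illustration; this is not Kosinski's actual model.

```python
import math

# Hypothetical log-odds evidence per "like" for one trait (extroversion).
# Both the pages and the weights are fabricated for this sketch.
LIKE_WEIGHTS = {
    "lady_gaga": 0.3,    # weakly suggests extroversion
    "partying": 0.5,     # weakly suggests extroversion
    "philosophy": -0.4,  # weakly suggests introversion
    "chess": -0.2,       # weakly suggests introversion
}

def predict_extroversion(likes, prior_log_odds=0.0):
    """Combine weak per-like signals into one probability via summed log-odds."""
    score = prior_log_odds + sum(LIKE_WEIGHTS.get(page, 0.0) for page in likes)
    return 1.0 / (1.0 + math.exp(-score))  # logistic squash into [0, 1]

# A single like barely moves the estimate above chance...
p_one = predict_extroversion(["lady_gaga"])
# ...but a few dozen consistent likes push it toward certainty.
p_many = predict_extroversion(["lady_gaga", "partying"] * 10)
```

A single "like" nudges the probability only slightly away from 50 percent, while twenty consistent "likes" drive it close to 1.0, which is exactly the aggregation effect described above.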
Kosinski and his team tirelessly refined their models. In 2012, Kosinski proved that on the basis of an average of 68 Facebook “likes” by a user, it was possible to predict their skin color (with 95 percent accuracy), their sexual orientation (88 percent accuracy), and their affiliation to the Democratic or Republican party (85 percent). But it didn’t stop there. Intelligence, religious affiliation, as well as alcohol, cigarette and drug use, could all be determined. From the data it was even possible to deduce whether someone’s parents were divorced.
The strength of their modeling was illustrated by how well it could predict a subject’s answers. Kosinski continued to work on the models incessantly: before long, he was able to evaluate a person better than the average work colleague, merely on the basis of ten Facebook “likes.” Seventy “likes” were enough to outdo what a person’s friends knew, 150 what their parents knew, and 300 “likes” what their partner knew. More “likes” could even surpass what a person thought they knew about themselves. On the day that Kosinski published these findings, he received two phone calls. The threat of a lawsuit and a job offer. Both from Facebook.
Only weeks later, Facebook made “likes” private by default. Before that, the default setting was that anyone on the internet could see your “likes.” But this was no obstacle to data collectors: while Kosinski always asked for the consent of Facebook users, many apps and online quizzes today require access to private data as a precondition for taking personality tests. (Anybody who wants to evaluate themselves based on their Facebook “likes” can do so on Kosinski’s website, and then compare their results to those of a classic OCEAN questionnaire, like that of the Cambridge Psychometrics Center.)
But it was not just about “likes” or even Facebook: Kosinski and his team could now ascribe Big Five values based purely on how many profile pictures a person has on Facebook, or how many contacts they have (a good indicator of extraversion). But we also reveal something about ourselves even when we’re not online. For example, the motion sensor on our phone reveals how quickly we move and how far we travel (this correlates with emotional instability). Our smartphone, Kosinski concluded, is a vast psychological questionnaire that we are constantly filling out, both consciously and unconsciously.
Above all, however—and this is key—it also works in reverse: not only can psychological profiles be created from your data, but your data can also be used the other way round to search for specific profiles: all anxious fathers, all angry introverts, for example—or maybe even all undecided Democrats? Essentially, what Kosinski had invented was sort of a people search engine. He started to recognize the potential—but also the inherent danger—of his work.
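The “people search engine” idea, running the prediction in reverse, can be sketched as a simple filter over trait scores. The names, scores, and thresholds below are fabricated; only the filtering concept reflects the text.

```python
# A toy "people search engine": given per-person OCEAN scores (0-100),
# return everyone whose profile matches a trait query.
profiles = {
    "alice": {"O": 72, "C": 45, "E": 20, "A": 80, "N": 85},
    "bob":   {"O": 30, "C": 90, "E": 75, "A": 40, "N": 10},
    "carol": {"O": 55, "C": 60, "E": 15, "A": 35, "N": 78},
}

def search(people, **bounds):
    """Return names satisfying every bound, e.g. N_min=70 means N >= 70."""
    hits = []
    for name, traits in people.items():
        ok = True
        for key, limit in bounds.items():
            trait, kind = key.split("_")  # "E_max" -> ("E", "max")
            if kind == "min" and traits[trait] < limit:
                ok = False
            elif kind == "max" and traits[trait] > limit:
                ok = False
        if ok:
            hits.append(name)
    return hits

# "All anxious introverts": high neuroticism, low extroversion.
anxious_introverts = search(profiles, N_min=70, E_max=30)
```

At real scale the same query runs against millions of inferred profiles rather than three hand-written ones, but the operation is the same: a trait query in, a list of matching people out.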
To him, the internet was a gift from heaven. What he really wanted was to give something back, to share. Data can be copied, so why shouldn’t everyone benefit from it? It was the spirit of the Millennials, an entire new generation, the beginning of a new era that transcended the limitations of the physical world. But what would happen, wondered Kosinski, if someone abused his people search engine to manipulate people? He began to add warnings to most of his scientific work. His approach, he warned, “could pose a threat to an individual’s well-being, freedom, or even life.” But no one seemed to grasp what he meant.
Around this time, in early 2014, Kosinski was approached by a young assistant professor in the psychology department called Aleksandr Kogan. He said he was inquiring on behalf of a company that was interested in Kosinski’s method, and wanted to access the MyPersonality database. Kogan wasn’t at liberty to reveal for what purpose; he was bound to secrecy.
At first, Kosinski and his team considered this offer, as it would mean a great deal of money for the institute, but then he hesitated. Finally, Kosinski remembers, Kogan revealed the name of the company: SCL, or Strategic Communication Laboratories. Kosinski Googled the company: “[We are] the premier election management agency,” says the company’s website. SCL provides marketing based on psychological modeling. One of its core focuses: Influencing elections. Influencing elections? Perturbed, Kosinski clicked through the pages. What kind of company was this? And what were these people planning?
What Kosinski did not know at the time: SCL is the parent of a group of companies. Who exactly owns SCL and its diverse branches is unclear, thanks to a convoluted corporate structure of the type seen in UK Companies House filings, the Panama Papers, and the Delaware company registry. Some of the SCL offshoots have been involved in elections from Ukraine to Nigeria, or helped the Nepalese monarch against the Maoists, while others have developed methods to influence Eastern European and Afghan citizens for NATO. And, in 2013, SCL spun off a new company to participate in US elections: Cambridge Analytica.
Kosinski knew nothing about all this, but he had a bad feeling. “The whole thing started to stink,” he recalls. On further investigation, he discovered that Aleksandr Kogan had secretly registered a company doing business with SCL. According to a December 2015 report in the Guardian and to internal company documents given to Das Magazin, it emerged that SCL learned about Kosinski’s method from Kogan.
Kosinski came to suspect that Kogan’s company might have reproduced the Facebook “Likes”-based Big Five measurement tool in order to sell it to this election-influencing firm. He immediately broke off contact with Kogan and informed the director of the institute, sparking a complicated conflict within the university. The institute was worried about its reputation. Aleksandr Kogan then moved to Singapore, married, and changed his name to Dr. Spectre. Michal Kosinski finished his PhD, got a job offer from Stanford and moved to the US.
All was quiet for about a year. Then, in November 2015, the more radical of the two Brexit campaigns, “Leave.EU,” supported by Nigel Farage, announced that it had commissioned a Big Data company to support its online campaign: Cambridge Analytica. The company’s core strength: innovative political marketing—microtargeting—by measuring people’s personality from their digital footprints, based on the OCEAN model.
Now Kosinski received emails asking what he had to do with it—the words Cambridge, personality, and analytics immediately made many people think of Kosinski. It was the first time he had heard of the company, which borrowed its name, it said, from its first employees, researchers from the university. Horrified, he looked at the website. Was his methodology being used on a grand scale for political purposes?
After the Brexit result, friends and acquaintances wrote to him: Just look at what you’ve done. Everywhere he went, Kosinski had to explain that he had nothing to do with this company. (It remains unclear how deeply Cambridge Analytica was involved in the Brexit campaign. Cambridge Analytica would not discuss such questions.)
For a few months, things are relatively quiet. Then, on September 19, 2016, just over a month before the US elections, the guitar riffs of Creedence Clearwater Revival’s “Bad Moon Rising” fill the dark-blue hall of New York’s Grand Hyatt hotel. The Concordia Summit is a kind of World Economic Forum in miniature. Decision-makers from all over the world have been invited, among them Swiss President Johann Schneider-Ammann. “Please welcome to the stage Alexander Nix, chief executive officer of Cambridge Analytica,” a smooth female voice announces. A slim man in a dark suit walks onto the stage. A hush falls. Many in attendance know that this is Trump’s new digital strategy man. (A video of the presentation was posted on YouTube.)
A few weeks earlier, Trump had tweeted, somewhat cryptically, “Soon you’ll be calling me Mr. Brexit.” Political observers had indeed noticed some striking similarities between Trump’s agenda and that of the right-wing Brexit movement. But few had noticed the connection with Trump’s recent hiring of a marketing company named Cambridge Analytica.
Up to this point, Trump’s digital campaign had consisted of more or less one person: Brad Parscale, a marketing entrepreneur and failed start-up founder who created a rudimentary website for Trump for $1,500. The 70-year-old Trump is not digitally savvy—there isn’t even a computer on his office desk. Trump doesn’t do emails, his personal assistant once revealed. She herself talked him into having a smartphone, from which he now tweets incessantly.
Hillary Clinton, on the other hand, relied heavily on the legacy of the first “social-media president,” Barack Obama. She had the address lists of the Democratic Party, worked with cutting-edge big data analysts from BlueLabs and received support from Google and DreamWorks. When it was announced in June 2016 that Trump had hired Cambridge Analytica, the establishment in Washington just turned up their noses. Foreign dudes in tailor-made suits who don’t understand the country and its people? Seriously?
“It is my privilege to speak to you today about the power of Big Data and psychographics in the electoral process.” The logo of Cambridge Analytica—a brain composed of network nodes, like a map—appears behind Alexander Nix. “Only 18 months ago, Senator Cruz was one of the less popular candidates,” explains the blonde man in a cut-glass British accent, which puts Americans on edge the same way that a standard German accent can unsettle Swiss people. “Less than 40 percent of the population had heard of him,” another slide says. Cambridge Analytica had become involved in the US election campaign almost two years earlier, initially as a consultant for Republicans Ben Carson and Ted Cruz. Cruz—and later Trump—was funded primarily by the secretive US software billionaire Robert Mercer who, along with his daughter Rebekah, is reported to be the largest investor in Cambridge Analytica.
“So how did he do this?” Up to now, explains Nix, election campaigns have been organized based on demographic concepts. “A really ridiculous idea. The idea that all women should receive the same message because of their gender—or all African Americans because of their race.” What Nix meant is that while other campaigners so far have relied on demographics, Cambridge Analytica was using psychometrics.
Though this might be true, Cambridge Analytica’s role within Cruz’s campaign is disputed. In December 2015 the Cruz team credited their rising success to their psychological use of data and analytics. In Advertising Age, a political client said the embedded Cambridge staff was “like an extra wheel,” but found their core product, Cambridge’s voter data modeling, still “excellent.” The campaign would pay the company at least $5.8 million to help identify voters in the Iowa caucuses, which Cruz won, before he dropped out of the race in May.
Nix clicks to the next slide: five different faces, each face corresponding to a personality profile. It is the Big Five or OCEAN Model. “At Cambridge,” he said, “we were able to form a model to predict the personality of every single adult in the United States of America.” The hall is captivated. According to Nix, the success of Cambridge Analytica’s marketing is based on a combination of three elements: behavioral science using the OCEAN Model, Big Data analysis, and ad targeting. Ad targeting is personalized advertising, aligned as accurately as possible to the personality of an individual consumer.
Nix candidly explains how his company does this. First, Cambridge Analytica buys personal data from a range of different sources, like land registries, automotive data, shopping data, bonus cards, club memberships, what magazines you read, what churches you attend. Nix displays the logos of globally active data brokers like Acxiom and Experian—in the US, almost all personal data is for sale. For example, if you want to know where Jewish women live, you can simply buy this information, phone numbers included.
Now Cambridge Analytica aggregates this data with the electoral rolls of the Republican party and online data and calculates a Big Five personality profile. Digital footprints suddenly become real people with fears, needs, interests, and residential addresses.
The methodology looks quite similar to the one that Michal Kosinski once developed. Cambridge Analytica also uses, Nix told us, “surveys on social media” and Facebook data. And the company does exactly what Kosinski warned of: “We have profiled the personality of every adult in the United States of America—220 million people,” Nix boasts.
He opens the screenshot. “This is a data dashboard that we prepared for the Cruz campaign.” A digital control center appears. On the left are diagrams; on the right, a map of Iowa, where Cruz won a surprisingly large number of votes in the primary. And on the map, there are hundreds of thousands of small red and blue dots. Nix narrows down the criteria: “Republicans”—the blue dots disappear; “not yet convinced”—more dots disappear; “male”, and so on. Finally, only one name remains, including age, address, interests, personality and political inclination. How does Cambridge Analytica now target this person with an appropriate political message?
Nix shows how psychographically categorized voters can be differently addressed, based on the example of gun rights, the 2nd Amendment: “For a highly neurotic and conscientious audience the threat of a burglary—and the insurance policy of a gun.” An image on the left shows the hand of an intruder smashing a window. The right side shows a man and a child standing in a field at sunset, both holding guns, clearly shooting ducks: “Conversely, for a closed and agreeable audience. People who care about tradition, and habits, and family.”
How to keep Clinton voters away from the ballot box
Trump’s striking inconsistencies, his much-criticized fickleness, and the resulting array of contradictory messages, suddenly turned out to be his great asset: a different message for every voter. The notion that Trump acted like a perfectly opportunistic algorithm following audience reactions is something the mathematician Cathy O’Neil observed in August 2016.
“Pretty much every message that Trump put out was data-driven,” Alexander Nix remembers. On the day of the third presidential debate between Trump and Clinton, Trump’s team tested 175,000 different ad variations for his arguments, in order to find the right versions above all via Facebook. The messages differed for the most part only in microscopic details, in order to target the recipients in the optimal psychological way: different headings, colors, captions, with a photo or video. This fine-tuning reaches all the way down to the smallest groups, Nix explained in an interview with us. “We can address villages or apartment blocks in a targeted way. Even individuals.”
In the Miami district of Little Haiti, for instance, Trump’s campaign provided inhabitants with news about the failure of the Clinton Foundation following the earthquake in Haiti, in order to keep them from voting for Hillary Clinton. This was one of the goals: to keep potential Clinton voters (which include wavering left-wingers, African-Americans, and young women) away from the ballot box, to “suppress” their vote, as one senior campaign official told Bloomberg in the weeks before the election. These “dark posts”—sponsored news-feed-style ads in Facebook timelines that can only be seen by users with specific profiles—included videos aimed at African-Americans in which Hillary Clinton refers to black men as predators, for example.
Nix finishes his lecture at the Concordia Summit by stating that traditional blanket advertising is dead. “My children will certainly never, ever understand this concept of mass communication.” And before leaving the stage, he announces that, with Cruz out of the race, the company is helping one of the remaining presidential candidates.
Just how precisely the American population was being targeted by Trump’s digital troops at that moment was not visible, because they attacked less on mainstream TV and more with personalized messages on social media or digital TV. And while the Clinton team thought it was in the lead, based on demographic projections, Bloomberg journalist Sasha Issenberg was surprised to note on a visit to San Antonio—where Trump’s digital campaign was based—that a “second headquarters” was being created. The embedded Cambridge Analytica team, apparently only a dozen people, received $100,000 from Trump in July, $250,000 in August, and $5 million in September. According to Nix, the company earned over $15 million overall. (The company is incorporated in the US, where laws regarding the release of personal data are more lax than in European Union countries. Whereas European privacy laws require a person to “opt in” to a release of data, those in the US permit data to be released unless a user “opts out.”)
The measures were radical: From July 2016, Trump’s canvassers were provided with an app with which they could identify the political views and personality types of the inhabitants of a house. It was the same app provider used by Brexit campaigners. Trump’s people only rang at the doors of houses that the app rated as receptive to his messages. The canvassers came prepared with guidelines for conversations tailored to the personality type of the resident. In turn, the canvassers fed the reactions into the app, and the new data flowed back to the dashboards of the Trump campaign.
Again, this is nothing new. The Democrats did similar things, but there is no evidence that they relied on psychometric profiling. Cambridge Analytica, however, divided the US population into 32 personality types, and focused on just 17 states. And just as Kosinski had established that men who like MAC cosmetics are slightly more likely to be gay, the company discovered that a preference for cars made in the US was a strong indicator of a potential Trump voter. Among other things, these findings now showed Trump which messages worked best and where. The decision to focus on Michigan and Wisconsin in the final weeks of the campaign was made on the basis of data analysis. The candidate became the instrument for implementing a big data model.
But to what extent did psychometric methods influence the outcome of the election? When asked, Cambridge Analytica was unwilling to provide any proof of the effectiveness of its campaign. And it is quite possible that the question is impossible to answer.
And yet there are clues: there is the surprising rise of Ted Cruz during the primaries, the increased number of voters in rural areas, and the decline in African-American early votes. The fact that Trump spent so little money may also be explained by the effectiveness of personality-based advertising, as may the fact that he invested far more in digital than in TV campaigning compared to Hillary Clinton. Facebook proved to be the ultimate weapon and the best election campaigner, as Nix explained, and as comments by several core Trump campaigners demonstrate.
Many voices have claimed that the statisticians lost the election because their predictions were so off the mark. But what if statisticians in fact helped win the election—but only those who were using the new method? It is an irony of history that Trump, who often grumbled about scientific research, used a highly scientific approach in his campaign.
Another big winner is Cambridge Analytica. Its board member Steve Bannon, former executive chair of the right-wing online newspaper Breitbart News, has been appointed as Donald Trump’s senior counselor and chief strategist. Whilst Cambridge Analytica is not willing to comment on alleged ongoing talks with UK Prime Minister Theresa May, Alexander Nix claims that he is building up his client base worldwide, and that he has received inquiries from Switzerland, Germany, and Australia. His company is currently touring European conferences showcasing their success in the United States. This year three core countries of the EU are facing elections with resurgent populist parties: France, Holland and Germany. The electoral successes come at an opportune time, as the company is readying for a push into commercial advertising.
Kosinski has observed all of this from his office at Stanford. Following the US election, the university is in turmoil. Kosinski is responding to developments with the sharpest weapon available to a researcher: a scientific analysis. Together with his research colleague Sandra Matz, he has conducted a series of tests, which will soon be published. The initial results are alarming: The study shows the effectiveness of personality targeting by showing that marketers can attract up to 63 percent more clicks and up to 1,400 more conversions in real-life advertising campaigns on Facebook when matching products and marketing messages to consumers’ personality characteristics. They further demonstrate the scalability of personality targeting by showing that the majority of Facebook Pages promoting products or brands are affected by personality and that large numbers of consumers can be accurately targeted based on a single Facebook Page.
In a statement after the German publication of this article, a Cambridge Analytica spokesperson said, “Cambridge Analytica does not use data from Facebook. It has had no dealings with Dr. Michal Kosinski. It does not subcontract research. It does not use the same methodology. Psychographics was hardly used at all. Cambridge Analytica did not engage in efforts to discourage any Americans from casting their vote in the presidential election. Its efforts were solely directed towards increasing the number of voters in the election.”
The world has been turned upside down. Great Britain is leaving the EU, Donald Trump is president of the United States of America. And in Stanford, Kosinski, who wanted to warn against the danger of using psychological targeting in a political setting, is once again receiving accusatory emails. “No,” says Kosinski, quietly and shaking his head. “This is not my fault. I did not build the bomb. I only showed that it exists.”
Hannes Grassegger and Mikael Krogerus are investigative journalists at the Swiss-based magazine Das Magazin. The original text appeared in its late December edition under the title “Ich habe nur gezeigt, dass es die Bombe gibt” (“I only showed that the bomb exists”). This English translation is based on the subsequent January version, first published by Motherboard under the title “The Data That Turned the World Upside Down.”
An Underdeveloped Discipline: Open-Source Intelligence and How It Can Better Assist the U.S. Intelligence Community
Open-Source Intelligence (OSINT) is defined by noted intelligence specialists Mark Lowenthal and Robert M. Clark as “information that is publicly available to anyone through legal means, including request, observation, or purchase, that is subsequently acquired, vetted, and analyzed in order to fulfill an intelligence requirement”. The U.S. Naval War College further defines OSINT as coming from “print or electronic form including radio, television, newspapers, journals, the internet, and videos, graphics, and drawings”. In essence, OSINT is the collection of information from a variety of public sources, including social media profiles and accounts, television broadcasts, and internet searches.
The U.S. has utilized OSINT since the 1940s, when it created the Foreign Broadcast Information Service (FBIS), whose sole goal (until the 1990s) was “primarily monitoring and translating foreign-press sources”; the service contributed significantly during the dissolution of the Soviet Union. It was also during this period that the FBIS transformed itself from a purely monitoring-and-translation agency into one that could adequately exploit the advances made by “personal computing, large-capacity digital storage, capable search engines, and broadband communication networks”. In 2005, the FBIS was placed under the Office of the Director of National Intelligence (ODNI) and renamed the Open Source Center, with control given to the CIA.
OSINT complements the other intelligence disciplines very well. Because OSINT draws on public data (as opposed to information gleaned from interrogations, interviews with defectors or captured enemies, or from clandestine wiretaps and electronic intrusions), it allows policymakers and intelligence analysts to see the wider picture around the information collected. In his own book, Lowenthal mentions how policymakers (including an Assistant Secretary of Defense and one of the former Directors of National Intelligence (DNI)) enjoyed looking at OSINT first and using it as a “starting point… [to fill] the outer edges of the jigsaw puzzle”.
Given the 21st century's increased reliance upon technology, there are also times when information can only be gleaned through open-source methods. Because “Terrorist movements rely essentially on the use of open sources… to recruit and provide virtual training and conduct their operations using encryption techniques… OSINT can be valuable [in] providing fast coordination among officials at all levels without clearances”. With such information, intelligence agencies could outright avert an imminent attack or, at a minimum, prepare a defense or place forces and units on high alert.
In a King's College London research paper discussing OSINT's potential for the 21st century, the author notes that “OSINT sharing among intelligence services, non-government organizations and international organizations could shape timely and comprehensive responses [to international crises or regime changes in rogue states like Darfur or Burma],” as well as providing further information on a country's new government or personnel in power. This was exemplified during the rise of Kim Jong-Un in North Korea, the 2011 Arab Spring, and the 2010 earthquake that rocked Haiti. However, this does not mean that OSINT is superior to other disciplines such as SIGINT and HUMINT, as it is subject to limitations as well. According to the Federation of American Scientists, “Open source intelligence does have limitations. Often articles in military or scientific journals represent a theoretical or desired capability rather than an actual capability. Censorship may also limit the publication of key data needed to arrive at a full understanding of an adversary’s actions, or the press may be used as part of a conscious deception effort”.
There is also a limit to the effectiveness of OSINT within the U.S. Intelligence Community (IC), not because it is technically limited, but because of the IC's reluctance to see OSINT as a full-fledged discipline. Robert Ashley and Neil Wiley, the former Director of the Defense Intelligence Agency (DIA) and a former Principal Executive within the ODNI respectively, covered this in a July article for DefenseOne, stating “…the production of OSINT is not regarded as a unique intelligence discipline but as research incident to all-source analysis or as a media production service… OSINT, on the other hand, remains a distributed activity that functions more like a collection of cottage industries. While OSINT has pockets of excellence, intelligence community OSINT production is largely initiative based, minimally integrated, and has little in the way of common guidance, standards, and tradecraft… The intelligence community must make OSINT a true intelligence discipline on par with the traditional functional disciplines, replete with leadership and authority that enables the OSINT enterprise to govern itself and establish a brand that instills faith and trust in open source information”. This apprehensiveness toward OSINT capabilities within the IC has been well documented by other journalists.
Some contributors, including one writing for The Hill, have commented that “the use of artificial intelligence and rapid data analytics can mitigate these risks by tipping expert analysts on changes in key information, enabling the rapid identification of apparent “outliers” and pattern anomalies. Such human-machine teaming exploits the strengths of both and offers a path to understanding and even protocols for how trusted open-source intelligence can be created by employing traditional tradecraft of verifying and validating sourcing prior to making the intelligence insights available for broad consumption”. Many knowledgeable and experienced persons within the Intelligence Community, whether from the uniformed intelligence services or the civilian foreign intelligence agencies, recognize the need for better OSINT capabilities as a whole and have suggested ways in which potential security risks or flaws can be avoided in making this discipline an even more effective piece of the intelligence-gathering framework.
OSINT is incredibly beneficial for gathering information that cannot always be obtained through more commonly thought-of espionage methods (e.g., HUMINT, SIGINT). The discipline brings previously unknown players and new or developing events to light, allows policymakers to be briefed more competently on a topic, and provides analysts and operators a preliminary understanding of the region, the culture, the politics, and the current state of a developing or changing country. The greatest hurdle, however, lies in changing the culture and the way the discipline is currently seen by the U.S. Intelligence Community. This remains the biggest struggle in effectively coordinating and utilizing the discipline within various national security organizations.
Online Radicalization in India
Radicalization is a gradual process of developing extremist beliefs, emotions, and behaviours at the individual, group or mass-public level. Besides varied groups, it enjoys covert and even overt patronage from some states. To elicit change in the target group's behavior, beliefs, ideology and willingness, even the employment of violent means is justified by its proponents. Despite recording a decline in terror casualties, the 2019 edition of the Global Terrorism Index notes an increase in the number of terrorism-affected countries. With the internet assuming a pivotal role in simplifying and revolutionizing communication, the change in people's lives is evident: of the 84% of the EU population that used the internet daily, 81% accessed it from home (Eurostat, 2012; RAND paper, p. xi). This signifies important changes in society, and since extremist elements are an integral part of it, the internet's role as a tool of radicalization cannot be gainsaid. With physical and geographical barriers disrupted, radicalized groups are using advances in digital technology to propagate their ideologies, solicit funding, collect information, plan and coordinate terror attacks, establish inter- and intra-group communication networks, recruit and train members, and run media propaganda to attain global attention.
In recent times, India has witnessed exponential growth in radicalization-linked incidents, which apparently belies the official figure of approximately 80-100 cases. The radicalization threat to India comes not only from homegrown groups but also from cross-border groups in Pakistan and Afghanistan as well as global groups like IS. Significantly, Indian radicalized groups are exploiting domestic grievances, and their success can to an extent be attributed to support from the Pakistani state and jihadist groups in Pakistan and Bangladesh. The Gulf employment boom has also facilitated the radicalization, including online radicalization, of Indian Muslims. A close look at the modus operandi of recent attacks reveals the involvement of local or ‘homegrown’ terrorists. In 2016, AQIS formed ‘Ansar Ghazwat-ul-Hind’ in Kashmir, with a media wing, ‘al-Hurr’.
IS announced its foray into Kashmir in 2016 as part of its Khorasan branch. In December 2017, IS used the hashtag ‘Wilayat Kashmir’ on its Telegram channel, where Kashmiri militants pledged allegiance to IS. IS’ online English magazine ‘Dabiq’ (Jan. 2016) claimed that fighters were being trained in Bangladesh and Pakistan for attacks into India from the western and eastern borders. Though cases of ISIS influence in India remain isolated, the trend is on the rise. Presently, ISIS and its offshoots are engaged, through online recruitment, in spreading their base across 12 Indian states. Apart from southern states like Telangana, Kerala, Andhra Pradesh, Karnataka, and Tamil Nadu — where the Iraq- and Syria-based terrorist outfit penetrated years ago — investigating agencies have found its links in states like Maharashtra, West Bengal, Rajasthan, Bihar, Uttar Pradesh, Madhya Pradesh, and Jammu and Kashmir as well. The Sunni jihadist group is now “most active” in these states.
Underestimating the Threat to India
Significantly, downplaying the radicalization issue, a section of the intelligentsia, citing the small number of Indian Muslims who joined al-Qaeda and the Taliban in Afghanistan or Islamic State (IS) in Iraq, Syria and the Middle East, argues that the Indian Muslim community does not support radicalism-linked violence, unlike communities in regional Muslim countries including Pakistan, Afghanistan, Bangladesh and the Maldives. They underscore the negligible number of Indian Muslims outside J&K who support separatist movements. Additionally, al-Qaeda and IS, which follow the Salafi-Wahhabi ideological movement, vehemently oppose the Hanafi school of Sunni Islam followed by most Indian Muslims. Moreover, Indian Muslims tend to follow a moderate version of the faith even when they are adherents of the Sunni Ahl-e-Hadith (the broader ideology from which the Salafi-Wahhabi movement emanates). This doctrinal difference has contributed to the failure of Wahhabi groups’ online propaganda.
Radicalisation Strategies/Methods: Indian vs Global Players
India is already confronting online jihadist radicalization by global organisations, including al-Qaeda in the Indian Subcontinent (AQIS), formed in September 2014, and Islamic State (IS). However, several indigenous and regional groups such as Indian Mujahideen (IM), JeM, LeT and the Taliban, along with online vernacular publications including Pakistan's Urdu newspaper ‘Al-Qalam’, also play a role in online radicalisation.
Indian jihadist groups use a variety of social media apps, each best suited to their goals. Separatists and extremists in Kashmir simply create WhatsApp groups for coordination and communication, circulating the date, time and place for mass protests or stone pelting. Pakistan-based terror groups, rather than relying on online religious instruction, consider it mandatory that a recruit follow a revered religious cleric; they select people in person to verify their backgrounds rather than through online correspondence, and only after induction do they communicate with a recruit online. IS, by contrast, in the backdrop of its recent defeats and unlike the Kashmiri separatist groups and Pakistan-based jihadist mercenaries, runs its global movement almost entirely online through magazines and pamphlets. Al-Qaeda's YouTube channels ‘Ansar AQIS’ and ‘Al Firdaws’, which once had over 25,000 subscribers, are now banned. Its online magazine is ‘Nawai Afghan’, and its statements appear in Urdu, English, Arabic, Bangla and Tamil. Its blocked Twitter accounts, ‘Ansarul Islam’ and ‘Abna_ul_Islam_media’, had a following of over 1,300, while its Telegram accounts are believed to have over 500 members.
Adoption of online platforms and technology
Initially, ‘Jaish-e-Mohammad’ (JeM) distributed audio cassettes of Masood Azhar's speeches across India, but it moved onto Internet platforms during 2003–04 and started circulating downloadable material through anonymous links and emails. Subsequently, it started its weekly e-newspaper, Al-Qalam, followed by a chat group on Yahoo. Importantly, following enhanced international pressure on the Pakistani government after 26/11 to act against terrorist groups, JeM gradually shifted from mainstream online platforms to social media sites, blogs and forums.
Indian Mujahideen's splinter group ‘Ansar-ul-Tawhid’, the first Indian terror group officially affiliated with ISIS, tried to maintain its presence on Skype, WeChat and JustPaste. IS and its affiliates emerged as the most tech-savvy jihadist groups, taking several measures to generate new accounts after repeated suspensions by platforms and governments. One such measure was an account called ‘Baqiya Shoutout’, which helped suspended members re-establish their network of followers through ‘reverse shout-outs’ rather than starting from scratch with a new account.
Pakistan-backed terrorist groups in India are increasingly becoming technology-savvy. For instance, before carrying out the 2008 Mumbai attacks, LeT used Google Earth to study the targeted locations.
IS members have followed strict security measures, such as disabling Global Positioning System (GPS) location services and using virtual private networks (VPNs) to maintain anonymity. Earlier, they would download Hola VPN or a similar programme to a mobile device or web browser to select an Internet Protocol (IP) address in a country outside the US and bypass email or phone verification.
Rise of radicalization in southern India
The southern states of India have witnessed a rise in radicalization activities over the past one to two years. A substantial share of the Indian diaspora in the Gulf countries comes from Kerala and Tamil Nadu. Several Indian Muslims in the Gulf have fallen prey to radicalization through exposure to ultra-conservative forms of Islam, or their remittances have been misused to spread radical thought. One Shafi Armar alias Yusuf-al-Hindi, from Karnataka, emerged as the main online IS recruiter for India. The trend is evident in the number of raids and arrests made in the region, particularly after the Easter bomb attacks (April 21, 2019) in Sri Lanka, whose perpetrators were suspected to have been indoctrinated, radicalised and trained in Tamil Nadu. Further investigation revealed that the mastermind of the attacks, Zahran Hashim, had travelled to India and maintained virtual links with radicalised youth in South India. Importantly, IS, while claiming responsibility for the attacks, issued statements not only in English and Arabic but also in South Indian languages, viz. Malayalam and Tamil, proving the existence of individuals fluent in these languages within IS-linked groups in the region. Similarly, AQIS' affiliate in South India, the ‘Base Movement’, has issued several threatening letters to media publications for insulting Islam.
IS is trying to recruit people from rural India by circulating online material in vernacular languages. It distributes material in numerous languages, including Malayalam and Tamil, which al-Qaeda previously ignored in favour of Urdu. In their propaganda, IS-linked Keralite followers cite radical pro-Hindutva organisations such as the Rashtriya Swayamsevak Sangh (RSS) and other right-wing Hindu organisations to motivate youth to join IS. Similarly, anti-Muslim incidents such as the demolition of the Babri Masjid in 1992 are still used to fuel their propaganda. IS sympathisers also invoke opposition to Hindu deities to gather support.
Radicalization: Similarities/Distinctions in North and South
Despite some similarities, the radicalisation process in J&K is somewhat different from that in Kerala, Karnataka, Tamil Nadu, Andhra Pradesh, Maharashtra, Telangana and Gujarat. Both regions have witnessed a planned radicalization process conducted through the Internet and social media to propagate extremist ideologies and subvert vulnerable youth. Both have faced the hard-line Salafi/Wahhabi ideology propagated by extremist Islamic clerics and madrasas engaged in manipulating the religion of Islam. In this context, it can aptly be claimed that terror activities in India draw on cooperation between elements from both regions, despite their distinct means and objectives. Elements from both regions to an extent sympathise with the cause of bringing India under Sharia law; hence the possibility of cooperation among such elements, particularly in facilitating logistics, ammunition and other requisite equipment, cannot be ruled out.
It is pertinent to note that while radicalisation in Jammu and Kashmir is directly linked to the proxy war sponsored by the Pakistani state, the growth of radicalisation in West and South India owes its roots to the spread of IS ideology, the promotion of Sharia rule and the goal of establishing a Caliphate. Precisely for this reason, while radicalised local Kashmiris unite to join Pakistan-backed terror groups to fight for ‘Azadi’ or other fabricated local causes, radicalised locals in the south remain isolated cases.
Impact of Radicalisation
The impact of global jihad on radicalization is quite visible in West and South India. A majority of the radicalised people arrested there were in fact proceeding to join IS in Syria and Iraq. They included a group of 22 people from a Kerala family who travelled to Afghanistan via Iran in June 2016. Their obvious motivation was to migrate from Dar-ul-Harb (house of war) to Dar-ul-Islam (house of peace/Islam).
Comparing the ground impact of radicalization, in terms of the number of local militants in J&K versus IS sympathisers in West and South India, makes clear that radicalisation has spread more in J&K, owing to Pakistan-sponsored logistical and financial support. Significantly, despite hosting one of the world's largest Muslim populations, India has produced very few sympathisers of terror outfits, particularly in West and South India, compared to Western countries. The main reasons attributed to this include religious and cultural pluralism; a tradition of moderate Islamic belief systems; progressive educational and economic standards; and the equal socio-economic and political safeguards granted to Indian Muslims by the Indian Constitution.
Apart from varied challenges, including Pakistan-sponsored anti-India activities and regional, local and political challenges, the media wings of global jihadi outfits continue to pose further problems for Indian security agencies. While IS, through its media wing ‘Al Isabah’, has been circulating Abu Bakr al-Baghdadi's speeches and videos on social media sites after translating them into Urdu, Hindi, and Tamil for Indian youth (Rajkumar, 2015), AQIS too has been using its media wing for the same purpose through its offshoots in India. Some of these challenges include:
Islam/Cleric Factor – Clerics continue to play a crucial role in influencing the minds of Muslim youth by exploiting the religion of Islam. A majority of the 127 IS sympathizers recently arrested across India revealed that they followed the speeches of the controversial Indian preacher Zakir Naik of the Islamic Research Foundation (IRF). Naik has taken refuge in Malaysia because of warrants issued against him by the National Investigation Agency (NIA) for alleged money laundering and inciting extremism through hate speech. A perpetrator of the July 2016 Dhaka attack, which killed several people, confessed that he was influenced by Naik's messages. Earlier, the IRF had organised ‘peace conferences’ in Mumbai between 2007 and 2011 at which Naik attempted to convert people and incite terrorist acts. Thus, clerics and preachers who subvert Muslim minds towards extremism remain a challenge for India.
Propaganda Machinery – Uploading photographs of young militants flaunting Kalashnikov rifles became a popular means for youth to declare their intent against government forces. The ‘us versus them’ narrative is clearly communicated, creating a groundswell of support for terrorism. In the second edition (March 2020) of its propaganda magazine ‘Sawt al-Hind’ (Voice of Hind/India), IS, citing an old propaganda message from Abu Hamza al-Kashmiri alias Abdul Rehman, a Kashmiri IS terrorist killed in 2018, called upon Taliban apostates and fighters to defect to IS. In the first edition (Feb. 2020), the magazine eulogized Huzaifa al-Bakistani (killed in 2019) and asked Indian Muslims to rally to IS in the name of Islam in the aftermath of the 2020 Delhi riots. Meanwhile, a Muslim couple arrested by Delhi Police for inciting anti-CAA (Citizenship Amendment Act) protests were found to be very active on social media, calling on Indian Muslims to unite against the Indian government over the legislation. During the 2017 Kashmir unrest, the National Investigation Agency (NIA) identified 79 WhatsApp groups (with administrators based in Pakistan) comprising 6,386 phone numbers, used to crowdsource boys for stone pelting. Of these, around 1,000 numbers were active in Pakistan and the Gulf nations, and the remaining 5,386 were active in the Kashmir Valley.
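As an illustration of the bulk triage such an investigation implies, the sketch below groups phone numbers by country-code prefix, roughly how one might separate numbers registered in Pakistan (+92) from those in India (+91). This is a hedged, hypothetical example: every number is fabricated, and real investigative tooling would rely on carrier metadata, not prefixes alone.

```python
# Illustrative sketch (not NIA tooling): group E.164-style phone numbers
# by their longest matching country-code prefix. All numbers are fake.

from collections import defaultdict

COUNTRY_PREFIXES = {"+92": "Pakistan", "+91": "India", "+971": "UAE"}


def classify_numbers(numbers):
    """Group numbers by the longest country prefix they start with."""
    groups = defaultdict(list)
    for n in numbers:
        match = max(
            (p for p in COUNTRY_PREFIXES if n.startswith(p)),
            key=len,
            default=None,  # no known prefix -> "unknown" bucket
        )
        groups[COUNTRY_PREFIXES.get(match, "unknown")].append(n)
    return dict(groups)


sample = ["+923001234567", "+911234567890", "+919876543210", "+97150123456"]
by_country = classify_numbers(sample)
print({k: len(v) for k, v in by_country.items()})  # counts per country
```

Matching the longest prefix matters because prefixes overlap in length (e.g. the hypothetical "+971" must win over "+9" lookalikes); a production system would use a proper numbering-plan library instead.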
Deepfakes/Fake News – Another challenge for India is the spread of misinformation and disinformation through deepfakes originating from Pakistan. Deepfakes have been used to a considerable extent to manipulate the speeches of local political leaders and spread hatred among youth and society.
India’s Counter Measures
To prevent youth from straying towards extremism, India's Ministry of Home Affairs has established a Counter-Terrorism and Counter-Radicalisation Division (CT-CR) to help states, security agencies and communities.
Various states, including Kerala, Maharashtra and Telangana, have set up their own de-radicalisation programmes. While in Maharashtra family and community play an important role, in Kerala clerics work to cleanse the poisoned minds of youth with a new narrative. The Indian armed forces employ a holistic programme of community outreach encompassing healthcare, clergy and financial stability. The Kerala state police's ‘Operation Pigeon’ used social media monitoring to thwart the radicalization of 350 youths targeted by the propaganda of organizations such as Islamic State, Indian Mujahideen (IM) and Lashkar-e-Taiba (LeT). In Telangana, local officers like Rema Rajeshwari have developed outreach programs to fight the menace of fake news in around 400 villages of the state.
In Kashmir, the government resorts to internet curfews to control e-jihad. While the state-owned BSNL network, used by the administration and security forces, remains operational, 3G and 4G networks and social media apps are suspended during these curfews.
India certainly needs a strong national counter-radicalisation policy, one that factors in a wider range of drivers than jobs, poverty or education, because radicalization has in fact affected even well-educated, rich and prosperous families. Instead of focusing only on IS returnees from abroad, the policy must also address those who never travelled abroad but remain a potential threat due to their vulnerability to radicalization.
Of course, India would be better served if deepfakes, fake news and online propaganda were effectively countered digitally as well as through social-awareness measures and on-the-ground action by government agencies. It is imperative that the major stakeholders, i.e. government, educational institutions, civil society organisations, media and intellectuals, play a proactive role in advancing their counter-narrative among youth and society. The focus should be on preventing, rather than merely containing, the radicalisation narrative of vested interests.
Is Deterrence in Cyberspace Possible?
In 1996, soon after the Internet opened to the wider public, some 16 million people, a small fraction of the world's population, were connected to it. The Internet then grew steadily, and with more users it contributed roughly $4 trillion to the global economy by 2016 (Nye, 2016). Today, high-speed Internet, cutting-edge technologies and gadgets, and increasing cross-border data traffic are considered elements of globalization. Deterrence may seem a traditional, even obsolete, strategy, but developed countries depend on cyberspace to keep pace with global digitization, and no matter how advanced they are, vulnerabilities remain. Such reliance on the Internet also threatens to inflame the dynamics of international insecurity. To explore the topic, one must first understand what cyberspace and deterrence are. According to the Oxford dictionary:
“Cyberspace is the internet considered as an imaginary space without a physical location in which communication over computer networks takes place” (Oxford University Press).
The Collins dictionary explains the term ‘deterrence’ as:
“Deterrence is the prevention of something, especially war or crime, by having something such as weapons or punishment to use as a threat, e.g. nuclear weapons” (Collins English Dictionary).
The purpose of citing these definitions is to make it easier to distinguish deterrence in International Relations (IR) from deterrence in international cyber security. Deterrence in cyberspace is different from, and more difficult than, deterrence during the Cold War, a topic then important to both politicians and academia. The aim in both dimensions is nevertheless the same: to prevent something from happening. Cyberspace deterrence refers to preventing attacks and crime, and I agree entirely that deterrence is possible in cyberspace. Fischer (2019) cites Quinlan (2004) to the effect that no state is undeterrable.
To begin with, cyber threats loom across different domains, including espionage, disruption of democratic processes, sabotage of the political arena, and war, while international law remains unclear about which category such acts fall into. I would support my claim that deterrence is possible in cyberspace with the network attacks listed by the Pentagon (Fung, 2013). Millions of cyber-attacks are reported on a daily basis; the Pentagon alone has reported some 10 million cyberspace intrusions, most of them disruptive, costly, and annoying. At times the severity rises to a level considered a threat to national security, so professional strategic assistance is needed to deal with it. Past events show a perpetual threat with the ability to disrupt societies, economies, and government functioning.
Notable cyberspace attacks, and the responses meant to signal deterrence, include the following (Fung, 2013):
- Estonia's internet services suffered continuous disruption for several weeks after its dispute with Russia in 2007.
- Georgian defense communications were interrupted in 2008 after the Russian invasion of Georgia.
- More than 1000 centrifuges in Iran were destroyed via the STUXNET virus in 2010. The attacks were attributed to Israel and the United States of America.
- In response to the STUXNET attacks, Iran launched retaliatory attacks on U.S. financial institutions in 2012 and 2013.
- Similarly, in 2012, some 30,000 computers at the Saudi Aramco corporation were destroyed with a virus called SHAMOON; Iran was held responsible for the attack.
- North Korea was accused of penetrating South Korean data and machines in 2014, interrupting South Korean networks.
- Hybrid warfare between Russia and Ukraine in 2015 left parts of Ukraine without electricity for almost six hours.
- In the most critical scandal, still in the limelight, WikiLeaks released distressing and embarrassing emails obtained by Russian intelligence during the 2016 U.S. presidential campaign.
While such incidents may be considered failures of deterrence, this does not mean deterrence is impossible: every system has flaws that are exposed at some point, and in many of these cases the attacks were too minor to genuinely threaten national security. Nye (2016: 51) discusses the audiences whose attribution concerns shape deterrence: (i) intelligence agencies must guard to the highest degree against escalation by third parties, and governments must be able to count on their agencies' sources; (ii) the deterring party should not be taken lightly, for given the lingering loopholes and flaws in systems noted above, governments should not treat the intelligence as forsaken; and (iii) it is a political matter whether international and domestic audiences need to be persuaded, and what portion of information should be disclosed.
The mechanisms that are used, and are helpful, against adversary actions in cyberspace are as follows (Fischer, 2019):
- Deterrence by denial means ensuring that an adversary's actions fail to achieve their goals and objectives, so that attacks are not worth attempting in the first place.
- Deterrence by threat of punishment promises severe outcomes, penalties and costs inflicted on the attacker, that would outweigh the anticipated benefits of an attack.
- Deterrence by entanglement works on the principle of shared, interconnected, and interdependent vulnerabilities: because an attack would harm the attacker's own interests as well, entanglement encourages and reassures behavior as a responsible state with mutual interests.
- Normative taboos rest on strong values and norms: an aggressor's reputation and soft image in the eyes of the international community are at stake (a consideration that includes rational factors, since hard power is typically used against weaker states). This form of deterrence can work even without credible resilience.
The mechanisms of deterrence are thus also effective in the cyber realm. The four mechanisms (denial, punishment, entanglement, and normative taboos) are feasible ways to apply deterrence in the cyber world, and among many security strategies, cyber deterrence built on these four pillars is a versatile possibility. Conclusively, as long as the world keeps advancing in technological innovation, cyberspace intrusions will not stop, and neither will the debate on deterrence in the digital world.
An updated list of cyberspace intrusions from 2003 to 2021 is available from the Center for Strategic and International Studies (2021).