Authors: Hannes Grassegger and Mikael Krogerus
“The Aegean theater of ancient Greece was the place of astonishing revelations and intellectual excellence – a remarkable density and proximity, not surpassed up to our age. All we know about science, philosophy, sports, arts, culture and entertainment, stars and earth was postulated, explored and examined then and there.
Simply, it was a time and place of the triumph of human consciousness, of pure reasoning and sparkling thought. However, neither Euclid, Anaximander, Heraclitus, Hippocrates (both of Chios and of Cos), Socrates, Archimedes, Ptolemy, Democritus, Plato, Pythagoras, Diogenes, Aristotle, Empedocles, Conon, Eratosthenes nor any of the dozens of other brilliant ancient Greek minds ever referred, by so much as a word or a single sentence, to something that was their everyday life, something they saw literally on every corner throughout their entire lives. It was the immoral, unjust, notoriously brutal and oppressive system of slavery that powered the Antique state. (Slaves were not even regarded as humans, but rather as ‘tools able to speak.’) This myopia, this absence of critical reference to the obvious and omnipresent, is a historic message – highly disturbing, self-telling and quite a warning,” notes prof. Anis H. Bajrektarevic in his 2013 book ‘Is There Life after Facebook? – Geopolitics of Technology’.
Indeed, why do we constantly ignore the massive and sustained harvesting of our personal data from social networks, medical records, payment cards, the internet and smartphones, as well as its commercialization and monetization for dubious ends and disturbing futures?
Professor Bajrektarevic predicts and warns: “If humans hardly ever question fetishisation of their own McFB way of life, or oppose the (self-) trivialization, why then is the subsequent brutalization a surprise to them?”
Thus, should we really be surprised by the Brexit vote, by the results of the US elections, and by the forthcoming massive wins of right-wing parties all over Europe? “Putin is behind it!” – how easy, and how misleading, a self-denial.
Here is a story based on facts, if we are truly interested in grasping the Matrix world – the Iron Cage we have constructed for ourselves.
On November 9 at around 8:30 a.m., Michal Kosinski woke up in the Hotel Sunnehus in Zurich. The 34-year-old researcher had come to give a lecture at the Swiss Federal Institute of Technology (ETH) about the dangers of Big Data and the digital revolution. Kosinski gives regular lectures on this topic all over the world. He is a leading expert in psychometrics, a data-driven sub-branch of psychology. When he turned on the TV that morning, he saw that the bombshell had exploded: contrary to forecasts by all leading statisticians, Donald J. Trump had been elected president of the United States.
For a long time, Kosinski watched the Trump victory celebrations and the results coming in from each state. He had a hunch that the outcome of the election might have something to do with his research. Finally, he took a deep breath and turned off the TV.
On the same day, a then little-known British company based in London sent out a press release: “We are thrilled that our revolutionary approach to data-driven communication has played such an integral part in President-elect Trump’s extraordinary win,” Alexander James Ashburner Nix was quoted as saying. Nix is British, 41 years old, and CEO of Cambridge Analytica. He is always immaculately turned out in tailor-made suits and designer glasses, with his wavy blonde hair combed back from his forehead. His company wasn’t just integral to Trump’s online campaign, but to the UK’s Brexit campaign as well.
Of these three players—reflective Kosinski, carefully groomed Nix and grinning Trump—one of them enabled the digital revolution, one of them executed it and one of them benefited from it.
How dangerous is big data?
Anyone who has not spent the last five years living on another planet will be familiar with the term Big Data. Big Data means, in essence, that everything we do, both on and offline, leaves digital traces. Every purchase we make with our cards, every search we type into Google, every movement we make when our mobile phone is in our pocket, every “like” is stored. Especially every “like.” For a long time, it was not entirely clear what use this data could have—except, perhaps, that we might find ads for high blood pressure remedies just after we’ve Googled “reduce blood pressure.”
On November 9, it became clear that maybe much more is possible. The company behind Trump’s online campaign—the same company that had worked for Leave.EU in the very early stages of its “Brexit” campaign—was a Big Data company: Cambridge Analytica.
To understand the outcome of the election—and how political communication might work in the future—we need to begin with a strange incident at Cambridge University in 2014, at Kosinski’s Psychometrics Center.
Psychometrics, sometimes also called psychographics, focuses on measuring psychological traits, such as personality. In the 1980s, two teams of psychologists developed a model that sought to assess human beings based on five personality traits, known as the “Big Five.” These are: openness (how open are you to new experiences?), conscientiousness (how much of a perfectionist are you?), extroversion (how sociable are you?), agreeableness (how considerate and cooperative are you?) and neuroticism (how easily upset are you?). Based on these dimensions—also known as OCEAN, an acronym for openness, conscientiousness, extroversion, agreeableness, neuroticism—we can make a relatively accurate assessment of the kind of person in front of us, including their needs and fears, and how they are likely to behave. The “Big Five” has become the standard technique of psychometrics. But for a long time, the problem with this approach was data collection, because it involved filling out a complicated, highly personal questionnaire. Then came the Internet. And Facebook. And Kosinski.
Michal Kosinski was a student in Warsaw when his life took a new direction in 2008. He was accepted by Cambridge University to do his PhD at the Psychometrics Centre, one of the oldest institutions of this kind worldwide. Kosinski joined fellow student David Stillwell (now a lecturer at Judge Business School at the University of Cambridge) about a year after Stillwell had launched a little Facebook application in the days when the platform had not yet become the behemoth it is today. Their MyPersonality app enabled users to fill out different psychometric questionnaires, including a handful of psychological questions from the Big Five personality questionnaire (“I panic easily,” “I contradict others”). Based on the evaluation, users received a “personality profile”—individual Big Five values—and could opt-in to share their Facebook profile data with the researchers.
Kosinski had expected a few dozen college friends to fill in the questionnaire, but before long, hundreds, thousands, then millions of people had revealed their innermost convictions. Suddenly, the two doctoral candidates owned the largest dataset combining psychometric scores with Facebook profiles ever to be collected.
The approach that Kosinski and his colleagues developed over the next few years was actually quite simple. First, they provided test subjects with a questionnaire in the form of an online quiz. From their responses, the psychologists calculated the personal Big Five values of respondents. Kosinski’s team then compared the results with all sorts of other online data from the subjects: what they “liked,” shared or posted on Facebook, or what gender, age, place of residence they specified, for example. This enabled the researchers to connect the dots and make correlations.
Remarkably reliable deductions could be drawn from simple online actions. For example, men who “liked” the cosmetics brand MAC were slightly more likely to be gay; one of the best indicators for heterosexuality was “liking” Wu-Tang Clan. Followers of Lady Gaga were most probably extroverts, while those who “liked” philosophy tended to be introverts. While each piece of such information is too weak to produce a reliable prediction, when tens, hundreds, or thousands of individual data points are combined, the resulting predictions become really accurate.
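The statistical logic here can be sketched in a few lines. In the toy example below, the like names and their weights are invented for illustration; real weights would be learned from the questionnaire data described above. Each “like” nudges a log-odds score by a small amount, and a logistic link turns the sum into a probability, so many individually weak signals add up to a confident prediction:

```python
import math

# Hypothetical per-"like" log-odds weights for one trait (say, extraversion).
# Each weight is a weak signal on its own; in a real system they would be
# estimated from users who also completed the Big Five questionnaire.
LIKE_WEIGHTS = {
    "lady_gaga": 0.3,
    "philosophy": -0.25,
    "party_photos": 0.2,
    "chess_club": -0.15,
    "stand_up_comedy": 0.25,
}

def predict_trait(likes, weights, prior_log_odds=0.0):
    """Combine many weak like-based signals into one probability."""
    score = prior_log_odds + sum(weights.get(like, 0.0) for like in likes)
    return 1.0 / (1.0 + math.exp(-score))  # logistic link

# One like barely moves the estimate off 50 percent...
p_one = predict_trait(["lady_gaga"], LIKE_WEIGHTS)

# ...but a long, consistent like history pushes it toward certainty.
p_many = predict_trait(["lady_gaga", "party_photos", "stand_up_comedy"] * 20,
                       LIKE_WEIGHTS)
```

With a single like the estimate is only slightly better than a coin flip, while dozens of weak but consistent signals yield a near-certain call — the aggregation effect the researchers describe.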
Kosinski and his team tirelessly refined their models. In 2012, Kosinski proved that on the basis of an average of 68 Facebook “likes” by a user, it was possible to predict their skin color (with 95 percent accuracy), their sexual orientation (88 percent accuracy), and their affiliation to the Democratic or Republican party (85 percent). But it didn’t stop there. Intelligence, religious affiliation, as well as alcohol, cigarette and drug use, could all be determined. From the data it was even possible to deduce whether someone’s parents were divorced.
The strength of their modeling was illustrated by how well it could predict a subject’s answers. Kosinski continued to work on the models incessantly: before long, he was able to evaluate a person better than the average work colleague, merely on the basis of ten Facebook “likes.” Seventy “likes” were enough to outdo what a person’s friends knew, 150 what their parents knew, and 300 “likes” what their partner knew. More “likes” could even surpass what a person thought they knew about themselves. On the day that Kosinski published these findings, he received two phone calls. The threat of a lawsuit and a job offer. Both from Facebook.
Only weeks later Facebook “likes” became private by default. Before that, the default setting was that anyone on the internet could see your “likes.” But this was no obstacle to data collectors: while Kosinski always asked for the consent of Facebook users, many apps and online quizzes today require access to private data as a precondition for taking personality tests. (Anybody who wants to evaluate themselves based on their Facebook “likes” can do so on Kosinski’s website, and then compare their results to those of a classic Ocean questionnaire, like that of the Cambridge Psychometrics Center.)
But it was not just about “likes” or even Facebook: Kosinski and his team could now ascribe Big Five values based purely on how many profile pictures a person has on Facebook, or how many contacts they have (a good indicator of extraversion). But we also reveal something about ourselves even when we’re not online. For example, the motion sensor on our phone reveals how quickly we move and how far we travel (this correlates with emotional instability). Our smartphone, Kosinski concluded, is a vast psychological questionnaire that we are constantly filling out, both consciously and unconsciously.
Above all, however—and this is key—it also works in reverse: not only can psychological profiles be created from your data, but your data can also be used the other way round to search for specific profiles: all anxious fathers, all angry introverts, for example—or maybe even all undecided Democrats? Essentially, what Kosinski had invented was sort of a people search engine. He started to recognize the potential—but also the inherent danger—of his work.
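A minimal sketch of that reverse direction, with invented profiles and thresholds, might look like this — instead of computing a profile from a person's data, it filters a database of already-scored people for a psychological target group:

```python
# Invented OCEAN scores for three people; in the scenario described above,
# such scores would have been derived from their digital footprints.
profiles = [
    {"name": "A", "openness": 0.7, "conscientiousness": 0.5,
     "extraversion": 0.2, "agreeableness": 0.6, "neuroticism": 0.8},
    {"name": "B", "openness": 0.4, "conscientiousness": 0.9,
     "extraversion": 0.8, "agreeableness": 0.3, "neuroticism": 0.2},
    {"name": "C", "openness": 0.5, "conscientiousness": 0.6,
     "extraversion": 0.1, "agreeableness": 0.7, "neuroticism": 0.9},
]

def search(profiles, **bounds):
    """Return names of profiles whose traits fall inside (lo, hi) ranges."""
    hits = []
    for p in profiles:
        if all(lo <= p[trait] <= hi for trait, (lo, hi) in bounds.items()):
            hits.append(p["name"])
    return hits

# "All anxious introverts": high neuroticism, low extraversion.
anxious_introverts = search(profiles,
                            neuroticism=(0.7, 1.0),
                            extraversion=(0.0, 0.3))
```

Add more bound arguments and the result set narrows, which is exactly what makes such a “people search engine” useful — and, as Kosinski feared, abusable.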
To him, the internet was a gift from heaven. What he really wanted was to give something back, to share. Data can be copied, so why shouldn’t everyone benefit from it? It was the spirit of the Millennials, of an entire new generation, the beginning of a new era that transcended the limitations of the physical world. But what would happen, wondered Kosinski, if someone abused his people search engine to manipulate people? He began to add warnings to most of his scientific work. His approach, he warned, “could pose a threat to an individual’s well-being, freedom, or even life.” But no one seemed to grasp what he meant.
Around this time, in early 2014, Kosinski was approached by a young assistant professor in the psychology department called Aleksandr Kogan. He said he was inquiring on behalf of a company that was interested in Kosinski’s method, and wanted to access the MyPersonality database. Kogan wasn’t at liberty to reveal for what purpose; he was bound to secrecy.
At first, Kosinski and his team considered this offer, as it would mean a great deal of money for the institute, but then he hesitated. Finally, Kosinski remembers, Kogan revealed the name of the company: SCL, or Strategic Communication Laboratories. Kosinski Googled the company: “[We are] the premier election management agency,” says the company’s website. SCL provides marketing based on psychological modeling. One of its core focuses: Influencing elections. Influencing elections? Perturbed, Kosinski clicked through the pages. What kind of company was this? And what were these people planning?
What Kosinski did not know at the time: SCL is the parent of a group of companies. Who exactly owns SCL and its diverse branches is unclear, thanks to a convoluted corporate structure of the type seen in filings at UK Companies House, in the Panama Papers, and in the Delaware company registry. Some of the SCL offshoots have been involved in elections from Ukraine to Nigeria, and helped the Nepalese monarch against the Maoists, while others have developed methods to influence Eastern European and Afghan citizens for NATO. And, in 2013, SCL spun off a new company to participate in US elections: Cambridge Analytica.
Kosinski knew nothing about all this, but he had a bad feeling. “The whole thing started to stink,” he recalls. On further investigation, he discovered that Aleksandr Kogan had secretly registered a company doing business with SCL. According to a December 2015 report in the Guardian and to internal company documents given to Das Magazin, it emerges that SCL learned about Kosinski’s method from Kogan.
Kosinski came to suspect that Kogan’s company might have reproduced the Facebook “Likes”-based Big Five measurement tool in order to sell it to this election-influencing firm. He immediately broke off contact with Kogan and informed the director of the institute, sparking a complicated conflict within the university. The institute was worried about its reputation. Aleksandr Kogan then moved to Singapore, married, and changed his name to Dr. Spectre. Michal Kosinski finished his PhD, got a job offer from Stanford and moved to the US.
All was quiet for about a year. Then, in November 2015, the more radical of the two Brexit campaigns, “Leave.EU,” supported by Nigel Farage, announced that it had commissioned a Big Data company to support its online campaign: Cambridge Analytica. The company’s core strength: innovative political marketing—microtargeting—by measuring people’s personality from their digital footprints, based on the OCEAN model.
Now Kosinski received emails asking what he had to do with it—the words Cambridge, personality, and analytics immediately made many people think of Kosinski. It was the first time he had heard of the company, which borrowed its name, it said, from its first employees, researchers from the university. Horrified, he looked at the website. Was his methodology being used on a grand scale for political purposes?
After the Brexit result, friends and acquaintances wrote to him: Just look at what you’ve done. Everywhere he went, Kosinski had to explain that he had nothing to do with this company. (It remains unclear how deeply Cambridge Analytica was involved in the Brexit campaign. Cambridge Analytica would not discuss such questions.)
For a few months, things are relatively quiet. Then, on September 19, 2016, just over a month before the US elections, the guitar riffs of Creedence Clearwater Revival’s “Bad Moon Rising” fill the dark-blue hall of New York’s Grand Hyatt hotel. The Concordia Summit is a kind of World Economic Forum in miniature. Decision-makers from all over the world have been invited, among them Swiss President Johann Schneider-Ammann. “Please welcome to the stage Alexander Nix, chief executive officer of Cambridge Analytica,” a smooth female voice announces. A slim man in a dark suit walks onto the stage. A hush falls. Many in attendance know that this is Trump’s new digital strategy man. (A video of the presentation was posted on YouTube.)
A few weeks earlier, Trump had tweeted, somewhat cryptically, “Soon you’ll be calling me Mr. Brexit.” Political observers had indeed noticed some striking similarities between Trump’s agenda and that of the right-wing Brexit movement. But few had noticed the connection with Trump’s recent hiring of a marketing company named Cambridge Analytica.
Up to this point, Trump’s digital campaign had consisted of more or less one person: Brad Parscale, a marketing entrepreneur and failed start-up founder who created a rudimentary website for Trump for $1,500. The 70-year-old Trump is not digitally savvy—there isn’t even a computer on his office desk. Trump doesn’t do emails, his personal assistant once revealed. She herself talked him into having a smartphone, from which he now tweets incessantly.
Hillary Clinton, on the other hand, relied heavily on the legacy of the first “social-media president,” Barack Obama. She had the address lists of the Democratic Party, worked with cutting-edge big data analysts from BlueLabs and received support from Google and DreamWorks. When it was announced in June 2016 that Trump had hired Cambridge Analytica, the establishment in Washington just turned up their noses. Foreign dudes in tailor-made suits who don’t understand the country and its people? Seriously?
“It is my privilege to speak to you today about the power of Big Data and psychographics in the electoral process.” The logo of Cambridge Analytica— a brain composed of network nodes, like a map, appears behind Alexander Nix. “Only 18 months ago, Senator Cruz was one of the less popular candidates,” explains the blonde man in a cut-glass British accent, which puts Americans on edge the same way that a standard German accent can unsettle Swiss people. “Less than 40 percent of the population had heard of him,” another slide says. Cambridge Analytica had become involved in the US election campaign almost two years earlier, initially as a consultant for Republicans Ben Carson and Ted Cruz. Cruz—and later Trump—was funded primarily by the secretive US software billionaire Robert Mercer who, along with his daughter Rebekah, is reported to be the largest investor in Cambridge Analytica.
“So how did he do this?” Up to now, explains Nix, election campaigns have been organized based on demographic concepts. “A really ridiculous idea. The idea that all women should receive the same message because of their gender—or all African Americans because of their race.” What Nix meant is that while other campaigners so far have relied on demographics, Cambridge Analytica was using psychometrics.
Though this might be true, Cambridge Analytica’s role within Cruz’s campaign isn’t undisputed. In December 2015 the Cruz team credited their rising success to psychological use of data and analytics. In Advertising Age, a political client said the embedded Cambridge staff was “like an extra wheel,” but found their core product, Cambridge’s voter data modeling, still “excellent.” The campaign would pay the company at least $5.8 million to help identify voters in the Iowa caucuses, which Cruz won, before dropping out of the race in May.
Nix clicks to the next slide: five different faces, each face corresponding to a personality profile. It is the Big Five or OCEAN Model. “At Cambridge,” he said, “we were able to form a model to predict the personality of every single adult in the United States of America.” The hall is captivated. According to Nix, the success of Cambridge Analytica’s marketing is based on a combination of three elements: behavioral science using the OCEAN Model, Big Data analysis, and ad targeting. Ad targeting is personalized advertising, aligned as accurately as possible to the personality of an individual consumer.
Nix candidly explains how his company does this. First, Cambridge Analytica buys personal data from a range of different sources, like land registries, automotive data, shopping data, bonus cards, club memberships, what magazines you read, what churches you attend. Nix displays the logos of globally active data brokers like Acxiom and Experian—in the US, almost all personal data is for sale. For example, if you want to know where Jewish women live, you can simply buy this information, phone numbers included.
Now Cambridge Analytica aggregates this data with the electoral rolls of the Republican party and online data and calculates a Big Five personality profile. Digital footprints suddenly become real people with fears, needs, interests, and residential addresses.
The methodology looks quite similar to the one that Michal Kosinski once developed. Cambridge Analytica also uses, Nix told us, “surveys on social media” and Facebook data. And the company does exactly what Kosinski warned of: “We have profiled the personality of every adult in the United States of America—220 million people,” Nix boasts.
He brings up a screenshot. “This is a data dashboard that we prepared for the Cruz campaign.” A digital control center appears. On the left are diagrams; on the right, a map of Iowa, where Cruz won a surprisingly large number of votes in the primary. And on the map, there are hundreds of thousands of small red and blue dots. Nix narrows down the criteria: “Republicans”—the blue dots disappear; “not yet convinced”—more dots disappear; “male,” and so on. Finally, only one person remains: name, age, address, interests, personality and political inclination. How does Cambridge Analytica now target this person with an appropriate political message?
Nix shows how psychographically categorized voters can be differently addressed, based on the example of gun rights, the 2nd Amendment: “For a highly neurotic and conscientious audience the threat of a burglary—and the insurance policy of a gun.” An image on the left shows the hand of an intruder smashing a window. The right side shows a man and a child standing in a field at sunset, both holding guns, clearly shooting ducks: “Conversely, for a closed and agreeable audience. People who care about tradition, and habits, and family.”
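The matching step Nix describes can be illustrated with a hedged sketch — the trait thresholds and ad copy below are invented for the example, not Cambridge Analytica's actual rules. The issue stays the same; only the framing is routed by the profile's dominant traits:

```python
# Two invented framings of the same issue, echoing the gun-rights example.
AD_VARIANTS = {
    "fear_framing": "Burglary threat -- a gun as an insurance policy.",
    "tradition_framing": "Hunting at sunset -- family, habits, tradition.",
}

def pick_variant(profile):
    """Route a voter to an ad framing based on dominant OCEAN traits."""
    # Highly neurotic and conscientious: respond to threat + precaution.
    if profile["neuroticism"] > 0.6 and profile["conscientiousness"] > 0.6:
        return "fear_framing"
    # Closed (low openness) and agreeable: respond to tradition and family.
    if profile["openness"] < 0.4 and profile["agreeableness"] > 0.6:
        return "tradition_framing"
    return "tradition_framing"  # fallback for this two-variant sketch

chosen = pick_variant({"neuroticism": 0.8, "conscientiousness": 0.7,
                       "openness": 0.5, "agreeableness": 0.4})
```

Scale this from two hand-written rules to thousands of machine-tuned variants and you arrive at the per-voter message tailoring described in the rest of the article.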
How to keep Clinton voters away from the ballot box
Trump’s striking inconsistencies, his much-criticized fickleness, and the resulting array of contradictory messages, suddenly turned out to be his great asset: a different message for every voter. The notion that Trump acted like a perfectly opportunistic algorithm following audience reactions is something the mathematician Cathy O’Neil observed in August 2016.
“Pretty much every message that Trump put out was data-driven,” Alexander Nix remembers. On the day of the third presidential debate between Trump and Clinton, Trump’s team tested 175,000 different ad variations for his arguments, in order to find the right versions above all via Facebook. The messages differed for the most part only in microscopic details, in order to target the recipients in the optimal psychological way: different headings, colors, captions, with a photo or video. This fine-tuning reaches all the way down to the smallest groups, Nix explained in an interview with us. “We can address villages or apartment blocks in a targeted way. Even individuals.”
In the Miami district of Little Haiti, for instance, Trump’s campaign provided inhabitants with news about the failure of the Clinton Foundation following the earthquake in Haiti, in order to keep them from voting for Hillary Clinton. This was one of the goals: to keep potential Clinton voters (which include wavering left-wingers, African-Americans, and young women) away from the ballot box, to “suppress” their vote, as one senior campaign official told Bloomberg in the weeks before the election. These “dark posts”—sponsored news-feed-style ads in Facebook timelines that can only be seen by users with specific profiles—included videos aimed at African-Americans in which Hillary Clinton refers to black men as predators, for example.
Nix finishes his lecture at the Concordia Summit by stating that traditional blanket advertising is dead. “My children will certainly never, ever understand this concept of mass communication.” And before leaving the stage, he announced that since Cruz had left the race, the company was helping one of the remaining presidential candidates.
Just how precisely the American population was being targeted by Trump’s digital troops at that moment was not visible, because they attacked less on mainstream TV and more with personalized messages on social media or digital TV. And while the Clinton team thought it was in the lead, based on demographic projections, Bloomberg journalist Sasha Issenberg was surprised to note on a visit to San Antonio—where Trump’s digital campaign was based—that a “second headquarters” was being created. The embedded Cambridge Analytica team, apparently only a dozen people, received $100,000 from Trump in July, $250,000 in August, and $5 million in September. According to Nix, the company earned over $15 million overall. (The company is incorporated in the US, where laws regarding the release of personal data are more lax than in European Union countries. Whereas European privacy laws require a person to “opt in” to a release of data, those in the US permit data to be released unless a user “opts out.”)
The measures were radical: From July 2016, Trump’s canvassers were provided with an app with which they could identify the political views and personality types of the inhabitants of a house. It was the same app provider used by Brexit campaigners. Trump’s people only rang at the doors of houses that the app rated as receptive to his messages. The canvassers came prepared with guidelines for conversations tailored to the personality type of the resident. In turn, the canvassers fed the reactions into the app, and the new data flowed back to the dashboards of the Trump campaign.
Again, this is nothing new. The Democrats did similar things, but there is no evidence that they relied on psychometric profiling. Cambridge Analytica, however, divided the US population into 32 personality types, and focused on just 17 states. And just as Kosinski had established that men who like MAC cosmetics are slightly more likely to be gay, the company discovered that a preference for cars made in the US was a great indication of a potential Trump voter. Among other things, these findings now showed Trump which messages worked best and where. The decision to focus on Michigan and Wisconsin in the final weeks of the campaign was made on the basis of data analysis. The candidate became the instrument for implementing a big data model.
But to what extent did psychometric methods influence the outcome of the election? When asked, Cambridge Analytica was unwilling to provide any proof of the effectiveness of its campaign. And it is quite possible that the question is impossible to answer.
And yet there are clues: there is the surprising rise of Ted Cruz during the primaries, the increased number of voters in rural areas, and the decline in African-American early votes. The fact that Trump spent so little money may also be explained by the effectiveness of personality-based advertising, as may the fact that he invested far more in digital than in TV campaigning compared to Hillary Clinton. Facebook proved to be the ultimate weapon and the best election campaigner, as Nix explained, and as comments by several core Trump campaigners demonstrate.
Many voices have claimed that the statisticians lost the election because their predictions were so off the mark. But what if statisticians in fact helped win the election—but only those who were using the new method? It is an irony of history that Trump, who often grumbled about scientific research, used a highly scientific approach in his campaign.
Another big winner is Cambridge Analytica. Its board member Steve Bannon, former executive chair of the right-wing online newspaper Breitbart News, has been appointed as Donald Trump’s senior counselor and chief strategist. Whilst Cambridge Analytica is not willing to comment on alleged ongoing talks with UK Prime Minister Theresa May, Alexander Nix claims that he is building up his client base worldwide, and that he has received inquiries from Switzerland, Germany, and Australia. His company is currently touring European conferences showcasing their success in the United States. This year three core countries of the EU are facing elections with resurgent populist parties: France, Holland and Germany. The electoral successes come at an opportune time, as the company is readying for a push into commercial advertising.
Kosinski has observed all of this from his office at Stanford. Following the US election, the university is in turmoil. Kosinski is responding to developments with the sharpest weapon available to a researcher: a scientific analysis. Together with his research colleague Sandra Matz, he has conducted a series of tests, which will soon be published. The initial results are alarming: The study shows the effectiveness of personality targeting by showing that marketers can attract up to 63 percent more clicks and up to 1,400 more conversions in real-life advertising campaigns on Facebook when matching products and marketing messages to consumers’ personality characteristics. They further demonstrate the scalability of personality targeting by showing that the majority of Facebook Pages promoting products or brands are affected by personality and that large numbers of consumers can be accurately targeted based on a single Facebook Page.
In a statement after the German publication of this article, a Cambridge Analytica spokesperson said, “Cambridge Analytica does not use data from Facebook. It has had no dealings with Dr. Michal Kosinski. It does not subcontract research. It does not use the same methodology. Psychographics was hardly used at all. Cambridge Analytica did not engage in efforts to discourage any Americans from casting their vote in the presidential election. Its efforts were solely directed towards increasing the number of voters in the election.”
The world has been turned upside down. Great Britain is leaving the EU, Donald Trump is president of the United States of America. And in Stanford, Kosinski, who wanted to warn against the danger of using psychological targeting in a political setting, is once again receiving accusatory emails. “No,” says Kosinski, quietly and shaking his head. “This is not my fault. I did not build the bomb. I only showed that it exists.”
Hannes Grassegger and Mikael Krogerus are investigative journalists at the Swiss weekly Das Magazin. The original text appeared in the late December edition under the title “Ich habe nur gezeigt, dass es die Bombe gibt” (“I only showed that the bomb exists”). This English translation is based on the subsequent January version, first published by Motherboard magazine under the title “The Data That Turned the World Upside Down.”
Islamic State after ISIS: Colonies without Metropole or Cyber Activism?
With the world constantly following the events in the Middle East, much now depends on the shape, form and ‘policy’ Islamic State is going to take. What form will the IS take? What role will cryptocurrencies play in funding terrorists? How can Russia and the US cooperate in fighting mutual security threats? RIAC expert Tatyana Kanunnikova discusses these issues with Dr. Joseph Fitsanakis, Associate Professor of Political Science in the Intelligence and National Security Studies program at Coastal Carolina University.
Islamic State is perceived as an international threat. In which regions is it losing ground and in which ones is it on the rise? Could you please describe IS geography today?
Groups like the Islamic State are mobile. They tend to move and redeploy across international borders with relative ease, and are truly global in both outlook and reach. It is worth noting that, from a very early stage in its existence, the Islamic State incorporated into its administrative structure the so-called vilayets, namely semi-autonomous overseas provinces or possessions. These included parts of Libya, Afghanistan, Somalia, the Philippines, Nigeria, and of course Egypt’s Sinai Peninsula. By the first week of 2018, the Islamic State had all but vanished from its traditional base in the Levant.
How has the loss of its administrative centers affected the organization’s strategy?
There are two competing answers to this question. The first possible answer is that ISIS’ plan is similar to that of Great Britain in 1940, when the government of Winston Churchill was facing the prospect of invasion by the forces of the German Reich. London’s plan at the time was to use its overseas colonies as bases from which to continue to fight following a possible German takeover of Britain. It is possible that ISIS’ strategy revolves around a similar plan, in which case we may see concerted flare-ups of insurgent activity in Egypt, Southeast Asia, Afghanistan, Somalia, Kenya, and elsewhere. The second possible answer to the question of ISIS’ strategy is that the group may be entering a period of relative dormancy, during which it will concentrate on cyber activism and online outreach aimed at disaffected youth in Western Europe, the Caucasus, and North America. According to this scenario, ISIS will use its formidable online dexterity to establish new communities of Millennial and Generation Z members, and renegotiate its strategy in light of the loss of physical lands in the Levant. This scenario envisages an online geography for the Islamic State, which may eventually lead to the emergence of a new model of activity. The latter will probably resemble al-Qaeda’s decentralized, cell-based model that focuses on sharp, decisive strikes at foreign targets.
Commenting for an article in Asia Times, you said that ISIS returnees are extremely valuable sources of intelligence. How can they be effectively identified in the flow of migrants? How exactly can security services exploit the experience of these militants?
In its essence, the Sunni insurgency is a demassified movement. By this I mean that its leaders have never intended for it to become a mass undertaking. The Islamic State, like al-Qaeda before it, does not depend on large numbers of followers. Rather it depends on individual mobilization. Senior Islamic State leaders like Abu Bakr al-Baghdadi, Mohamed Mahmoud, Tarad Muhammad al-Jarba, and others, have no interest in deploying 10,000 fighters who may be reluctant and weak-willed. They are content with 100 fighters who are unswerving in their commitment and prepared to devote everything to the struggle, including their lives. Consider some of the most formidable strikes of the Sunni insurgency against its enemies: the attacks of 9/11 in the United States, the 7/7 bombings in the United Kingdom, the November 2015 strikes in Paris, and the fall of Mosul in 2014. There have been more large-scale strikes on Russian, Lebanese, Afghan, Egyptian, and other targets. What connects all of those is the relatively small number of totally dedicated fighters that carried them out. The fall of Mosul, for example, which brought the Islamic State to the height of its power, was carried out by no more than 1,500 fighters, who took on two divisions of the Iraqi Army, numbering more than 30,000 troops.
The reliance on a small number of dedicated fighters mirrors the recruitment tactics of the Islamic State (and al-Qaeda before it), which rest on individual attention paid to selected young men who are seen as reliable and steadfast. This is precisely the type of emphasis that should be placed by European, American and Russian security agencies on suspected members of terrorist groups who are captured, or are detected within larger groups of migrants. What is required here is individual attention given by security operatives who have an eye for detail and are knowledgeable of the culture, customs and ways of thinking of predominantly Muslim societies. However, most governments have neither the patience nor the expertise to implement a truly demassified exploitation campaign that targets individuals with an eye to de-radicalization and — ultimately — exploitation. The experience of the Syrian migrants in countries like Italy and Greece is illustrative of this phenomenon. The two countries — already overwhelmed by domestic political problems and financial uncertainty — were left primarily to their own means by a disinterested and fragmented European Union. Several members of the EU, including Poland, Hungary, and the United Kingdom, have for all practical purposes positioned themselves outside of the EU mainstream. At the same time, the United States, which is the main instigator of the current instability in the Middle East, shows no serious interest in de-radicalization and exploitation programs. This has been a consistent trend in Washington under the administrations of Barack Obama and Donald Trump.
In your opinion, will cryptocurrencies become a significant source of terrorism funding? Some experts believe that pressure on traditional methods of financing may facilitate this process.
In the old days of the 1970s and 1980s, most terrorist groups raised funds primarily through extortion, kidnappings, bank robberies and — to a lesser extent — drugs. Things have changed considerably in our century. Today, cryptocurrencies are not in themselves sources of funding — though it can be argued that the frequent rise in the value of many cryptocurrencies generates income for terrorist organizations — but more a method of circulating currency and providing services that generate funds. With the use of cryptocurrencies and the so-called Darknet, terrorist organizations are now able to engage in creative means of generating cash. They include the sale of pirated music, movies and, most of all, videogames. They also engage in the sale of counterfeit products, including clothing, electronics and other hi-tech accessories. Additionally, they sell counterfeit pharmaceutical products and even counterfeit tickets to high-profile sports events and music concerts. Those who buy those products often pay for them using cryptocurrencies, primarily through the Darknet. Looking at the broad picture, it is clear that the use of cryptocurrencies constitutes a form of asymmetric finance that circumvents established financial structures and operates using irregular means that for now remain largely undetected. Few terrorist groups will resist the temptation to employ this new method of unregulated financial transaction.
How can Russian and US intelligence and security services cooperate in combating terrorism? In December 2017, media reported that the CIA had helped its Russian counterpart foil a terror attack in St. Petersburg. What should be done to deepen and broaden this kind of cooperation?
Despite friction on the political level, cooperation between Russian and American intelligence agencies in the field of counter-terrorism is far more routine than is generally presumed. Last December’s report of the CIA sharing intelligence with its Russian counterpart was notable in that it was publicly disclosed. Most instances of intelligence cooperation between Washington and Moscow are not publicized. In February 2016, the then CIA director John Brennan stated publicly that the CIA works closely with the Russian intelligence community in counter-terrorism operations directed against Islamist militants. He described the CIA’s relationship with Russian intelligence officials as a “very factual, informative exchange.” He added that “if the CIA gets information about threats to Russian citizens or diplomats, we will share it with the Russians”. And he added: “they do the same with us”. Brennan gave the example of the 2014 Winter Olympics in Sochi, Russia. He said: “We worked very closely with [Russian intelligence agencies]” during the Sochi games to “try to prevent terrorist attacks. And we did so very successfully”. There is no reason to doubt the sincerity of Brennan’s statement.
Professionals are always more likely to find common areas of interest. So, in what areas, apart from combating terrorism, can Russian and US intelligence services cooperate?
There is a virtually endless list of common concerns that ought to and often do bring together American and Russian intelligence agencies. To begin with, there are two major existential threats to the security of both countries and the whole world that demand close cooperation between Washington and Moscow and their respective intelligence agencies. The first threat is the black market in weapons of mass destruction, notably chemical weapons, biological agents, and even radioactive material. In the past 20 years, there have been several cases of individuals or groups trying to sell or trade radioactive substances. The fear of such weapons possibly falling into the hands of non-state insurgents should be sufficient to entice close cooperation between American and Russian intelligence agencies. The second existential threat is that of global warming and its effects on international security. It is no secret that the rise in global temperatures is already having a measurable negative impact on food production, desertification, sea-level rise, and other factors that contribute to the destabilization of the economies of entire regions. Such trends fuel militancy, political extremism, wars, and mass migrations of populations, all of which are serious threats to the stability of the international system. Solving this global problem will require increased and prolonged cooperation on the political, economic, and security/intelligence level between the United States and Russia. The two countries must also work closely on a series of other topics, including standardizing the global regulation of cryptocurrencies, diffusing tensions between the two rival nuclear powers of India and Pakistan, tackling the tensions in the Korean Peninsula, preventing the destabilization of Egypt (the world’s largest Arab country), combating the growth of Sunni militancy in West Africa, and numerous other issues.
Among other things, you are an expert in the Cold War. At present, Russia and the USA are experiencing a period of tensions in their relationship. In your opinion, what should be done in order to overcome these challenges and mend fences?
For those of us who remember the Cold War, and have studied the development of Russia–US relations in the postwar era, the current state of affairs between Washington and Moscow seems comparatively manageable. Despite tensions between Washington and Moscow, we are, thankfully, very far from an emergency of the type of the Berlin Crisis of 1961, the Cuban Missile Crisis, or even the 1984 collision between the US aircraft carrier Kitty Hawk and the Soviet submarine K-314 in the Sea of Japan. How do we avoid such dangerous escalations? The answer is simple: regularize communications between the two countries on various levels, including executive, political, economic and security-related. Such communications should continue even or, arguably, especially at times of rising tensions between the two nations. The overall context of this approach rests on the indisputable truth that Russia and the United States are the two central pillars on which the idea of world peace can be built for future generations.
First published in our partner RIAC
*Dr. Joseph Fitsanakis is Associate Professor of Political Science in the Intelligence and National Security Studies program at Coastal Carolina University. Prior to joining Coastal, he built the Security and Intelligence Studies program at King University, where he also directed the King Institute for Security and Intelligence Studies. An award-winning professor, Dr. Fitsanakis has lectured, taught and written extensively on subjects such as international security, intelligence, cyberespionage, and transnational crime. He is a syndicated columnist and frequent contributor to news media such as BBC television and radio, ABC Radio, Newsweek, and Sputnik, and his work has been referenced in outlets including The Washington Post, Foreign Policy, Politico, and The Huffington Post. Fitsanakis is also deputy director of the European Intelligence Academy and senior editor at intelNews.org, a scholarly blog that is cataloged through the United States Library of Congress.
How do security decisions go wrong?
Information warfare is primarily a construct of a ‘war mindset’. However, the development of information operations from it has meant that these concepts have been transferred from military to civilian affairs. The contemporary entanglement of the media and the military in the world of the ‘War on Terrorism’ has made the distinction between war and peace difficult to draw. Below, the application of deception is described in the military context, but it must be added that the dividing line is blurred.
The correct control of security often depends on decisions under uncertainty. Using quantified information about risk, one may hope to achieve more precise control by making better decisions.
Security is both a normative and a descriptive problem. We would like to understand normatively how to make correct decisions about security, but also descriptively where security decisions may go wrong. According to Schneier, security risk is both a subjective feeling and an objective reality, and sometimes those two views differ so that we fail to act correctly. Assuming that people act on perceived rather than actual risks, we will sometimes do things we should avoid, and sometimes fail to act as we should. In security, people may both feel secure when they are not, and feel insecure when they are actually secure. Given the recent attempts in security that aim at quantifying security properties, also known as security metrics, I am interested in how to achieve correct metrics that can help a decision-maker control security. But would successful quantification be the end of the story?
The aim of this note is to explore the potential difference between correct and actual security decisions when people are supposed to decide and act based on quantified information about risky options. If there is a gap between correct and actual decisions, how can we begin to model and characterize it? How large is it, and where might someone exploit it? What can be done to close it? As a specific example, this note considers the impact of using risk as a security metric for decision-making in security. The motivation to use risk is two-fold. First, risk is a well-established concept that has been applied in numerous ways to understand information security and is often assumed to be a good metric. Second, I believe that it is currently the only well-developed, reasonable candidate that involves the two aspects necessary for the control of operational security: asset value and threat uncertainty. Good information security is often seen as risk management, which depends on methods to assess those risks correctly. However, this work examines potential threats and shortcomings concerning the usability of correctly quantified risk for security decisions.
I consider a system that a decision-maker needs to protect in an environment with uncertain threats. Furthermore, I assume that the decision-maker wants to maximize some kind of security utility (the utility of the security controls available) when making decisions regarding different security controls. These parts of the model vary greatly between scenarios, and little can be done to model detailed security decisions in general. Still, I think this is an appropriate framework for understanding the need for security metrics. One way, perhaps the standard way, to view security as a decision problem is that threats arise in the system and environment, and the decision-maker needs to address those threats with the available information, using some appropriate cost-benefit tradeoff. However, this common view overlooks faults made by the decision-maker herself. I believe that many security failures should be seen in light of the limits (or potential faults) of the decision-maker when she, with the best intentions, attempts to achieve security goals (maximizing security utility) by deciding between different security options.
I loosely think of correct decisions as maximization of utility, in a way to be specified later.
Information security is increasingly seen not only as the fulfillment of Confidentiality, Integrity and Availability, but as protecting against a number of threats by making correct economic tradeoffs. A growing body of research into the economics of information security during the last decade aims to understand security problems in terms of economic factors and incentives among agents making decisions about security, typically assumed to aim at maximizing their utility. Such analysis treats economic factors as equally important in explaining security problems as the properties inherent in the systems to be protected. It is thus natural to view the control of security as a sequence of decisions that have to be made as new information appears about an uncertain threat environment. Seen in this light, and given that obtaining security information usually comes at a cost, I think that any usage of security metrics must be related to enabling more rational decisions with respect to security. It is in this way that I consider security metrics and decisions in the following.
The basic way to understand any decision-making situation is to consider what kind of information the decision-maker will have available to form the basis of judgments. For people, both the available information and, potentially, the way in which it is framed (presented) may affect how well decisions will serve their goals. One common requirement on security metrics is that they should be able to guide decisions and actions toward security goals. However, it is an open question how to make a security metric usable, and ensuring that such usage will be correct (with respect to achieving goals) comes with challenges. The idea of using quantified risk as a metric for decisions can be split into two steps. First, perform objective risk analysis, using assessments of both system vulnerabilities and available threats, in order to measure security risk. Second, present these results in a usable way so that the decision-maker can make correct and rational decisions.
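The two-step idea can be sketched in code. The following is a hypothetical illustration, not a method from the text: all figures, the control names, and the simple "fractional probability reduction" model are invented assumptions. Risk is quantified as expected loss (threat probability times asset impact), and the decision-maker picks the control with the highest net utility, i.e. loss avoided minus the control's cost.

```python
# Hypothetical sketch: quantified risk as a decision metric.
# Step 1: measure risk as expected loss. Step 2: choose the control
# that maximizes net utility. Numbers and names are invented.

def expected_loss(p_threat: float, impact: float, reduction: float) -> float:
    """Expected loss remaining after a control cuts the threat
    probability by the given fraction."""
    return p_threat * (1.0 - reduction) * impact

def best_control(p_threat, impact, controls):
    """Pick the control with the highest net utility:
    (expected loss avoided) - (cost of the control)."""
    baseline = p_threat * impact  # expected loss with no control
    def net_utility(ctrl):
        name, cost, reduction = ctrl
        return baseline - expected_loss(p_threat, impact, reduction) - cost
    return max(controls, key=net_utility)

# Invented scenario: 5% annual breach probability, a $200k asset.
controls = [
    ("do nothing",       0,     0.0),
    ("basic hardening",  2_000, 0.50),
    ("full monitoring",  9_000, 0.90),
]
name, cost, reduction = best_control(0.05, 200_000, controls)
# Here "basic hardening" wins: it avoids $5,000 of expected loss
# for a $2,000 cost, a better net utility than either alternative.
```

Note that the ranking is entirely driven by the quantified inputs: if the risk assessment in step 1 is wrong, or the decision-maker misreads the presented numbers in step 2, a different control would be chosen.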
While both of these steps present considerable challenges to using good security metrics, I consider why decisions using quantified security risk as a metric may go wrong in the second step. Lacking information about the security properties of a system clearly limits security decisions, but I fear that introducing metrics does not necessarily improve them; this may be because 1) the information is incorrect or imprecise, or 2) its usage is incorrect. This work takes the second view, and I argue that even with perfect risk assessment, it is not obvious that security decisions will always improve. I am thus seeking properties in risky decision problems that predict whether the overall goal, maximizing utility, will or will not be fulfilled. More specifically, I need to find properties in quantifications that may put decision-making at risk of going wrong.
The way to understand where security decisions go wrong is to model how people are predicted to act on perceived rather than actual risk. I thus need both normative and descriptive models of decision-making under risk. For normative decisions, I use the well-established economic principle of maximizing expected utility. For the descriptive part, I note that faults in risky decisions not only happen in various situations, but have remarkably been shown to happen systematically, as described by models from behavioral economics.
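One such systematic deviation can be illustrated with the probability-weighting function from Tversky and Kahneman's cumulative prospect theory, a standard descriptive model from behavioral economics. The sketch below is an illustration I am adding, not material from the text; the parameter gamma = 0.61 is their published estimate for gains and is taken here as an assumption. The function overweights small probabilities and underweights large ones, which is one concrete way perceived risk diverges from actual risk.

```python
# Sketch of the Tversky-Kahneman (1992) probability-weighting function.
# gamma = 0.61 is their estimate for gains; treated as an assumption here.

def weight(p: float, gamma: float = 0.61) -> float:
    """Perceived decision weight for an objective probability p."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# A rare threat (p = 0.01) is perceived as several times more likely
# than it is, while a near-certain one (p = 0.99) feels less certain:
for p in (0.01, 0.50, 0.99):
    print(f"actual {p:.2f} -> perceived {weight(p):.2f}")
```

Under this model, a decision-maker shown a perfectly correct 1% breach probability may act as if it were roughly 6%, which is exactly the kind of gap between correct and actual decisions discussed above, even when step one (risk assessment) is flawless.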
I have considered what happens when quantified risk is used by people making security decisions. An exploration of the parameter space in two simple problems showed that results from behavioral economics may have an impact on the usability of quantitative risk methods. The visualized results do not lend themselves to easy and intuitive explanations, but I view them as a first systematic step towards understanding security problems with quantitative information.
There have been many proposals to quantify risk for information security, mostly in order to allow better security decisions. But a blind belief in quantification itself seems unwise, even if it is done correctly. Behavioral economics shows systematic deviations in probability weighting when people act on explicit risk. This is likely to threaten security and its goals, as security is increasingly seen as the management of economic trade-offs. I think that these findings can be used to partially predict or understand wrong security decisions that depend on risk information. Furthermore, this motivates studying how strategic agents may manipulate, or attack, the perception of a risky decision.
Even though any descriptive model of human decision-making is approximate at best, I still believe this work gives a well-articulated argument regarding the threats of using explicit risk as a security metric. My approach may also be understood in terms of standard system specification and threat models: economic rationality is in this case the specification, and the threat stems from biases in acting on risk information. I also studied a way of correcting the problem through reframing for two simple security decision scenarios, but obtained only partial predictive support for fixing problems this way. Furthermore, I have not found comparable numerical examinations in behavioral economics to date.
Further work on this topic needs to empirically confirm or reject these predictions and study the degree to which they occur in a security context (even though previous work makes the hypothesis plausible, at least to some degree). Furthermore, I think that similar issues may also arise with other forms of quantified information for security decisions.
These questions may also be extended to consider several self-interested parties in game-theoretic situations. Another topic is using different utility functions, where it may be normative to be economically risk-averse rather than risk-neutral. With respect to the problems outlined, rational decision-making is a natural way to understand and motivate the control of security and the requirements on security metrics. But when selecting the format of information, the problem is also partially one of usability. Usability faults often turn into security problems, which is also likely for quantified risk. In the end, the challenge is to provide users with usable security information, and more broadly to investigate what kind of support is required for decisions. This is clearly a topic for further research, since introducing quantified risk is not without problems. Using knowledge from economics and psychology seems necessary to understand the correct control of security.
Cyberspace: A Manmade Sphere for Wars
The Internet can be considered one of the greatest achievements of humanity in the last century: it connected the entire world. It created a new space for connection, information, communication, and cooperation. But it also created a new platform for conflicts, involving not only individuals but also states. This twentieth-century invention has become another sphere for international relations, and a new space for defensive and offensive policies regulating and balancing those affairs. Cyberspace has become a platform for interactions not only between individuals but also between states. These interactions have not developed only in a positive direction; they have also been transformed into attacks that pose a real threat to the security of states. Thus, the following questions arise:
Can cyberspace be considered a new sphere for war? Can conflicts and offensive and defensive operations in cyberspace be considered a real war?
The aim of this article is to specify offensive and defensive actions occurring in cyberspace and to explain the differences and similarities between them and the classical approach to war present in the other spheres: land, water, air, and space. Despite the growth of offensive interactions in cyberspace and of defensive strategies for enriching the cyber arsenals of states, military specialists have doubts about the reality of cyberwars in general. Parallels are drawn to show the similarities and differences between definitions and perceptions of war, and to ask whether concepts from the classical approach can be transferred to describe wars in the cyber sphere. This research puts cyberwars in line with other wars, analyzing their peculiarities, while cyberspace is seen as another sphere for war and international relations in addition to the existing spheres of land, water, air, and space.
Internet’s Two Sides of the Coin: From Good to Threat
The Internet that we use today is based on the Transmission Control Protocol / Internet Protocol (TCP/IP), work on which commenced in 1973. The network became operational in January 1983. For the first two decades of its existence, it was the preserve of a technological, academic, and research elite. From the early 1990s, it began to percolate into mainstream society, and it is now widely regarded as a General-Purpose Technology (GPT) without which modern society could not function.
Only half a century ago it was difficult to imagine that human interactions would develop in a manmade sphere, totally virtual and artificial. It must have been impossible to imagine that it would penetrate our lives so closely, covering everyday life from communication and information sharing to purchasing products and regulating the temperature at home.
The internet has now connected the entire world, breaking down the land borders that geographically differentiated the places people live. It substituted digital borders for land borders, making it possible to connect the entire world into one sphere.
With the launch of the World Wide Web in 1993, the greatest expansion of communication in history came into existence. Since then, information that had been secret, restricted to limited groups or organizations and historically used for military purposes as an intellectual advantage, quickly became available to the masses.
Moreover, equal access to information for all, one of the ultimate achievements of humanity and one of the supreme advantages of the internet, has come to serve not only good will; it has also provoked irregular warfare.
These chaotic interactions through networks, which Garnett called “fourth generation warfare” (4GW), would become a wave of social reactions and pressure providing an opportunity for asymmetric warfare. The tendency is obviously dangerous, since not only states possess these “digital” weapons but also non-state actors, including terrorist networks. Basically, the Internet allows anyone to join digitally and become a force or power that could have a significant impact on states’ policies.
The sphere where those actions take place, with the usage of or within the system of information and communication technologies, is broadly named cyberspace, and the actions that take place in this sphere are named accordingly: cyber-attacks, cyberwar, etc. Though states vary in their definitions of cyberspace and of the scope it covers, it is understood to be a non-physical information and communication technologies (ICT) environment. The term cybersecurity emerged in the US in the mid-1990s and later became widely used in other countries and in international organizations such as the United Nations (UN), the Organization for Security and Co-operation in Europe (OSCE), the Organisation for Economic Co-operation and Development (OECD), the North Atlantic Treaty Organization (NATO), the Council of Europe (CE), BRICS, the Shanghai Cooperation Organisation (SCO), and many others.
A cyber-attack is not an end in itself, but a powerful means to a wide variety of ends, from propaganda to espionage, from denial of services to the destruction of critical infrastructure.
Seen through the prism of the threat they may pose, cyberattacks can be implemented using methods such as malicious programs that penetrate the systems of specific, or unspecified, groups of people or entities, causing dysfunctions of computer operations, stealing personal information, phishing for user passwords, or infecting computer systems to slow down specific processes. In today’s internet-run infrastructure, a single penetration can be fatal for a society and become a threat to a state. A penetration into the command-and-control system of critical infrastructure, for example, can cut the supply of energy or change the chemical composition of water, poisoning it. Anonymity acts as an advantage for the attacker, as cyberattacks are still not attributable under international humanitarian law. Moreover, in a cyber conflict the terrestrial distance between adversaries can be irrelevant, so a cyber weapon can reach its target far beyond its borders.
The advance of technology has given room for clashes between state and non-state actors involved in operations in cyberspace. These clashes have become a real threat to international security. Compared with kinetic weapons, which are relatively expensive to obtain and whose origin is possible to detect, malicious programs are available to download, buy, or even create given sufficient expertise: even a teenager can write one.
Therefore, it is becoming nearly impossible to police the entire purchase and supply chain of the cyber arsenal. Malicious viruses or programs can penetrate various computer systems of public and private usage and cause dysfunctions, changing the primary command-control systems, slowing their base speed of operation, and causing very costly problems for state security.
Per media reports, the group which rampaged through and besieged part of Mumbai in November 2008 made use of readily available cellular and satellite phones, as well as overhead imagery from Google Earth, to coordinate and plan their attack.
However, this invention is a subject of debate among scholars from the perspective of the definition of war.
The Theoretical Dilemma of Cyberwars and Cyber Reality
Despite the different conflicts occurring in cyberspace between state and non-state actors, state-sponsored operations, and developments in international relations, military specialists argue about the exact definition of cyberspace, whether to treat it as a real theater of war, and whether to count operations in cyberspace as a real war between the parties involved.
Various conflicts in cyberspace, including attacks of regular and irregular origin employing symmetric or asymmetric tactics, do not correspond to the classical conception of war, matching only some, one, or even none of the aspects of its characterization. Despite current actions and the bilateral and multilateral agreements on cybersecurity issues signed by states, international organizations, and associations, and despite the threats the world faces or will face in cyberspace, theorists retain certain doubts about defining or accepting cyberspace as a new sphere for wars, or cyberwars as already occurring facts.
The issue is that there has not been a single verifiable case of cyber terrorism, nor has any human casualty been caused by a cyber-attack, which gives grounds for this disbelief.
Thomas Rid, a scholar of war, is among those who see the debate about cyberwar as exaggerated; moreover, he distrusts the idea of cyberspace as a new space for war in the classical sense. He believes that "Cyber war has never happened in the past, it is not occurring in the present, and it is highly unlikely that it will disturb the future."
Rid acknowledges that computer- and internet-assisted attacks can penetrate a target's operating systems, stealing data or degrading its capacity to operate; in this respect, however, he differentiates between sabotage operations and direct physical harm.
Rid refers to Carl von Clausewitz, a nineteenth-century Prussian military theorist, who defines war according to three criteria: "First, all acts of war are violent or potentially violent. Second, an act of war is always instrumental: physical violence or the threat of force is a means to compel the enemy to accept the attacker's will. Finally, to qualify as an act of war, an attack must have a political goal or intention."
Over the centuries the theoretical description of war may have changed its primary strategies and instruments, but its goal has remained the same. In this respect, it is important to read the definition broadly: a computer worm or virus cannot kill a person directly, as a sword could, but it can cut the energy supply of a hospital, triggering a chain of violence, or penetrate an airplane's command-and-control system and change the plane's direction or cause a catastrophe.
In contrast to the classical approach to war, the reality of cyberwar is affirmed by those who believe that cyberwars have already occurred, are occurring now, and will likely continue to occur in the future, and that cyber strategies must therefore be implemented.
In July 2016, Allies reaffirmed NATO’s defensive mandate and recognized cyberspace as a domain of operations in which NATO must defend itself as effectively as it does in the air, on land and at sea.
Former U.S. President Barack Obama, speaking about cybersecurity, noted:
“America’s economic prosperity, national security, and our individual liberties depend on our commitment to securing cyberspace and maintaining an open, interoperable, secure, and reliable Internet. Our critical infrastructure continues to be at risk from threats in cyberspace, and our economy is harmed by the theft of our intellectual property. Although the threats are serious and they constantly evolve, I believe that if we address them effectively, we can ensure that the Internet remains an engine for economic growth and a platform for the free exchange of ideas”.
Thomas Reed, a former staffer on the US National Security Council, argues that cyberwars are not even new: they occurred in the past, in the Cold War era, and had devastating results. As an example, he cites what he calls the first ever cyber-attack, a massive pipeline explosion in the Soviet Union in June 1982, which would count as the most violent cyber-attack ever. "According to Reed, a covert US operation used rigged software to engineer a massive explosion in the Urengoy-Surgut-Chelyabinsk pipeline, which connected Siberian natural gas fields to Europe. Reed claims that the Central Intelligence Agency (CIA) managed to insert malicious code into the software that controlled the pipeline's pumps and valves. The rigged valves supposedly resulted in an explosion that the US Air Force rated at three kilotons, equivalent to the force of a small nuclear device."
However, there is no factual evidence, confirmed or supported by official U.S. sources, that the accident was a cyber-attack, nor are there Soviet media reports from that period confirming that the explosion Reed describes took place, even though Soviet media regularly reported accidents and pipeline explosions at the time. Investigating a cyber-attack fully and quickly is no easy task: it requires forensic examination, which presupposes experts and conditions for objective analysis. Under Cold War conditions, the parties would hardly have agreed to an investigation that would reveal secrets about their technical capabilities and the real cause of the explosion. If Reed's claims are true, the massive violence involved would theoretically rank cyber weapons among the most dangerous of means, and cyberwars would have to be defined accordingly.
Another possible cyberattack that fits the description of war is the 2008 attack on Georgia's most prominent websites, including those of the country's national bank and the Ministry of Foreign Affairs. In August 2008, during the military conflict over South Ossetia, the Georgian government blamed the Kremlin, but Russia denied sponsoring the attackers, and a later NATO investigation found no conclusive proof of who had carried them out. The absence of proof permits two readings: either the attacker was technically well-equipped enough to avoid identification, or the attack was not carried out by the presumed suspect. In either case, the situation is governed by the principle that one is innocent until proven guilty. And because anonymity is at a premium in cyberwar, it is highly attractive, especially for states, to exploit it in hybrid-war strategies.
In cyberspace, the parties involved in attacks or counterattacks can be identified in only two ways: by their own admission (which rarely happens, and is almost unthinkable when attacks are carried out by states rather than other actors), or through evidence. The latter depends directly on the technical capabilities of the attacker and on the technical competence of the attacked side to detect them.
According to Oleg Demidov, a cybersecurity expert at the Russian Center for Policy Studies (PIR Center), the views of the NATO experts who suspect Russia of attacking Estonian infrastructure in 2007, Georgian government and private-sector networks in 2008, and U.S. financial institutions and private companies in spring 2014 are not well-founded, since there was no practical evidence identifying the attacker, and the technical capability to determine the source of the attacks was lacking.
In his contribution "Global Internet Governance and International Security in the Field of ICT Use", Demidov stresses the high possibility and risk of an international conflict between nuclear-weapon states. As he puts it:
"In the event of a lightning-fast cyber-attack that imitates the 'signature' of Russian perpetrators (for example, Cyrillic code fragments and other linguistic patterns) and targets the infrastructure of NATO countries using servers in Russian territory, there is a risk of NATO military retaliation against Russia. In accordance with NATO doctrine, retaliatory measures may include the use of kinetic weapons and the involvement of all NATO members in a retaliatory strike".
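The kind of linguistic "signature" Demidov mentions can be as crude as text fragments left inside a binary. As a minimal, purely hypothetical sketch (no real forensic tool works this simply), one could scan a file for runs of Cyrillic characters:

```python
# Purely illustrative: string artifacts are weak evidence and trivial to
# plant as a false flag; real attribution relies on much broader forensics.
def cyrillic_fragments(data: bytes, min_len: int = 3):
    """Yield runs of Cyrillic characters found in the decodable parts of data."""
    text = data.decode("utf-8", errors="ignore")
    run = []
    for ch in text:
        if "\u0400" <= ch <= "\u04ff":  # basic Cyrillic Unicode block
            run.append(ch)
        else:
            if len(run) >= min_len:
                yield "".join(run)
            run = []
    if len(run) >= min_len:
        yield "".join(run)


# Hypothetical sample: binary noise surrounding the UTF-8 bytes of "ошибка".
sample = b"MZ\x90\x00...\xd0\xbe\xd1\x88\xd0\xb8\xd0\xb1\xd0\xba\xd0\xb0..."
print(list(cyrillic_fragments(sample)))  # ['ошибка'] ("error")
```

The caveat in the code comment is exactly Demidov's point: because such artifacts are easy to fake, a false-flag scenario of the kind he describes is entirely plausible.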
These two cyber incidents, the Georgian and the Estonian cyber-attacks, are regarded by the U.S. and other Western nations as cause for great attention and much reflection.
The Estonian incidents were followed by the establishment of national- and system-level cyber strategies for EU members and partners.
In particular, in 2008, a year after the attacks, NATO set up the Cooperative Cyber Defence Centre of Excellence (CCD COE) in Tallinn. The military-defense use of Information and Communication Technology (ICT) is one of the center's main purposes. The center is technically equipped to protect its members, providing the technical support and human resources needed to defend internet infrastructure.
Another well-known and destructive cyber program that prompted worldwide discussion of the reality of cyberwar is "Operation Olympic Games", a large operation that included the "development, testing, and use of malware against specific targets to collect information about the Iranian Nuclear program, as well as to sabotage it and slow it down as much as possible". It included such malware as Stuxnet, Duqu, Flame, and Gauss (all of them built for espionage and sabotage), active between 2007 and 2013. The US presidential administration and Israeli secret services have been named as perpetrators.
Seyed Hossein Mousavian, former head of the Foreign Relations Committee of Iran's Supreme National Security Council, confirms in his memoir "The Iranian Nuclear Crisis" that Stuxnet was a malicious computer worm developed to target the computer systems controlling Iran's huge enrichment plant at Natanz. Moreover, according to Mousavian, Ali Akbar Salehi, Iran's representative to the International Atomic Energy Agency (IAEA) at the time, confirmed that Iran was experiencing espionage at its nuclear plants. According to the IAEA, Stuxnet caused a large decrease in the number of operating centrifuges, a decline of more than 1,100, from 4,920 in May 2009 to 3,772 in August 2010. Although Ahmadinejad acknowledged problems directly related to computer software installed by spies to slow down centrifuge operation, Mousavian does not think this posed a major obstacle to enrichment.
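A quick check of the arithmetic behind the cited IAEA figures shows the scale of the disruption:

```python
# Operating-centrifuge counts at Natanz as cited in the text (IAEA figures).
before = 4920  # May 2009
after = 3772   # August 2010

drop = before - after
share = drop / before * 100
print(f"{drop} centrifuges lost, about {share:.1f}% of the operating total")
# 1148 centrifuges lost, about 23.3% of the operating total
```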
In fact, Stuxnet did affect the enrichment system and did create problems for Iran's nuclear program. The worm operated inside the system unnoticed for quite a long time, slowing down both the experts and the technical equipment. Given that it successfully slowed the system's operation, operations reached any given level much later than they would have without the worm. Now that sanctions have hit Iran's economy and forced it to make concessions, we can conclude that the situation would have been different had Stuxnet not affected Iranian programs: Iran would have finished its program faster, before sanctions could devastate its economy. But since Iran discovered the problem much later, and the whole process was slowly altered by the worm, Stuxnet led to a longer timeframe for enrichment, and subsequently to longer terms of sanctions.
The operation brought not only the psychological damage that Israeli and U.S. specialists would name and label, but also economic and technical crises (in human resources as well as in technical capability).
According to M. Sahakyan, an Armenian researcher:
“…sanctions were hard and maybe they were the main reason why Iran agreed to the Interim agreement. Though Iranian leaders like to mention that sanctions were not problem, but the Iranian economy had been effectively hit hard by these sanctions. Iranian economy mostly declined when EU member states imposed an oil embargo on Iran. China also reduced its average oil import levels from Iran in a disagreement on Iran’s nuclear program. The depreciation of Iranian Rial, reduction of oil exports and shortages of foreign currency created hard social-economic situation in Iran. So sanctions were hard and maybe they were the main reason why Iran agreed to the Interim agreement.”
It is evident that cyberwar may influence the politics of a specific state, if not directly then indirectly. Today cyber-attacks can target political leadership, military systems, and average citizens anywhere in the world, during peacetime or war, with the added benefit of attacker anonymity.
Stuxnet interfered with the Iranians' centrifuges, causing them to overload, as part of an intelligence operation. This is a new type of, and reason for, war. While the basic definition of war presupposes physical violence, Stuxnet presupposed psychological intent. In addition to the technical harm it did, it affected the psychology of those who encountered the then-undiscovered worm. Consider its first, undiscovered phase: imagine a specialist working on the program who, facing long-lasting technical problems, fills with doubt about his own professional skills and about Iran's capability to develop its program at all. This is a new approach to the definition of war, as it dramatically shifts the choice of instruments that can harm a state.
From Wars with Swords to Cyber Wars: State Security is Still a Priority
Nevertheless, war in cyberspace is real: it has happened in the past, it is happening now, and it will certainly happen in the future.
The classical approach to war sees physical violence carried out by military operations. Cyberwar presupposes physical violence as well, while bringing a new, psychological violence that may cause no less harm. What matters for state security has changed over the centuries, as have the instruments and measures of security, but state security itself remains a priority. Unexpected ships may no longer attack from the sea, but cyber-attacks will come.
In past centuries, population size was an important issue for the state in maintaining its governance. It determined the size of the workforce and the size of the army, and the strength of armies was measured by the quantity of troops.
Centuries ago, a good soldier aimed to harm the opposing side; to defeat the army was to win the war. Later came the era of weapons and technology, in which opposing sides measured their technical and tactical capabilities against each other; to out-mobilize the enemy's technology was to defeat his army. With growing populations and technological achievements, the amount of military equipment now matters as much as the number of troops. A single-pilot jet may cause greater harm than 1,000 troops on the same territory. Nowadays unmanned aircraft can jeopardize an enemy's strategic targets, in certain cases without any physical violence at all, because harming a strategic unit, even without physical violence on either the attacking or the attacked side, may still have fatal results for the state under attack.
At the current stage, military parades mostly demonstrate the technical capability of a state, signaling the harm it could inflict in an attack. Aside from the traditional military spheres of land, sea, and air (and, later, space), a new sphere, cyberspace, has been added, in which technical capabilities do no less harm than in a traditional war. One of the ultimate advantages of cyberwar is the anonymity of the attacker, which makes it an attractive choice for a state's foreign policy.
In addition to the traditionally distinguished types of harm to state security, cyberwar introduces the concept of psychological trauma for states, making them doubt their own capabilities. In the case of Stuxnet, the attack was "emotional" as well as technical.
This notion of emotional damage through cyberwar was used to describe Russia's alleged internet interference in 2016. The New Yorker conveyed the views of national-security officials who believed that the series of hacks was aimed at destabilizing the very conception of democracy in the United States:
For many national-security officials, the e-mail hacks were part of a larger, and deeply troubling, picture: Putin’s desire to damage American confidence and to undermine the Western alliances—diplomatic, financial, and military—that have shaped the postwar world.
Technically disabling a system, causing only technical harm, is a small incident; targeting critical infrastructure and technically destabilizing it has already grown into political scandal.
Moreover, a cyberattack may harm a specific target without involving other parties, especially in state-sponsored attacks, since it can remain undiscovered for a while, and the stereotypes and clichés of the traditional definition of war give the attacker "excuses" for the attack. Cyberwars will become more dangerous if they are not recognized and named as war, and not fought as traditional wars are.
The Cyber Arms Race Has Started
Despite the distrust and the insistence on interpreting cyberwar within the framework of the classical approach to war, states are accelerating a cyber arms race. This development has several political and strategic implications that call for specifically political answers. What is often forgotten or neglected is the increasing importance of understanding cyberspace as a political domain; cyber politics is needed more than ever before.
While experts debate the exact description and definition of cyberwar, states are enriching their defensive arsenals with cyber equipment and technical staff for better governance of cyberspace, as well as with regulations and doctrines that define strategies for defensive and offensive operations against ICT threats.
In November 2011, the U.S. Department of Defense issued a report to Congress confirming that it was ready to add cyberspace to sea, land, air, and space as the latest domain of warfare; the military would, if necessary, use force to protect the nation from cyberattacks. This statement placed interactions in cyberspace on the same level as the other spheres, making them equally important and, where needed, interchangeable and cooperative.
With this, next to the traditional war spheres of ground, sea, air, and space, a new battlefield, cyberspace, was marked out.
With technological development, nearly every aspect of our lives is technically run and thus sensitive to cyberattack, since any malfunction in a technical field may cause human and economic harm and pose a serious problem for national security as a whole. In this regard, former U.S. Secretary of Homeland Security Jeh Johnson, speaking at the White House Cybersecurity Framework Event on February 12, 2014, and underscoring the seriousness of cyberattacks on electrical substations in particular, said:
“What the public needs to understand is that today the disruption of a critical public service like an electrical substation need not occur with guns and knives. A cyberattack could cause similar, and in some cases far greater, damage by taking several facilities offline simultaneously, and potentially leaving millions of Americans in the dark”.
The focus was on electrical substations, but the point applies to other sectors too: telecommunications, hospitals, libraries, federal departments, courts, and prisons. Any entity that runs on technology is at real risk of attack.
The technological developments of the last century have exposed automated industrial control systems, and most Critical Infrastructure (CI), the list of which varies from state to state but shows strong similarities, to cyber-attacks that could be fatal for national defense. The facilities on CI lists may include, but are not limited to, the nuclear industry, electricity, telecommunications, water supply, transport systems on ground, sea, and air, government buildings and their communication facilities, the financial and banking system, healthcare, and defense facilities. In 2017, the U.S. Department of Homeland Security announced its decision to add election infrastructure to the list of critical infrastructure as well.
States' cyber-defense policy has become an urgent issue, and states are implementing special national-level cybersecurity projects to defend their critical infrastructure.
Many states, for instance the U.S., Russia, China, Germany, the UK, and France, are enriching their cyber arsenals and developing cybersecurity systems for the defense of their countries. They are engaged not only in national mechanisms but also in developing global cooperative platforms for a better and cleaner world cyber environment. Worth noting in particular is the U.S.-Russia-China cyber triangle and these countries' treatment of cyberspace as a significant priority for state development and security. They are involved in various discussions and cooperation agreements to maintain cooperation and peace in cyberspace globally. Despite their ideological differences over cyberspace and over how to govern it, these three cyber powers have found common ground for mutual understanding and possible fundamental cooperation. The United Nations Group of Governmental Experts is one example: it is currently the only platform that has united the U.S., Russia, and China around commonly acceptable norms and suggestions. Since the scope of interests in cyberspace covers all groups in society (governmental and federal entities, the private and public sectors, ordinary citizens at the national level, and private supra-national powers operating beyond borders and answerable to larger audiences), there is an urgent need to focus on cooperation, on establishing fundamental rights in cyberspace, and on mechanisms to secure this sphere.
Can a cyber-attack pose a serious threat to national security?
Given the great number of past, ongoing, and possible cyberattacks, and the current defensive strategies of states, cyberwar is nothing less than a real threat to states' national security as well as to the private sector. It inflames not only regular warfare, which can cause as much harm as the traditional approach to war assumes, but may also provoke irregular warfare, with the advantages of equal access to information and of anonymity. This twentieth-century technological invention may be considered a potential disaster, alongside such scientific inventions as atomic energy: it can do good, but it can also harm severely.
The difficulty of cyberwar also lies in the lack of common norms and definitions, as well as of purpose-built legislation, equally acceptable to all states, for the peaceful and collaborative resolution of problems in this field.
I do believe that cooperation on this issue is of great importance. Joint legislation, a shared understanding and definition of the key concepts, and common cooperative ground will bring a better and more secure life, eliminating or reducing the possibility of private or non-state actors engaging in irregular warfare and destabilizing the peaceful cooperation of states and people in the internet sphere. The classical definition of war should be able to include a new sphere of violence before that violence occurs, rather than defining it only afterwards, as has mostly happened historically.
Cybersecurity is an urgent, necessary strategy, which will lead to a secure sphere for cooperation, free and secure access to and sharing of information, and, due to its technical capabilities, to a more comfortable and economically developed way of life.
While cybersecurity is an issue for the whole world, strategies for developing it vary from state to state, in some cases operating at the national level, in others limited to certain federal entities.
I believe that cyberspace is very much like the environment: it is a digital environment, and just as a biological virus that enters one country spreads worldwide if not stopped, so does a computer virus. Just as pollution in one part of the world contaminates air or water that we all share, a cyberattack may cause a global problem. Networking, information sharing, and a global security approach are musts for a safe and productive global cyber environment and for keeping all roads open to better digital development for the sake of humanity.
(*) This essay is adapted from the article "Cyberspace – A Manmade Sphere for Wars" (21st Century, N.1, 2017, pp. 42-58). Used by permission. All rights reserved.