AI-Powered Phishing Shows the Urgency of AI Regulation

Phishing scams are around three decades old, but they continue to pose serious threats, and now they are getting an AI upgrade. Attacks have not only become more aggressive in volume; they have also grown in sophistication, backed by more convincing personalized messages and cunning tricks. INTERPOL released a fraud assessment report last month citing the prominent role of artificial intelligence in the rise of phishing and other forms of fraud.

AI’s role in phishing may be a predictable, if unwelcome, development, but many are still unprepared to deal with it. Some are even unaware of the ways artificial intelligence is bolstering deception-based attacks. Phishing is becoming more dangerous than ever, and organizations should at least get acquainted with the problem so they know what cyber defenses to put in place.

Smarter and More Effective Phishing

For much of the past decade, phishing attacks were quite obvious. They reeked of amateurish effort, with grammatical mistakes, catchall greetings, and noticeably random or generic messages. Most phishing attacks followed a “see what sticks” approach: phishers sent out large volumes of emails or messages and waited for unsuspecting victims to respond. These messages lacked any sense of targeting and were not personalized to appeal to the recipient’s interests or concerns.

Things have changed with the rise of more advanced artificial intelligence technology, generative AI in particular. Phishing attacks have shed most of the telltale attributes that made them detectable, such as grammatical and typographical errors. Threat actors can now leverage generative AI to craft unique messages that evade automated phishing detection systems and appear more convincing to the target victims. As a result, attacks have become more effective. Generative AI also makes the materials used in phishing attacks considerably easier and faster to produce.

Personalized and Human-Like Messages

One of the most tedious parts of a phishing attack is crafting the messages that will be sent to the victims. These messages must appeal to their recipients for the attack to yield the desired outcome. However, it is extremely difficult to come up with effective messages, and it is even harder to profile all potential victims to determine which situations they are most likely to be receptive to.

Artificial intelligence addresses these challenges in two ways. First, it analyzes vast amounts of data on potential victims. AI can scour emails, social media posts, and other available information for insights into people’s interests and concerns. Then, AI generates personalized messages that elude automated phishing detection systems and are likely to elicit a response from the recipient.

Personalized messages tend to be more effective because they are crafted to read like something real people would write. With Natural Language Processing (NLP), a branch of AI, it is possible to write emails, chat messages, or text messages that are not only grammatically correct but also comparable to human writing. This enables perpetrators to send large volumes of phishing messages with a high likelihood of eliciting the desired response.
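To see why fluent, personalized messages are so hard to catch, consider the kind of surface heuristics that legacy rule-based filters relied on. The sketch below is a hypothetical, deliberately simplified illustration (not any real product’s filter; the word lists and scoring are invented for this example): it flags generic greetings, urgency phrases, common misspellings, and excessive punctuation. A clumsy, old-style phishing email trips several rules, while a fluent AI-generated message trips none.

```python
# Hypothetical, simplified legacy-style phishing filter, for illustration only.
# Real filters combine many more signals (sender reputation, URLs, headers).
import re

GENERIC_GREETINGS = ("dear customer", "dear user", "dear sir/madam")
URGENCY_PHRASES = ("urgent", "immediately", "verify your account", "suspended")
COMMON_MISSPELLINGS = ("recieve", "acount", "pasword", "securty")

def legacy_phishing_score(email_text: str) -> int:
    """Count the crude surface cues that old rule-based filters keyed on."""
    text = email_text.lower()
    score = 0
    score += any(g in text for g in GENERIC_GREETINGS)   # catchall greeting
    score += sum(p in text for p in URGENCY_PHRASES)     # pressure tactics
    score += sum(m in text for m in COMMON_MISSPELLINGS) # sloppy spelling
    score += len(re.findall(r"!{2,}", text))             # excessive punctuation
    return score

# A clumsy, old-style phishing attempt trips several rules...
old_style = ("Dear customer, URGENT!! Verify your acount immediately "
             "or it will be suspended.")
# ...while a fluent, personalized AI-generated message trips none of them.
ai_style = ("Hi Dana, following up on Thursday's budget review. The updated "
            "vendor invoice is attached. Could you approve it before EOD?")

print(legacy_phishing_score(old_style))  # high score: flagged
print(legacy_phishing_score(ai_style))   # 0: sails through
```

The second message contains nothing a surface-cue filter can latch onto, which is why defenses increasingly have to rely on behavioral signals and out-of-band verification rather than the text alone.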

Voice Cloning

One popular and effective form of phishing is voice phishing or vishing, which entails the use of telephone communication or voice messages to extract information from the victim. It is estimated to account for around 12 percent of all phishing instances and to cost victims over $30 billion annually. With the advent of voice cloning technology, vishing has become more dangerous, as it is now possible to accurately copy a person’s voice to deceive someone into doing something.

One prominent use of AI-powered voice phishing was reported early this year, when a US presidential candidate’s opponents employed AI voice cloning to try to suppress votes by discouraging voters from going to the polling sites. Voice cloning technology existed even before the rise of more advanced generative AI, but it is now significantly more sophisticated and readily available to anyone with internet access. Numerous websites and apps offer convincing voice cloning services for free or for a small fee.

Deepfakes

Artificial intelligence has also made phishing through videos more dangerous through deepfakes, fabricated videos showing a person doing something they never actually did. Deepfakes may also refer to altered videos in which details are replaced, such as celebrity faces inserted into lewd or scandalous footage. Just like voice cloning, this technology is within everyone’s reach.

In February this year, an employee of a multinational organization cleared a $25 million payout to a scammer because of a deepfake. He believed he was processing a legitimate transaction after a video call with their supposed Chief Financial Officer. The employee said he was initially suspicious that he was being targeted by a phishing attack, but he was eventually persuaded to proceed with the transaction after a conference call attended by colleagues he recognized.

Clearly, video calls or video messages are no longer reliable means of verifying the authenticity of messages or instructions. Even those who are instinctively cautious can eventually fall for phishing and other forms of cyber attacks because of convincing AI-generated fake images and videos.

The Need for AI Regulation

AI-powered phishing is just one example of the many ways artificial intelligence is becoming a tool for bad actors. It is also one of the many reasons behind the calls for AI regulation. There is plenty of criticism of imposing more controls on new technologies like AI, but there are also compelling reasons why it makes sense for governments to intervene.

The US, China, and various European countries have already started collaborating on subjecting AI models to safety tests. Numerous legislative actions and regulatory proposals are taking shape, focused on making sure that AI models are transparent and explainable, addressing data bias, and ensuring that AI is not used maliciously or in aid of cyber attacks.

Artificial intelligence system creators like OpenAI have already started imposing controls on their AI models, such as rules that prevent the generation of content deemed abusive, restrictions on some functions for general users, and continuous monitoring. However, it is clear that commitments from private organizations alone are not enough to guarantee safety and security.

The AI field operates under free market doctrines, which encourage competition. It is inevitable for organizations involved in AI development to try to one-up each other to offer more appealing options to users. In the process, they may fail to take security and safety into account. In the case of AI-aided phishing, for example, any single AI provider that imposes strict restraints risks driving users to less cautious competitors. To level the playing field, regulations are necessary to make everyone operate under similar safety and security requirements without curtailing innovation.

AI and phishing form a potent combination with serious ramifications. The combination has become quite commonplace, and most organizations are aware of the damage it entails. All digital technology users need to prepare for the adverse impact of AI, but it is also incumbent upon governments to play a role in mitigating the risks and guiding the development of AI technology toward beneficial uses.