Dan Brown's 'Digital Fortress' Scenario Looms Large as Cyberattacks Turn to AI

As hackers increasingly turn to artificial intelligence, cyberattacks will become more sophisticated and harder to prevent, according to a report by UNICRI and Europol

As Dan Brown depicted in his novel 'Digital Fortress', a new threat from artificial intelligence (AI) looms large. A new report by the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Europol warns that AI is going to make cyberattacks far more dangerous and harder to predict.

It's fair to say that AI has gone mainstream, with around one-third of businesses implementing it in some form, and it is expected to be one of the main drivers of economic growth in the coming years. AI advances through machine learning, in which algorithms train a model to recognize patterns in data; intelligent product recommendations based on search patterns are a familiar example.
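To make the idea concrete, here is a minimal sketch of that kind of pattern recognition, written in Python with scikit-learn. The products, users and interaction matrix are invented purely for illustration; a real recommender would learn from millions of interactions.

```python
# A toy sketch of pattern-based product recommendation, assuming
# scikit-learn is available. Users, items and the interaction matrix
# below are invented purely for illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors

items = ["laptop", "mouse", "keyboard", "monitor", "headset"]

# Rows = users, columns = items; 1 means the user searched for or bought it.
interactions = np.array([
    [1, 1, 1, 0, 0],   # user 0
    [1, 1, 0, 1, 0],   # user 1
    [0, 0, 1, 1, 1],   # user 2
    [1, 0, 1, 0, 0],   # user 3: we recommend for this user
])

# Find the users whose behavior most resembles user 3's.
model = NearestNeighbors(n_neighbors=2, metric="cosine")
model.fit(interactions[:3])
_, neighbor_ids = model.kneighbors(interactions[3:4])

# Recommend items those similar users engaged with that user 3 hasn't seen.
scores = interactions[neighbor_ids[0]].sum(axis=0)
recommended = [item for item, s, seen in zip(items, scores, interactions[3])
               if s > 0 and seen == 0]
print(recommended)   # e.g. ['mouse', 'monitor']
```

The same basic recipe, learning a pattern from past examples and applying it to new data, underpins both the benign and the malicious uses of AI described below.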

Many aspects of a cyberattack are mundane, such as writing code to break through firewalls. By training a model, hackers no longer need to spend hours hunting for vulnerabilities in a network; an AI model can detect them within minutes. In 2016, at the Cyber Grand Challenge organized by the U.S. Defense Advanced Research Projects Agency (DARPA), AI-powered machines autonomously exploited and patched vulnerabilities.

AI-powered cyberattacks would be more dangerous, according to UNICRI and Europol (representational picture). Pixabay

AI in Cyberattacks

AI has an equally big role to play in cybercrime. During a brute-force attack, for example, the hacker has to guess thousands of passwords. That manual labor has already been replaced by machine learning, which can churn through thousands to millions of guesses in minutes. With AI, the attack can be even more precise: by recognizing patterns in a target's life, such as lifestyle details, dates of birth and the other fragments people commonly build passwords from, a model can accurately guess a password without human intervention.

The report, titled 'Malicious Uses and Abuses of Artificial Intelligence', states that phishing and deepfakes have become two major concerns. Published in collaboration with Trend Micro, it says hackers can use deepfakes to tailor phishing attacks: by emulating the voice of a trusted figure, they can fool people into giving up sensitive information such as passwords, account numbers or PINs.

AI can also help attacks scale. Models can automate the social engineering legwork of gathering intelligence on potential targets, and in ransomware attacks, hackers can deploy an AI model to identify and mask the digital footprints they leave behind.

An AI model can scan a network and detect vulnerabilities in minutes (representational image). Pixabay

AI-Powered Malware

The worst-case scenario would be AI-powered malware, which has already become sophisticated enough to evade detection by masking its malicious code. The report cited the example of AVPASS, presented at the 2017 Black Hat USA conference: a tool designed to infer the detection rules of antivirus engines and disguise malware as a benign application. Tested with over 5,000 Android malware samples, AVPASS managed to evade the detection services aggregated by VirusTotal, effectively rendering the malware undetectable.

However, the report said that the use of AI to improve the effectiveness of malware is still in its infancy; research in this area remains largely theoretical, limited to proofs of concept by cybersecurity researchers.

"Nonetheless, the AI-supported or AI-enhanced cyberattack techniques that have been studied are proof that criminals are already taking steps to broaden the use of AI. Such attempts, therefore, warrant further observation to stop these current attempts and prepare for future attacks as early as possible before these become mainstream," the report said.

Using AI, hackers can develop social engineering tools that target victims with sophisticated spear-phishing attacks (representational image). Pixabay

How to Mitigate the Risk?

The only way to mitigate the threat of AI in cyberattacks is to deploy AI in defense. Cybersecurity researchers have long been studying the use of AI to detect vulnerabilities and threats. With over 10 billion cyberattacks happening every year, it is almost impossible to track every threat manually, but AI models trained on existing data sets can identify threats and take preventive measures. Some cybersecurity companies and researchers already use such models.
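As a minimal sketch of the defensive approach described here, the Python snippet below trains an anomaly detector on network telemetry using scikit-learn's IsolationForest. The feature names and traffic figures are invented for illustration and are not drawn from the report; production systems learn from vastly larger telemetry streams.

```python
# A minimal sketch of AI-assisted threat detection, assuming scikit-learn
# is available. Feature names and values are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, bytes_received, failed_logins, distinct_ports]
# observed per host over some window (hypothetical telemetry).
normal_traffic = np.array([
    [5_000, 12_000, 0, 3],
    [4_200, 10_500, 1, 2],
    [6_100, 13_400, 0, 4],
    [5_500, 11_800, 0, 3],
])

# Fit the detector on traffic assumed to be benign.
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(normal_traffic)

# New observations: the second row mimics a brute-force/port-scan pattern.
new_events = np.array([
    [5_300, 12_100, 0, 3],      # resembles normal activity
    [90_000, 800, 250, 60],     # many failed logins, many ports touched
])

for event, label in zip(new_events, detector.predict(new_events)):
    status = "ANOMALY: flag for review" if label == -1 else "normal"
    print(event, "->", status)
```

The appeal of this design is that the model never needs a signature for a specific attack; anything that deviates sharply from learned baseline behavior gets flagged for a human analyst to review.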

"AI promises the world greater efficiency, automation and autonomy. At a time where the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology," Edvardas Sileris, head of Europol's European Cybercrime Centre said in a statement. He added that the report will help in anticipate such misuses and prevent the threats.
