
Trend Micro, Europol, and UNICRI Publish AI Misuse Report


Trend Micro, Europol’s European Cybercrime Centre (EC3), and the United Nations Interregional Crime and Justice Research Institute (UNICRI) have jointly produced a report on current and possible future criminal misuse of AI. The report also includes a set of preparedness recommendations for policymakers, law enforcement, and cybersecurity experts.

The report, announced in a recent press release, considers both malicious uses of AI, where criminals use AI tools as an attack vector, and abuses of AI, where attackers attempt to exploit AI systems. It discusses existing uses and abuses with documented cases as well as possible future ones, drawing insights from trends on "underground" forums. To help combat malicious actors, the report makes several recommendations, including using AI as a crime-fighting tool and promoting the development of secure AI systems. It also includes a detailed case study on the malicious use of deepfakes: AI-generated video or audio content that is difficult for humans to identify as inauthentic. According to the report authors,

Building knowledge about the potential use of AI by criminals will improve the ability of the cybersecurity industry in general and law enforcement agencies in particular to anticipate possible malicious and criminal activities, as well as to prevent, respond to, or mitigate the effects of such attacks in a proactive manner.

The bulk of the report covers existing uses and abuses, for which there is documented evidence, including "research outcomes, proofs of concept, or discussions among criminals." These include:

  • AI-Enhanced Malware
  • AI-Supported Password Guessing and CAPTCHA Breaking
  • AI-Aided Encryption
  • Abuse of Smart Assistants

Although some of these applications exist only as proofs of concept created by cybersecurity researchers, the report does highlight tools discussed by criminals on hacker forums: for example, a CAPTCHA-breaking tool that can be rented at weekly or monthly rates, and a GitHub repository for a tool that parses 1.4B leaked credentials to create password-generation rules.

The report also examines several trending discussion topics on underground forums to identify possible near-term novel uses and abuses. These include AI-supported hacking, cryptocurrency trading, and online-game cheats. There is also strong interest in using AI to impersonate real humans for various purposes, including defrauding services such as Spotify or carrying out "social engineering" attacks.

One technology that can enhance social engineering attacks is deep learning used to produce deepfakes: computer-generated audio or video clips that appear authentic. For example, hackers can use "voice cloning" tools to mimic a known authority figure and persuade a victim to transfer money to the criminal's bank account. The report includes a "deep dive" into deepfakes, noting that although there are many possible malicious uses and abuses, the underlying technology also has positive applications such as voice prosthesis, and there are "surprisingly few" reports of misuse. However, deepfake videos have been used to damage the reputations of political figures and celebrities, and there are documented uses of deepfake photos in fraudulent passports.

The report closes with several recommendations to "enhance preparedness" for existing and future threats. First, promote the use of "AI for Good": leveraging AI technology to fight crime, building trustworthy AI, and encouraging responsible AI innovation. Next, conduct further research, including threat assessments and risk-management approaches. Then, create secure AI design frameworks, technical standards for AI cybersecurity, and data-protection rules. Finally, increase outreach efforts, such as improving AI literacy and fostering public-private partnerships and multidisciplinary groups.

The growth in AI capabilities and applications in recent years has spurred concerns about misuse. In 2018, the AI-research non-profit OpenAI released a similar report aiming to "forecast, prevent, and...mitigate the harmful effects of malicious uses of AI." In 2019, OpenAI declined to release its full GPT-2 language model, citing "concerns about malicious applications of the technology." To address the threat of deepfakes, a consortium of tech firms including Microsoft and Facebook created the Deepfake Detection Challenge, and a research team from Stanford and UC Berkeley is developing a detector that purports to identify "more than 80%" of fakes. There is also a push for regulation to prevent abuse: earlier this year, Alphabet CEO Sundar Pichai called for "sensible" regulation of AI, and the EU considered a ban on facial recognition technology.
