The rising threat of AI weaponisation in cybersecurity

The accelerating use of AI to create cyber threats necessitates new security measures.

This week, Anthropic revealed a concerning development: hackers have weaponised its technology for a series of sophisticated cyber-attacks. With artificial intelligence (AI) now playing a critical role in writing code, the time required to exploit cybersecurity vulnerabilities is shrinking at an alarming pace.

Kevin Curran, IEEE senior member and cybersecurity professor at Ulster University, highlights the methods attackers employ when using large language models (LLMs) to uncover flaws and expedite attacks. He emphasises the need for organisations to pair robust security practices with AI-specific policies amid this changing landscape.

Curran explains, "This shows just how quickly AI is changing the threat landscape. It is already speeding up the process of turning proof-of-concepts – often shared for research or testing – into weaponised tools, shrinking the gap between disclosure and attack. An attacker could take a PoC exploit from GitHub, feed it into a large language model and quickly get suggestions on how to improve it, adapt it to avoid detection or customise it for a specific environment. That becomes particularly dangerous when the flaw is in widely used software, where PoCs are public but many systems are still unpatched."

He continues, "We're already seeing hackers use LLMs to identify weaknesses and refine exploits by automating tasks like code completion, bug hunting or even generating malicious payloads designed for particular systems. They can describe malicious behaviour in plain language and receive working scripts in return. While this activity is monitored and blocked on many legitimate platforms, determined attackers can bypass safeguards, for example by running local models without restrictions."

Curran concludes, "The bigger issue is accessibility. Innovation has made it easier than ever to create and adapt software, which means even relatively low-skilled actors can now launch sophisticated attacks. At the same time, we might see nation-states using generative AI for disinformation, information warfare and advanced persistent threats. That’s why security strategies can’t just rely on traditional controls. Organisations need AI-specific defences, clear policy frameworks and strong human oversight to avoid becoming dependent on the same technology that adversaries are learning to weaponise."

As AI continues to evolve, so too does its potential for misuse. Countering these threats will demand innovative defences and strategic thinking to keep digital systems secure.
