The findings point to a widening gap between the pace of AI-driven threats and organisations’ ability to see and manage the risks they face.
AI-powered cybersecurity threats are escalating – and going undetected
The survey found that over two-thirds (71%) of professionals report that AI-powered phishing and social engineering attacks are now more difficult to spot. 58% say AI has made it significantly harder to authenticate digital information, and 38% say their trust in traditional threat detection methods has declined as a result.
Professionals rank misinformation and disinformation as their top AI risk, named by 87% of respondents, followed by privacy violations (75%) and social engineering (60%). Teams cannot manage what they cannot see, and the tools they previously relied on are quickly becoming outdated against AI-powered attacks.
AI's impact on cybersecurity is not, however, entirely one-sided: the technology is also proving a valuable defensive tool. 43% say it has improved their organisation's ability to detect and respond to cyber threats, and 34% are already deploying it specifically to enhance cybersecurity.
But realising that defensive potential depends on having the expertise and governance to deploy it effectively – and for too many organisations, both remain limited.
AI is being adopted in the workplace without proper oversight
Concerningly, these threats are developing alongside widespread AI adoption across European workplaces. Formal endorsement is now the norm, with 82% of organisations expressly permitting AI use and 74% permitting generative AI specifically.
AI is being embedded into core operational work: the most popular applications are creating written content (69%), increasing productivity (63%), automating repetitive tasks (54%) and analysing large datasets (52%). The reported benefits are tangible: 77% cite time savings, and 40% say AI has increased capacity without additional headcount.
But rapid adoption has not been matched by the governance needed to oversee where and how AI is being used. Only 42% of organisations have a formal, comprehensive AI policy in place, and 33% do not require employees to disclose when AI has contributed to work products, leaving significant blind spots across the business.
It is therefore unsurprising that 87% of professionals raise concerns about employees using AI without authorisation, and that 26% say their biggest challenge with AI at work is a lack of trust that it adequately protects intellectual property and sensitive information.
Chris Dimitriadis, Chief Global Strategy Officer at ISACA, said: "AI has fundamentally changed the threat landscape. Attackers can now hack at the speed of intent, and too many organisations don't even know whether they've already been on the receiving end. The fact that so many businesses are operating without the governance to see where AI is being used, let alone how, makes that exposure significantly worse.
"Ungoverned AI doesn't just create operational risk. It actively hands an advantage to those who want to cause harm. Closing that gap starts with professional development and advancing the expertise needed to build and embed AI governance that stands up under pressure. Doing so is now a security imperative."
Building the expertise to match the threat
Closing that governance gap falls to professionals, and many do not feel equipped to do so. Over half (54%) say they need to upskill within the next six months to retain their job or advance their career, and 79% say they will need to within a year. Some 41% name the growing skills gap as one of the biggest risks AI poses. Yet a fifth (21%) of organisations still provide no formal AI training at all.
The regulatory environment is adding further urgency. The EU AI Act is the most widely referenced governance framework in the survey, cited by 45% of organisations, ahead of NIST (26%). But over a quarter (26%) of organisations do not yet follow any framework at all – showing a gap between regulatory awareness and action.
Dimitriadis added: "The fundamentals of good risk management have not changed. What has changed is the complexity and speed of what practitioners are now being asked to govern. AI risk requires professionals who can evaluate exposure, embed oversight across the full lifecycle, and advise on regulatory best practice. The organisations that invest in that capability now will not only be better protected; they will be better placed to fully realise AI's benefits. That is the shift that credentials like ISACA's Advanced in AI Risk are designed to deliver."