Compliance professionals exposed to AI breaches

A recent survey by compliance eLearning and software provider VinciWorks has found that only 29% of compliance professionals have implemented specific procedures, training, or preventive measures to guard against Artificial Intelligence (AI) related compliance breaches. The majority (71%) admitted to lacking such protective measures, and 13% have no plans to address this significant gap in their compliance strategy in the near future.

The survey gathered 269 responses from industry leaders across the UK, USA, and Europe, exploring the perception of risks, industry sentiment, and the level of preparedness to address potential compliance issues associated with AI in the workplace.

As AI-powered tools continue to gain prominence across industries—embedded in everything from client due diligence and supply chain management to HR and recruitment—concerns are mounting about potential risks. These include serious compliance failures such as discrimination, plagiarism, intellectual property theft, and GDPR violations. Adding to the urgency, the European Union's impending landmark Artificial Intelligence Act, which carries penalties of up to 7% of global turnover for AI misuse, has raised the stakes for organisations.

The survey found that only 3% of respondents have completed AI training at work as part of their yearly compliance training. An alarming 82% admitted to either not having completed AI training or being uncertain about their current status; of these, 19% said they have no intention of participating in any AI training at work. This underscores a significant shortfall in AI risk awareness and prevention within organisations.

“In light of these findings, there is an immediate and critical need for comprehensive AI training and risk mitigation procedures within organisations,” says Nick Henderson-Mayo, Director of Learning and Content at VinciWorks. “With AI regulation on the horizon, there’s an immediate need for businesses to invest in comprehensive AI compliance programmes. Using AI in business can be very helpful in some areas. Still, if employees end up using chatbots to write their reports or feed customer data into an AI without permission, that can cause a serious compliance problem.”

Despite the risks, half of the respondents (51%) expressed optimism about AI's impact on their industries, with 6% feeling very optimistic. Conversely, 12% acknowledged feeling pessimistic, while 37% adopted a neutral stance, reflecting the varied perspectives across a cross-section of industries.

The survey also explored individual usage of AI in day-to-day work, revealing that 45% of respondents currently use AI technologies somewhere in their business, with 12% reporting daily use. Equally noteworthy is the further 45% who, while not currently using AI, expressed interest in exploring its potential applications in their roles.
