Compliance professionals exposed to AI breaches

A recent survey by compliance eLearning and software provider VinciWorks has found that only 29% of compliance professionals have implemented specific procedures, training, or preventive measures to guard against Artificial Intelligence (AI) related compliance breaches. The majority (71%) admitted to lacking such protective measures, with 13% having no plans to address this significant gap in their compliance strategy in the near future.


The survey gathered 269 responses from industry leaders across the UK, USA, and Europe, exploring the perception of risks, industry sentiment, and the level of preparedness to address potential compliance issues associated with AI in the workplace.

As AI-powered tools continue to gain prominence across industries—embedded in functions as diverse as client due diligence, supply chain management, HR, and recruitment—concerns are mounting about potential risks. These include serious compliance failures such as discrimination, plagiarism, intellectual property theft, and GDPR violations. Adding to the urgency, the impending European Union Artificial Intelligence Act, a landmark regulation carrying penalties of up to 7% of global turnover for AI misuse, has raised the stakes for organisations.

The survey found that only 3% of respondents have completed AI training at work as part of their yearly compliance training. An alarming 82% admitted to either not having completed AI training or being uncertain about their current status, and 19% of that group said they have no intention of participating in any AI training at work. This underscores a significant shortfall in AI risk awareness and prevention within organisations.

“In light of these findings, there is an immediate and critical need for comprehensive AI training and risk mitigation procedures within organisations,” says Nick Henderson-Mayo, Director of Learning and Content at VinciWorks. “With AI regulation on the horizon, there’s an immediate need for businesses to invest in comprehensive AI compliance programmes. Using AI in business can be very helpful in some areas. Still, if employees end up using chatbots to write their reports or feed customer data into an AI without permission, that can cause a serious compliance problem.”

Despite the risks, half of the respondents (51%) expressed optimism about AI's impact on their industries, with 6% feeling very optimistic. Conversely, 12% acknowledged feeling pessimistic, while the remaining 37% adopted a neutral stance, reflecting the varied perspectives across a cross-section of industries.

The survey also explored individual usage of AI in day-to-day work, revealing that 45% of respondents currently use AI technologies somewhere in their business, with 12% reporting daily use. Equally noteworthy is the 45% who, while not currently using AI, expressed interest in exploring its potential applications in their roles.
