Compliance professionals exposed to AI breaches

A recent survey by compliance eLearning and software provider VinciWorks has found that only 29% of compliance professionals have implemented specific procedures, training, or preventive measures to guard against Artificial Intelligence (AI) related compliance breaches. The majority (71%) admitted to lacking such protective measures, and 13% have no plans to address this significant gap in their compliance strategy in the near future.

Tuesday, 20th February 2024 | Posted by Phil Alsop

The survey gathered 269 responses from industry leaders across the UK, USA, and Europe, exploring the perception of risks, industry sentiment, and the level of preparedness to address potential compliance issues associated with AI in the workplace.

As AI-powered tools continue to gain prominence across industries, embedded in functions ranging from client due diligence and supply chain management to HR and recruitment, concerns are mounting about potential risks. These include serious compliance failures such as discrimination, plagiarism, intellectual property theft, and GDPR violations. Adding to the urgency, the European Union's impending landmark Artificial Intelligence Act, which carries penalties of up to 7% of global turnover for AI misuse, has raised the stakes for organisations.

The survey found that only 3% of respondents have completed AI training at work as part of their yearly compliance training. An alarming 82% admitted to either not having completed AI training or being uncertain about their current status, and 19% of those said they have no intention of participating in any AI training at work. This underscores a significant shortfall in AI risk awareness and prevention within organisations.

“In light of these findings, there is an immediate and critical need for comprehensive AI training and risk mitigation procedures within organisations,” says Nick Henderson-Mayo, Director of Learning and Content at VinciWorks. “With AI regulation on the horizon, there’s an immediate need for businesses to invest in comprehensive AI compliance programmes. Using AI in business can be very helpful in some areas. Still, if employees end up using chatbots to write their reports or feed customer data into an AI without permission, that can cause a serious compliance problem.”

Despite the risks, half of respondents (51%) expressed optimism about AI's impact on their industries, with 6% feeling very optimistic. Conversely, 12% acknowledged feeling pessimistic, while the remaining 37% adopted a neutral stance, reflecting the varied perspectives across a cross-section of industries.

The survey also explored individual usage of AI in day-to-day work, revealing that 45% of respondents currently use AI technologies somewhere in their business, with 12% reporting daily use. Equally noteworthy is the 45% who, while not currently using AI, expressed interest in exploring its potential applications in their roles.
