Two in five UK businesses offer zero guidance on AI in the workplace

Lack of guidance putting businesses at risk, as a quarter (26%) of employees admit to sharing sensitive data with public generative AI tools.

  • Tuesday, 20th February 2024, posted by Phil Alsop

Confusion over generative AI in the workplace is creating a divide between employees while also putting organisations at risk, according to new research from Veritas Technologies, the data management expert.

Almost half (49%) of UK office workers are using generative AI at least once a week, signalling that a massive shift is underway in how work gets done. In fact, a fifth (19%) are using generative AI tools (such as ChatGPT) at work every single day.

Worryingly, almost two-fifths (38%) of UK office workers admit that they or a colleague has inputted sensitive information, such as customer, financial or sales data, into a public generative AI tool. Three in five (60%) fail to recognise that doing so could leak sensitive information outside the corporate walls, and a similar number (62%) do not understand that it could put their organisation in breach of data privacy compliance regulations.

The data reveals that the UK workforce is frustrated by the current lack of guidance from employers on the use of public generative AI tools in the workplace. Half (49%) are calling for guidelines or mandatory policies on generative AI use from their bosses, yet many UK businesses (44%) currently offer no guidance at all. Businesses that fail to offer guidance not only put themselves at risk from a security perspective; they also risk missing out on the value that AI technology could bring to the workplace.

The vast majority (93%) of employees believe guidelines around workplace AI use are important and are crying out for help:

  • One in four (24%) believe that this would create a more level playing field

  • 68% believe it is essential to know how to use AI tools in the right way

  • 85% believe there should be some form of national or international regulation around AI

Divisions amongst colleagues

The research found that nearly two-fifths (37%) of UK office workers are using AI to do their research, 43% are using it to write their emails, and almost a fifth (17%) are using generative AI to help write company reports. One in ten (10%) are simply using it to look good in front of their boss.

The use of generative AI is creating sharp divisions between British co-workers: 29% believe that colleagues who use it should be reported to line managers, and roughly a quarter believe they should face a pay cut (23%) or disciplinary action (25%).

The research found the issue also stokes division between different groups – especially between older and younger workers. Whilst 80% of 18–24-year-olds are using it regularly at work, almost two thirds (63%) of 55–64-year-olds have never even used it.

Sonya Duffin, Solutions Lead at Veritas Technologies, said: “Without guidance from leaders on how or if to utilise generative AI, some employees are using it in ways that put their organisations at risk, even as others hesitate to use it at all and resent their colleagues for doing so. Neither situation is ideal. Organisations could face regulatory compliance violations or miss out on opportunities to increase efficiency across their entire workforce. Both issues can be resolved with effective generative AI guidelines and policies on what’s OK and what’s not.”

“The message is clear: thoughtfully develop and clearly communicate guidelines and policies on the appropriate use of generative AI and combine that with the right data compliance and governance toolset to monitor and manage their implementation and ongoing enforcement. Your employees will thank you and your organisation can enjoy the benefits without increasing risk.”
