Why creating an AI safe space will help, not hinder, your business

By Vanessa Anness, Head of Consultancy, Ricoh Europe.

Wednesday, 1st November 2023

The advent of accessible AI tools, such as ChatGPT, has led to a growing number of people experimenting with and adopting these technologies in their daily lives. As AI continues to evolve rapidly and become more widely used, these tools will likely remain part of our lives for the foreseeable future.

 

This is supported by Ricoh Europe’s latest research which found that 48% of European employees use generative AI tools at work in some capacity, with 18% doing so on a daily basis. These technologies have the potential to change workforce requirements for the better, by reducing the need for workers to carry out repetitive, laborious tasks. 

 

At the same time, our research uncovered that many organisations are lagging in providing employees with the necessary training and advice around the use and deployment of these emerging technologies in the workplace. Without proper guidance on how to use AI appropriately, employees may end up using the tools in ways that could compromise security, privacy, regulatory compliance, or result in biased and inaccurate outputs. 

 

To ensure your workforce can harness the power of AI tools safely and compliantly, organisations need to take proactive steps to guide employees towards uses of these technologies that not only deliver business value but also adhere to legal and ethical standards.

 

Implement clear company guidelines  

Our research found that only 18% of employers have implemented risk management measures around AI. This is despite the copyright and privacy risks that are often associated with these technologies. 

 

Generative AI tools can expose businesses to a multitude of hazards if deployed carelessly. These models can produce biased, inaccurate, or outright false outputs if provided with poor-quality training data or prompts. Relying on AI tools without proper human validation can lead to the unchecked spread of misinformation, both internally and externally. At a minimum, businesses must ensure that review processes are in place to catch erroneous AI outputs before they cause harm.

 

But reviewing outputs is only one piece of the puzzle. Comprehensive governance of AI usage requires establishing clear policies around what types of data can be used to train models and how AI-generated content is used. Policies on how synthetic data is used in the workplace, along with training employees in the art of prompt engineering, are a necessity to prevent biased, inaccurate, or poor-quality outputs. It is also advisable to involve legal teams from the ideation stage to avoid potential risks further down the line and help accelerate your AI maturity.

 

Create a harmonious culture  

Companies should also foster an environment where employees feel comfortable reporting issues or unintended consequences, so that corrective actions can be taken when needed. With AI technology still in its infancy, and employees still learning how best to use these tools, mistakes will happen. A culture ruled by fear of reprisal encourages mistakes to be hidden, leading to misinformation and poor compliance practices.

 

By cultivating a culture of learning, companies can accelerate their collective knowledge of AI. Allowing small missteps enables staff to gain experience faster and helps the business avoid larger errors that could have significant consequences down the road. Going a step further, companies could set aside dedicated time for staff to experiment with AI technology. Google's policy of allowing employees to devote 20% of their time to building new skills, for example, could be adapted by others into similar initiatives focused on developing AI expertise across the workforce.

 

Collaborate with a reliable partner  

Deploying AI can be daunting for businesses that are unsure where to start or how to manage employee usage. Here, it is worth collaborating with a trusted partner who can provide the guidance and expertise to implement AI tools that add value while mitigating risk.

 

A good partner will also stay on top of the latest AI developments, ensuring you can innovate with confidence and in full compliance. 

 

The AI revolution is here to stay 

Businesses must take control of their AI future through workforce education, ethical guidelines and help from experienced partners. With prudent preparation, companies can unlock AI’s immense potential to empower employees and transform operations.  

 

The opportunity to combine people and technology in new ways that drive productivity, innovation and competitive advantage is here. Organisations that act decisively will thrive in the age of artificial intelligence.