Shining light on your organisation’s shadow tools: “Are you really ready for AI?”

By Jon Bance, chief operating officer at Leading Resolutions.

Monday, 15th September 2025

The age of AI creates a new arena for data breaches and leaks, expanding the security risks businesses already face. Although AI clearly plays a major role in cyber defence, and in powering attacks from malicious actors, your organisation’s own unregulated, hidden tools could be just as dangerous when it comes to data loss.

According to Gartner’s Quarterly Emerging Risk Report, one of the top five emerging risks facing organisations worldwide is ungoverned employee use of external AI tools, known as “Shadow AI” (the successor to Shadow IT). This isn’t a distant scenario; your teams are already using unprotected tools to increase productivity, regardless of any limits set out in current IT policies.

This leaves your company open to critical data exposure. When your staff are actively feeding sensitive company data into publicly available tools, cybercriminals don’t need to steal it themselves. Businesses need to put AI policies in place immediately and concentrate on training their staff not only to take advantage of new technologies, but also to mitigate the risks already being introduced to their networks.

The hidden threat of shadow AI

By now, everyone is aware of generative AI tools, whether they are actively using them or not. However, without a proper ruleset in place, everyday employee actions can quickly become security nightmares. When an organisation doesn’t maintain an approved framework of AI tools, its employees will commonly turn to whatever applications they can find to get through everyday tasks.

This can be everything from employees pasting sensitive client information or proprietary code into public generative AI tools, to developers downloading promising open-source models from unverified repositories. Third-party vendors are already quietly integrating AI-boosted features into software your teams use, without formal notification. From a security perspective, individuals and entire teams alike are choosing to integrate custom AI solutions to solve immediate problems, bypassing company cybersecurity reviews entirely.

The numbers bear this out. Gartner’s 2025 Cybersecurity Innovations in AI Risk Management and Use survey highlighted that 79% of cybersecurity leaders suspect employees are misusing approved GenAI tools, while 69% reported that prohibited tools are being used regardless. Perhaps most alarmingly, 52% believe custom AI is being built without any risk checks, a recipe for intellectual property leakage and severe compliance breaches.

Most organisations lack awareness 

The root cause of turning to Shadow AI isn’t malicious intent. In the absence of clear policies, training and oversight, and under growing pressure to deliver more, faster, people will naturally seek the most effective support to get the job done. Unlike threat actors, who aim to disrupt and exploit weaknesses in business infrastructure for a hefty payout, employees aren’t intentionally leaking data outside your organisation. AI is simply an accessible, powerful tool that many find exciting.

Teams are constantly being pushed to increase output and efficiency. But the trust companies place in their employees to perform doesn’t always come with clear AI governance or visibility of access for IT teams. And even with more prohibitive policies in place, employees will still find workarounds to get the job done. Shadow AI isn’t just a problem of technology, but a problem of process and culture as well.

Building a proactive AI-first strategy

Codifying your AI governance policies should be a priority, as you cannot manage what you haven’t defined. Establish clear, practical rules for which tools are acceptable in your organisation and which aren’t, including AI-specific data-handling rules, and embed AI reviews into third-party procurement. A balanced, strategic approach to these challenges requires more than direction from your IT team; it must come directly from the C-suite.
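As an illustration of what “codified” can look like in practice, the sketch below (in Python, with placeholder tool names and data classifications, not a recommended list) expresses an approved-tools policy as data that both people and monitoring scripts can read and check against.

```python
# Minimal sketch of a machine-readable AI tool policy.
# The tool names and data classifications are illustrative placeholders only.
APPROVED_AI_TOOLS = {
    # tool name -> highest data classification it may handle
    "internal-copilot": "confidential",
    "vendor-chat-enterprise": "internal",
}

DATA_SENSITIVITY = ["public", "internal", "confidential", "restricted"]


def is_use_permitted(tool: str, data_class: str) -> bool:
    """Return True only if the tool is approved and cleared for this data class."""
    ceiling = APPROVED_AI_TOOLS.get(tool)
    if ceiling is None or data_class not in DATA_SENSITIVITY:
        return False  # unapproved tool or unknown classification: default deny
    return DATA_SENSITIVITY.index(data_class) <= DATA_SENSITIVITY.index(ceiling)


print(is_use_permitted("internal-copilot", "confidential"))      # True
print(is_use_permitted("vendor-chat-enterprise", "restricted"))  # False
print(is_use_permitted("random-public-chatbot", "public"))       # False: not approved
```

A policy held in this form can feed both procurement reviews and the monitoring described below, rather than living only in a document nobody reads.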

Equally, you cannot protect against what you can’t see. Tools such as Data Loss Prevention (DLP) and Cloud Access Security Brokers (CASB), which can detect unauthorised AI use, must be an essential part of your security monitoring toolkit. Ensuring these alerts feed directly into your SIEM, and defining clear processes for escalation and correction, are also key to maximising security.
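To make the monitoring point concrete, here is a minimal sketch of the kind of detection such tools perform, assuming a hypothetical proxy-log format and an illustrative list of public GenAI domains; it is not any specific DLP or CASB product’s API.

```python
# Minimal sketch: flag outbound requests to public GenAI endpoints in a web-proxy
# log and emit SIEM-ready JSON alerts. The domain list and log format are
# illustrative assumptions, not a specific product's configuration.
import json
import re
from datetime import datetime, timezone

# Hypothetical list of public GenAI domains not approved for company data.
UNSANCTIONED_AI_DOMAINS = {
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
}

LOG_LINE = re.compile(r"^(?P<ts>\S+) (?P<user>\S+) (?P<method>\S+) https?://(?P<host>[^/\s]+)")


def scan_proxy_log(lines):
    """Yield one alert per request made to an unsanctioned GenAI domain."""
    for line in lines:
        match = LOG_LINE.match(line)
        if match and match.group("host") in UNSANCTIONED_AI_DOMAINS:
            yield {
                "detected_at": datetime.now(timezone.utc).isoformat(),
                "rule": "shadow-ai-egress",
                "user": match.group("user"),
                "destination": match.group("host"),
                "severity": "medium",
            }


if __name__ == "__main__":
    sample = [
        "2025-09-15T09:12:03Z j.smith POST https://chat.openai.com/backend-api/conversation",
        "2025-09-15T09:12:41Z a.jones GET https://intranet.example.com/reports",
    ]
    for alert in scan_proxy_log(sample):
        # In practice this JSON event would be forwarded to the SIEM's ingestion
        # endpoint so the escalation process described above can pick it up.
        print(json.dumps(alert))
```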

AI literacy must come in tandem with this, integrated directly into company culture. That means educating teams on the real-world risks and on how to innovate responsibly, not just efficiently. The most effective way to combat Shadow AI in your organisation is to provide a better, safer, more secure alternative. Fostering a collaborative culture that openly shares AI best practice is also essential: don’t just say “no” to public tools, but offer a “yes, and here’s how you do it securely.”

The first step is assessing readiness

A professional readiness assessment should be your first step, as it identifies the gaps in your organisation and opens a path to building the right, resilient foundation. This includes an overview of your current technology and AI environment, including any hidden risks, alongside a review of existing policies and monitoring capabilities. Prioritising AI use cases that can deliver tangible value without compromising control is key.

Building an AI roadmap that balances innovation with governance and security is critical before opening the floodgates and bringing Shadow AI into the light. When it comes to new and emerging technologies, your business mindset shouldn’t just be about what these tools can do, but about how best you can control them within your organisation.
