Enterprises must follow five key steps to adopt AI securely

By Omer Grossman, Chief Information Officer, CyberArk.

  • Wednesday, 1st November 2023. Posted by Phil Alsop

Artificial intelligence (AI) has had a significant impact on organisations. Enterprises implementing the technology have the opportunity to revolutionise every aspect of business operations, from customer-facing applications and services to back-end data and infrastructure. AI is also a great tool to make the workforce feel more engaged and empowered. But with AI adoption comes escalating security risks that business leaders must keep in mind.

One of the main concerns when adopting AI is identity security. In fact, the number of identity-related breaches keeps increasing, and 68% of the companies that experienced such an attack in 2022 report a significant impact on business as a result. With cyberattackers now using AI for deepfake fraud attacks and machine learning-based social engineering, organisations are facing more advanced identity-related attacks which can be difficult to anticipate.

While AI-enabled tools are very useful for IT and security executives looking to create new business opportunities, it’s vital to be aware of the risks. Businesses must prioritise security and adapt to the changing threat landscape – and following the five steps below is vital to embracing AI securely and using it to its full potential.

1) Security must be at the core of companies’ AI strategies

Businesses should begin by defining their organisation’s stance on AI while considering its impact on security. Whether your company is already using generative AI at enterprise level or just exploring a proof of concept to test the waters, clarity from the top down is essential. A well-communicated position ensures alignment throughout your organisation and also ensures security processes are established with AI in mind.

2) Raising awareness by encouraging communication

Establishing AI-specific company guidelines and employee training is crucial, but genuinely impactful dialogue is a two-way street. Encouraging employees to share AI-related questions and ideas is a great way to tackle emerging challenges and devise creative AI strategies as a team. Creating cross-functional teams that can address these submissions from all standpoints – including innovation, growth and security – is also important, as it ensures employees feel their input is properly acted upon.

3) Changing the approach to AI adoption

According to the 2023 CyberArk Identity Security Threat Landscape Report, employees in 62% of organisations use unapproved AI-enabled tools, increasing identity security risk. This shows that IT and security leaders have to change the way they approach AI adoption, encouraging AI-powered innovation rather than blocking its potential.

Furthermore, IT departments in a range of industries are experiencing a surge in workforce requests for AI-enabled tools and add-ons. Rather than enforcing blanket ‘no AI’ policies, organisations should instead look to enhance how they vet third-party software and ensure it won’t jeopardise security. AI-fuelled phishing campaigns are becoming increasingly convincing, so having the right level of due diligence ensures workers can benefit from AI without putting security at risk.

4) Understanding what CFOs need

The onus on technology leaders to build operationally efficient platforms and environments continues to grow, particularly given the current economic climate. Demonstrating that AI is not just a nice-to-have but a tool with real business value is essential to securing buy-in from CFOs. An honest, rational approach backed by hard data is critical; illustrating how a tool can safely advance multiple business priorities is even more powerful.

5) Staying up to date with the risks of AI

Vigilantly assessing AI-enabled tools before and during their use is the only way to continuously ensure their safety. Businesses must be prepared to block and roll back any AI-enabled tool if security concerns demand it. Ultimately, staying one step ahead of attackers means thinking like one, focusing on the vulnerabilities an AI tool might introduce.

Using AI as a tool to navigate the threat landscape

AI is playing a big role for IT and security teams when it comes to enhancing cybersecurity efforts and resilience. Human talent remains critical for combating emerging threats; however, AI can help bridge some of the gaps caused by the 3.4-million-person cybersecurity workforce shortage.

Generative AI also has the potential to transform security functions as it continues to improve. Security operations centres (SOCs) are a prime example – some of the more time-intensive security tasks, such as triaging level-one threats or updating security policies, can be automated with the right software. This frees SecOps professionals to focus on more satisfying work, which could help reduce staffing shortages and curb employee turnover and attrition – the second-largest contributor to the cyber skills shortage, according to the latest (ISC)2 Cybersecurity Workforce Study.

While today’s rapid technological progress and the increasing use of AI help organisations improve their competitiveness, new challenges arise. Being a true business leader means being able to make informed decisions, even in situations of uncertainty. By prioritising identity security and keeping an open mind, technology leaders can confidently embrace AI to create new opportunities, without compromising their company’s reputation or losing consumer and employee trust.