Half of GRC professionals struggle to keep pace with changes to compliance requirements

New report from Drata shows the growing complexity of GRC and mixed sentiments on AI.

  • Sunday, 2nd March 2025, by Phil Alsop

Drata has published its report, The State of GRC 2025: From Cost Center to Strategic Business Driver, which examines how governance, risk, and compliance (GRC) professionals are approaching data protection regulations, AI, and their ability to maintain customer trust. The report highlights trends, challenges, and an outlook on the future of GRC.

The rise of AI and increasing global regulations have raised the stakes for businesses as they navigate complex requirements to protect sensitive data and ensure ethical practices. 96% of respondents cite high-profile breaches and compliance fines as reasons GRC is getting more attention. With mounting pressure to address GRC properly, 45% of those surveyed are worried about balancing compliance and innovation, data privacy and protection, and maintaining operational resilience. As customer expectations around privacy and transparency grow, The State of GRC Report shows a robust GRC strategy isn’t just a regulatory obligation; it’s a foundation for long-term business success. 98% of professionals surveyed believe GRC achievements are worth touting to customers and other critical stakeholders to build internal and external trust.

Additional findings include:

  • Businesses are experiencing significant consequences from inadequate compliance postures and processes, ranging from brand safety and reputation issues (51%) to security or data breaches (49%).

  • 48% of GRC professionals struggle to keep pace with updates to existing compliance frameworks and to identify areas needing attention.

  • 100% of companies surveyed expect employees to increase their use of AI technologies in the next 12 months, yet only 10% have a GRC program fully prepared to manage it.

  • While 46% believe AI will improve regulatory compliance, many fear AI biases impacting GRC decision-making (43%) and AI hallucinations giving improper GRC guidance (39%).

  • Manual GRC interventions account for 14 hours per week on average.

“Governance, risk, and compliance has long been a pain point for organizations, and despite the improvements we’ve seen in recent years, it’s clear many of those challenges still exist today, making it difficult for businesses to properly maintain their GRC program and effectively maintain trust,” said Matt Hillary, Drata VP of Security and CISO. “In addition to adding more compliance frameworks to their program, security and GRC teams should anticipate significant changes to the GRC function as a result of AI. GRC teams who aren’t prepared for these changes will experience major roadblocks with scaling their compliance programs and up-leveling their organizations to meet these demands.”
