More code and faster attacks mean a rethink for vulnerability management

By Sylvain Cortes, VP Strategy at Hackuity.

Saturday, 2nd May 2026. Posted by Phil Alsop.

The relationship between vulnerability disclosure and exploitation is fundamentally changing. Where security teams once had weeks to assess and respond to most newly reported CVEs, attackers are now moving in a matter of days. Research has found that the exploitation of high and critical vulnerabilities rose 105% year-on-year in 2025, with the median time from disclosure to active exploitation falling to just five days.

AI is also changing the game: software is built and shipped faster than ever, and AI is now accelerating the exploitation-discovery side of that equation too. One of the most vivid examples is Anthropic's Mythos, a tool deemed so powerful it was restricted from public release because of its capabilities. The model identified more than 2,000 previously unknown vulnerabilities across every major operating system and web browser in the space of just seven weeks.

As all aspects of vulnerability management accelerate, many teams are still relying on programmes designed around periodic review cycles and reactive remediation, built for a different era. With both the volume of code and the threat landscape continually expanding, organisations have no time to waste in modernising how they prioritise and address vulnerabilities.

Why vulnerability management (VM) can be a liability without the right direction

Prioritisation is one of the biggest issues holding back VM programmes, and for many organisations, vulnerability management is still structured around compliance. It’s a sensible foundation, especially for organisations operating in highly regulated fields. It means frameworks are auditable, deadlines are measurable, and leadership dashboards look reassuring.

The problem is that compliance-driven programmes are optimised for demonstrating control, not achieving it. Our research found that 43% of organisations prioritise vulnerabilities primarily through a compliance lens, while only 36% have adopted a genuinely risk-based approach. Compliance frameworks establish baselines, but they are rarely designed to reflect an organisation's specific threat profile or the real-world exploitability of what sits in its environment.

When severity scores and SLA adherence become the primary measures of success, teams are effectively forced to treat vastly different risks as equally urgent. Effort gets spread thinly across low-impact issues while genuinely dangerous exposures accumulate in the backlog.

What makes this particularly difficult to address is that, on paper, everything appears to be working. Tickets are being closed, audits are being passed, and activity levels are high.

But we found that the mean time to remediate critical vulnerabilities is still four weeks. Set against an exploitation window that has now collapsed to five days, that gap is a risk few organisations can afford.

AI coding tools are widening the attack surface

These issues are only getting worse over time. Alongside threat actors that are finding and exploiting CVEs more quickly, we’re also seeing far more software being created, resulting in a rapidly increasing attack surface.

AI coding is accelerating this trend, with platforms such as OpenAI's Codex and Anthropic's Claude Code having fundamentally lowered the barrier to building and shipping software. These tools are enabling developers to produce more code, more quickly, across more of the stack than was previously possible.

The pace at which that code is being written is outpacing the processes designed to assess it, increasing the likelihood that vulnerable software reaches production.

Meanwhile, AI is also acting as a force multiplier on the offensive side, compressing the time required for reconnaissance, automating attack scripting, and narrowing the gap between vulnerability research and active exploitation.

Security teams are caught in the middle, defending this expanding attack surface with VM processes built for a slower, more predictable environment.

The urgent need for risk-based prioritisation

Addressing this requires more than incremental improvement to existing processes. It requires a major strategic change that redefines how vulnerabilities are assessed and dealt with.

Risk-based prioritisation assesses vulnerabilities in terms of exploitability, asset criticality and business impact. Two vulnerabilities carrying identical severity scores can represent entirely different levels of organisational risk depending on whether one affects a customer-facing platform or an isolated internal system. Without that context, teams are triaging in the dark.
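The idea can be made concrete with a minimal scoring sketch. This is an illustrative model only, not Hackuity's methodology: the weights, the `epss`-style exploitation probability, and the 1-to-5 asset criticality scale are all assumptions chosen to show how two vulnerabilities with identical severity scores can rank very differently once context is applied.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss: float              # base severity score, 0.0-10.0
    epss: float              # estimated probability of exploitation, 0.0-1.0
    asset_criticality: int   # 1 (isolated internal) to 5 (customer-facing)
    internet_exposed: bool

def risk_score(v: Vulnerability) -> float:
    """Blend severity, exploitability and business context into one number.

    Weights are illustrative: exploitability and asset context together
    outweigh raw severity, reflecting a risk-based rather than
    severity-based view.
    """
    exposure = 1.5 if v.internet_exposed else 1.0
    return round(
        (0.3 * v.cvss / 10
         + 0.4 * v.epss
         + 0.3 * v.asset_criticality / 5) * exposure,
        3,
    )

# Two CVEs with the same severity score, very different real-world risk.
vulns = [
    Vulnerability("CVE-A", cvss=9.8, epss=0.02,
                  asset_criticality=1, internet_exposed=False),
    Vulnerability("CVE-B", cvss=9.8, epss=0.85,
                  asset_criticality=5, internet_exposed=True),
]
for v in sorted(vulns, key=risk_score, reverse=True):
    print(v.cve_id, risk_score(v))
```

Despite identical CVSS scores, the internet-facing, actively exploitable vulnerability on a critical asset ranks far above the isolated internal one, which is exactly the distinction a compliance-driven queue flattens away.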

The data suggests most organisations already sense this. Our research found that 83% incorporate some form of threat intelligence into their VM processes - yet only 36% have embedded it into a coherent, risk-based decision-making model. The intelligence exists, but many lack the framework to act on it.

Automation is the secret ingredient to better prioritisation. Tasks such as deduplication, enrichment and ticket creation should not be consuming analyst time - these are precisely the functions that automated workflows handle well, freeing skilled personnel to focus on judgment-intensive decisions rather than manual processing.
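The deduplication and enrichment steps above can be sketched in a few lines. Everything here is hypothetical scaffolding: the scanner names, the finding schema, and the `known_exploited` feed stand in for whatever tooling and threat-intel sources an organisation actually runs.

```python
from collections import defaultdict

# Raw findings as emitted by two hypothetical scanners; the same CVE on
# the same host is reported twice by different tools.
raw_findings = [
    {"scanner": "scanner_a", "host": "web-01", "cve": "CVE-2026-0001"},
    {"scanner": "scanner_b", "host": "web-01", "cve": "CVE-2026-0001"},
    {"scanner": "scanner_a", "host": "db-02",  "cve": "CVE-2026-0002"},
]

# Illustrative threat-intel feed: CVEs known to be actively exploited.
known_exploited = {"CVE-2026-0001"}

def deduplicate(findings):
    """Collapse duplicate (host, cve) pairs, keeping every source scanner."""
    merged = defaultdict(list)
    for f in findings:
        merged[(f["host"], f["cve"])].append(f["scanner"])
    return [
        {"host": host, "cve": cve, "sources": sources}
        for (host, cve), sources in merged.items()
    ]

def enrich(finding):
    """Attach exploitation intel so triage needs no manual analyst lookup."""
    finding["actively_exploited"] = finding["cve"] in known_exploited
    return finding

# Three raw findings become two enriched, ticket-ready records.
tickets = [enrich(f) for f in deduplicate(raw_findings)]
```

The point is not the code itself but the division of labour: merging, tagging and routing are mechanical and repeatable, so they belong in a pipeline, while the judgment call of what to fix first stays with people.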

Equally important is tighter alignment between vulnerability management and security operations. Vulnerability data cannot sit in a separate queue, disconnected from detection and response workflows. When these functions operate in isolation, the speed advantage that risk-based prioritisation creates is lost before it reaches the teams who need it most. Remediation effort, to be meaningful, must land in the right place at the right time.

Matching attacker velocity

The organisations best positioned to manage risk in this environment are not necessarily those with the largest security teams or the most tools. They are the ones with the clearest line of sight between technical exposure and business impact - and the processes to act on it before attackers do.

Attackers are not waiting for the next scheduled review; they’re probing for weaknesses at machine speed, capitalising on the gap between disclosure and remediation that compliance-driven programmes were never designed to close. Vulnerability management must be treated as a strategic risk prevention capability - not a routine IT function - with the executive visibility and investment that designation demands.

The window between exploit discovery and attacks being launched is narrowing as AI supercharges the cybersecurity arms race. The VM programmes that close it fastest will be the ones built around risk, not process.
