2026 OSSRA report: evaluating the risks in AI-powered open source development

The latest OSSRA report reveals rising challenges in AI-driven open source development, highlighting security and licensing concerns within the software ecosystem.

Thursday, 12th March 2026 | Posted by Sophie Milburn

Black Duck has released the 2026 Open Source Security and Risk Analysis (OSSRA) report, which highlights year-over-year increases in open source security, licensing, and operational risk.

The report analysed 947 codebases across 17 industries, providing insight into a software landscape shaped by AI-assisted development. Code, dependencies, and the risks that accompany them are being introduced at a faster pace than before, as tracked in the Black Duck KnowledgeBase, a comprehensive open source intelligence repository.

Open source technology appears in 98% of application codebases, indicating widespread exposure to third-party risk. The integration of AI-generated code introduces additional risks not previously captured at scale.

Key findings include:
  • Expanding Attack Surface: The average number of vulnerabilities per codebase increased by 107%, open source component counts rose 30% year-over-year, and the number of files per codebase grew by 74%. The use of AI models creates a new, largely unregulated attack surface.
  • Legal and Licensing Challenges: AI-generated code can create intellectual property (IP) and licensing risks, as models may reproduce code governed by restrictive licenses such as GPL and AGPL. Two-thirds (66%) of audited codebases contained license conflicts, representing a 12% increase from the previous year.
  • Governance in the AI Era: The report identifies a gap in governance maturity. While 76% of organisations assess AI-generated code for security risks, only 54% evaluate IP and licensing concerns, and 56% assess quality. Just 24% conduct comprehensive assessments covering IP, licensing, security, and quality.
The OSSRA notes that organisations may face compliance challenges with upcoming regulations such as the EU Cyber Resilience Act (CRA) unless AI models are tracked and managed with the same rigour as open source components, including maintaining accurate SBOMs and implementing clear AI usage policies.
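
The report does not prescribe specific tooling, but the underlying idea can be sketched. The fragment below is a minimal illustration assuming a CycloneDX-style SBOM in which AI models appear as components alongside open source packages; the component names, versions, and licence policy are hypothetical, not taken from the report. It shows how a single inventory can drive both restrictive-licence and missing-metadata checks:

```python
import json

# Illustrative, CycloneDX-style SBOM fragment (field names follow the CycloneDX
# JSON layout, but the components, versions, and licences are hypothetical).
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "left-pad", "version": "1.3.0",
         "licenses": [{"license": {"id": "MIT"}}]},
        {"type": "library", "name": "readline", "version": "8.2",
         "licenses": [{"license": {"id": "GPL-3.0-only"}}]},
        # An AI model tracked as a first-class component, with no declared licence yet.
        {"type": "machine-learning-model", "name": "code-assistant-model",
         "version": "2026.03", "licenses": []},
    ],
}

# Licences an organisation's policy might treat as restrictive (policy-specific).
RESTRICTIVE = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only"}


def review(bom: dict) -> None:
    """Flag components with restrictive or missing licence declarations."""
    for comp in bom.get("components", []):
        ids = {entry.get("license", {}).get("id") for entry in comp.get("licenses", [])}
        ids.discard(None)
        if not ids:
            print(f"[missing licence] {comp['type']}: {comp['name']}")
        elif ids & RESTRICTIVE:
            print(f"[restrictive]     {comp['type']}: {comp['name']} ({', '.join(sorted(ids))})")


# Round-trip through JSON to mimic loading an SBOM file from disk.
review(json.loads(json.dumps(sbom)))
```

The point is simply that once AI models are recorded in the same inventory as open source dependencies, existing licence and policy checks extend to them without a separate process.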

Jason Schmitt, CEO of Black Duck, said, “The pace at which software is created now exceeds the pace at which most organisations can secure it.”

The Importance of Visibility: Knowing exactly what is included in software, whether open source components or AI models, remains central to maintaining software integrity and answering stakeholder questions.