2026 OSSRA report: evaluating the risks in AI-powered open source development

The latest OSSRA report reveals rising challenges in AI-driven open source development, highlighting security and licensing concerns within the software ecosystem.

Thursday, 12th March 2026 · Posted by Sophie Milburn

Black Duck has released the 2026 Open Source Security and Risk Analysis (OSSRA) report, highlighting increases in risks related to open source security, licensing, and operations compared with previous years.

The report analysed 947 codebases across 17 industries, providing insight into a software landscape influenced by AI-assisted development. Code, dependencies, and associated risks are being introduced at a faster pace, as tracked in the Black Duck KnowledgeBase, a comprehensive open source intelligence repository.

Open source technology appears in 98% of application codebases, indicating widespread exposure to third-party risk. The integration of AI-generated code introduces additional risks not previously captured at scale.

Key findings include:
  • Expanding Attack Surface: The report shows average vulnerabilities per codebase increased by 107%. Open source component counts rose 30% year-over-year, and the number of files per codebase grew by 74%. The use of AI models creates a new, largely unregulated attack surface.
  • Legal and Licensing Challenges: AI-generated code can create intellectual property (IP) and licensing risks, as models may reproduce code governed by restrictive licenses such as GPL and AGPL. Two-thirds (66%) of audited codebases contained license conflicts, representing a 12% increase from the previous year.
  • Governance in the AI Era: The report identifies a gap in governance maturity. While 76% of organisations assess AI-generated code for security risks, only 54% evaluate IP and licensing concerns, and 56% assess quality. Just 24% conduct comprehensive assessments covering IP, licensing, security, and quality.
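The license-conflict risk described above can be checked mechanically against a dependency inventory. A minimal Python sketch of such a check; the license policy and dependency names below are illustrative, not Black Duck's actual scanning methodology:

```python
# Flag dependencies whose licenses conflict with a proprietary distribution
# model. The set of "restrictive" licenses here is an illustrative subset of
# SPDX identifiers, not an exhaustive or authoritative policy.
RESTRICTIVE_LICENSES = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only"}

def find_license_conflicts(dependencies, distribution="proprietary"):
    """Return names of dependencies whose license conflicts with the
    intended distribution model (only proprietary is modelled here)."""
    if distribution != "proprietary":
        return []
    return [name for name, lic in dependencies.items()
            if lic in RESTRICTIVE_LICENSES]

# Hypothetical inventory, including a snippet an AI assistant might
# reproduce from copyleft-licensed training data.
deps = {
    "requests": "Apache-2.0",
    "readline-clone": "GPL-3.0-only",
}
print(find_license_conflicts(deps))  # -> ['readline-clone']
```

In practice this inventory would come from an automated scan rather than a hand-written dictionary, which is precisely why AI-generated code that arrives outside normal dependency channels is hard to catch.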
The OSSRA notes that organisations may face compliance challenges with upcoming regulations such as the EU Cyber Resilience Act (CRA) unless AI models are tracked and managed with the same rigour as open source components, including maintaining accurate SBOMs and implementing clear AI usage policies.
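Tracking AI models "with the same rigour as open source components" implies recording them in the SBOM itself. A minimal Python sketch of what such an entry could look like, loosely following the CycloneDX 1.5 schema (which added a machine-learning-model component type); the component names and versions are hypothetical:

```python
import json

# Minimal SBOM recording an AI model alongside an ordinary open source
# dependency. Structure loosely follows CycloneDX 1.5; field values are
# illustrative only.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "library",
            "name": "left-pad",              # ordinary open source dependency
            "version": "1.3.0",
            "licenses": [{"license": {"id": "MIT"}}],
        },
        {
            "type": "machine-learning-model",
            "name": "example-code-assistant",  # hypothetical AI model
            "version": "2026.1",
        },
    ],
}

def list_ai_models(bom):
    """Return the names of AI/ML model components tracked in the SBOM."""
    return [c["name"] for c in bom.get("components", [])
            if c.get("type") == "machine-learning-model"]

print(json.dumps(list_ai_models(sbom)))  # -> ["example-code-assistant"]
```

An SBOM in this shape gives an organisation a single record to query when a regulator, or a stakeholder, asks what went into the software.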

Jason Schmitt, CEO of Black Duck, said, “The pace at which software is created now exceeds the pace at which most organisations can secure it.”

The Importance of Visibility: Knowing exactly what is included in software, whether open source components or AI models, remains a key factor in maintaining software integrity and responding to stakeholder inquiries.
