AI security beyond the model: establishing guardrails

Exploring a framework for AI security and governance focusing on real-world efficacy and accountability in enterprise environments.

  • Tuesday, 24th March 2026 Posted 5 hours ago in by Sophie Milburn
F5, AWS, and Microsoft have collaborated to produce a white paper focused on AI security beyond the model. The document provides guidance for enterprises seeking to implement effective guardrails in AI systems, aiming to address security and governance requirements in complex environments.

AI introduces new risks by expanding attack surfaces, which can expose organisations to operational, legal, and reputational threats. The white paper seeks to provide standardised methods for evaluating AI security solutions and brings together expertise from multiple sectors to address contemporary enterprise challenges.

Critical Capability Areas: The white paper highlights input threat detection, output data exfiltration management, and agentic AI controls to help identify vulnerabilities in AI systems.

Validation Testing Framework: It includes a blueprint for independent and adversarial testing to ensure AI security controls perform effectively under real-world conditions.

Governance and Operational Resilience: The guidance emphasises aligning governance frameworks and strengthening system resilience under stress or failure, maintaining the integrity of AI systems.

The white paper serves as a practical roadmap for enterprises navigating AI-driven transformations. By focusing on real-world efficacy, independent validation, and governance, it aims to provide organisations with the information needed to make informed decisions. It addresses issues such as data exfiltration, prompt injection, and governance failures, offering security teams tools to move beyond vendor claims.

Overall, the resource aims to provide enterprises with the questions and evaluation tools required to implement security measures, supporting the secure and accountable deployment of AI systems.