Extending CIS controls to AI, Agent, and MCP environments

The new CIS Companion Guides provide security guidance for emerging AI environments, including LLMs and MCP integrations.

Friday, 24 April 2026, by Sophie Milburn

The rapid evolution of artificial intelligence is introducing new security considerations, prompting the publication of three CIS Critical Security Controls (CIS Controls) Companion Guides. The guides were developed collaboratively by the Center for Internet Security (CIS), Astrix Security, and Cequence Security, and are intended to support security in increasingly complex AI environments.

Recent advances in technologies such as large language models (LLMs), autonomous and semi-autonomous agents, and Model Context Protocol (MCP) integrations have introduced risks that may not be fully addressed by traditional cybersecurity approaches. In response, the Companion Guides provide focused guidance for different parts of the AI ecosystem:

  • AI LLM Companion Guide: Addresses security considerations for large language models, including risks such as prompt manipulation, context handling issues, and potential exposure of sensitive data.
  • AI Agent Companion Guide: Covers oversight of autonomous and semi-autonomous agents, with attention to safe tool execution, defined levels of autonomy, and controlled system access.
  • MCP Companion Guide: Describes security practices for Model Context Protocol implementations, including the management of Non-Human Identities (NHIs) and ensuring interactions can be audited.
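The agent-oriented ideas above, such as safe tool execution, defined levels of autonomy, and controlled system access, can be illustrated with a small policy gate. This is a hypothetical sketch, not an implementation from the Companion Guides: the tool names, autonomy levels, and allowlist are illustrative assumptions.

```python
# Hypothetical sketch of gating agent tool calls behind an allowlist and
# explicit autonomy levels. All names and policy choices here are
# illustrative, not taken from the CIS Companion Guides.
from dataclasses import dataclass
from enum import Enum


class Autonomy(Enum):
    READ_ONLY = 1    # agent may only call read-only tools
    SUPERVISED = 2   # state-changing actions require human approval
    AUTONOMOUS = 3   # agent may call any allowlisted tool


@dataclass(frozen=True)
class Tool:
    name: str
    mutates_state: bool  # does the tool change external systems?


# Only tools registered here are callable at all (assumed example tools).
ALLOWLIST = {
    "search_docs": Tool("search_docs", mutates_state=False),
    "create_ticket": Tool("create_ticket", mutates_state=True),
}


def authorize(tool_name: str, level: Autonomy, approved: bool = False) -> bool:
    """Return True only if the call is permitted under the agent's autonomy level."""
    tool = ALLOWLIST.get(tool_name)
    if tool is None:              # unknown tools are denied outright
        return False
    if not tool.mutates_state:    # read-only tools are always allowed
        return True
    if level is Autonomy.AUTONOMOUS:
        return True
    if level is Autonomy.SUPERVISED:
        return approved           # write actions need explicit human approval
    return False                  # READ_ONLY agents cannot mutate state
```

Under this sketch, a supervised agent can search documentation freely but cannot create a ticket until a human approves, and tools outside the allowlist are denied regardless of autonomy level.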

In enterprise environments where AI is increasingly used in production systems—such as copilots and autonomous workflows—risks may include data leakage, unintended agent actions, and misuse of credentials. The Companion Guides aim to provide practical guidance aligned with real-world AI deployment scenarios, focusing on applying established security principles in these contexts.
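One common mitigation for the data-leakage risk mentioned above is redacting likely-sensitive values before a prompt leaves the enterprise boundary. The sketch below is a deliberately simple, assumed example: the patterns (and the `sk-` key format) are illustrative and far from exhaustive.

```python
# Hypothetical sketch: redact likely-sensitive values before sending text to
# an external LLM. Patterns are illustrative assumptions, not a complete or
# recommended redaction policy.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),  # assumed key format
}


def redact(text: str) -> str:
    """Replace each pattern match with a placeholder tag like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


print(redact("Contact ops@example.com, key sk-abcdefghijklmnop1234"))
# → Contact [EMAIL], key [API_KEY]
```

Production systems would pair pattern matching with context-aware classification, but even this minimal gate shows where such a control sits in the request path: between the application and the model.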

The guides adapt existing CIS Controls for use in AI-related architectures, supporting their application to systems involving LLMs, agents, and MCP-based integrations. This approach is intended to help security and IT teams assess risks and identify mitigation strategies within AI-enabled systems. Specifically, the guides:

  • Adapt CIS Controls for AI-based architectures, aiming to enable their application to LLMs, agents, and MCP interfaces without introducing separate frameworks.
  • Provide structured, prioritised guidance intended to support secure AI development and deployment practices.
  • Combine established cybersecurity control frameworks with considerations specific to agent-based AI systems and API-driven environments.
  • Cover the AI security lifecycle, including input handling, context processing, agent behaviour, tool usage, and protocol-level access. 
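The protocol-level access and auditability themes above can be made concrete with a tamper-evident log of tool invocations attributed to a Non-Human Identity. This is a hypothetical sketch: the field names, hash-chaining scheme, and NHI identifier format are assumptions for illustration, not part of the MCP specification or the guides.

```python
# Hypothetical sketch: an append-only, hash-chained audit record for
# protocol-level tool access by a Non-Human Identity (NHI). Field names and
# the identifier scheme are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone


def audit_record(nhi_id: str, tool: str, args: dict, prev_hash: str) -> dict:
    """Build a log entry chained to the previous entry's hash."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "nhi": nhi_id,      # the non-human identity making the call
        "tool": tool,
        "args": args,
        "prev": prev_hash,  # chaining makes silent edits detectable
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    return entry


genesis = "0" * 64
e1 = audit_record("svc-agent-01", "create_ticket", {"title": "disk alert"}, genesis)
e2 = audit_record("svc-agent-01", "search_docs", {"q": "runbook"}, e1["hash"])
print(e2["prev"] == e1["hash"])  # True: each entry is chained to the last
```

Because every entry embeds the hash of its predecessor, modifying or deleting any earlier record breaks the chain, which is one straightforward way to make agent and MCP interactions auditable after the fact.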

The collaboration between CIS, Astrix, and Cequence reflects an effort to address security considerations associated with AI adoption. The guides emphasise applying existing security principles to AI systems to support safer integration of these technologies into enterprise environments.