The rapid evolution of artificial intelligence is introducing new security considerations, which has led to the publication of three CIS Critical Security Controls (CIS Controls) Companion Guides. These were developed collaboratively by the Center for Internet Security (CIS), Astrix Security, and Cequence Security, and are intended to support security in increasingly complex AI environments.
Recent advances in technologies such as large language models (LLMs), autonomous and semi-autonomous agents, and Model Context Protocol (MCP) integrations have introduced risks that may not be fully addressed by traditional cybersecurity approaches. In response, the Companion Guides provide focused guidance for different parts of the AI ecosystem.
In enterprise environments where AI is increasingly used in production systems—such as copilots and autonomous workflows—risks may include data leakage, unintended agent actions, and misuse of credentials. The Companion Guides aim to provide practical guidance aligned with real-world AI deployment scenarios, focusing on applying established security principles in these contexts.
The guides adapt existing CIS Controls for use in AI-related architectures, supporting their application to systems involving LLMs, agents, and MCP-based integrations. This approach is intended to help security and IT teams assess risks and identify mitigation strategies within AI-enabled systems.
The collaboration between CIS, Astrix, and Cequence reflects an effort to address the security considerations that accompany AI adoption. Rather than introducing new frameworks, the guides emphasize applying existing security principles to AI systems, supporting safer integration of these technologies into enterprise environments.