European Parliament suspends AI features over data security concerns

The European Parliament has disabled AI features on official devices due to data security concerns involving external cloud servers.

  • Wednesday, 25th February 2026, by Sophie Milburn

The European Parliament's recent decision to deactivate built-in AI functionalities on official devices used by lawmakers and staff has sparked substantial discussion. The measure was taken after IT services concluded that they could not guarantee sensitive data would remain resident on these devices, raising fears of unauthorised transmission to external cloud servers.

The move reflects growing awareness among public bodies of the data exposure risks inherent in integrating AI technology into everyday workplace tools. The core concern is that software relying on external cloud processing may inadvertently move sensitive information outside controlled networks.

For publicly accountable bodies operating in regulated environments, the temporary deactivation raises pertinent questions: where exactly data is processed, how long it is stored, who can access it, and whether it may be reused for model training or shared with third parties.

When technology providers cannot answer these questions definitively, authorities tend to adopt stringent risk management measures, restricting the activities in question until clear and precise data handling protocols are specified.

The situation underscores the need for robust preemptive governance and technical assurance. Security evaluations should precede the adoption of any new technology, AI included, and productivity tools should be assessed for external data processing risks before deployment to avoid unforeseen breaches.

Establishing clear policies, bolstering necessary guardrails, and strengthening controls during the deployment and use of innovative technologies is crucial in ensuring data protection and security. This sentiment is especially resonant given the need to adapt swiftly to rapid technological changes.

Overall, the European Parliament's decision to disable AI features while it addresses these data security concerns is a pointed reminder that governance and policy frameworks must keep evolving to safeguard sensitive information in today's interconnected digital landscape.
