Global leaders agree to launch international network of AI Safety Institutes

New agreement between 10 countries and the EU will help establish an international network of publicly backed AI Safety Institutes, after the UK launched the world's first last year.

  • Tuesday, 21st May 2024, by Phil Alsop

A new agreement between 10 countries plus the European Union, reached at the AI Seoul Summit, has committed nations to work together to launch an international network to accelerate the advancement of the science of AI safety.

The “Seoul Statement of Intent toward International Cooperation on AI Safety Science” will bring together the publicly backed institutions, similar to the UK’s AI Safety Institute, that have been created since the UK launched the world’s first at the inaugural AI Safety Summit – including those in the US, Japan and Singapore.

By coming together, the institutes in the network will build “complementarity and interoperability” between their technical work and approaches to AI safety, to promote the safe, secure and trustworthy development of AI.

This will include sharing information about models, their limitations, capabilities and risks, as well as monitoring specific “AI harms and safety incidents” where they occur and sharing resources to advance global understanding of the science around AI safety.

This was agreed at the leaders’ session of the AI Seoul Summit, bringing together world leaders and leading AI companies to discuss AI safety, innovation and inclusivity.

As part of the talks, leaders signed up to the wider Seoul Declaration which cements the importance of enhanced international cooperation to develop AI that is “human-centric, trustworthy and responsible”, so that it can be used to solve the world’s biggest challenges, protect human rights, and bridge global digital divides.

They recognised the importance of a risk-based approach in governing AI to maximise the benefits and address the broad range of risks from AI, to ensure the safe, secure, and trustworthy design, development, deployment, and use of AI.
