
POLICY UPDATES

The AI Safety Summit brings together key countries, as well as leading technology organisations, academia and civil society to inform rapid national and international action at the frontier of AI development.

 

The summit forms part of the UK government’s wider commitment to being a global AI leader and a Science and Tech Superpower. We will keep pace with AI’s constant evolution, harness its potential to enhance our lives, be a thought leader on its safe deployment, and foster the tech talent and skills in the UK needed to support our aims. To achieve this, we have:

 

  • Published the pro-innovation AI Regulation White Paper in March 2023 for consultation. We heard from over 400 individuals and organisations during the consultation and will publish the government response later this year, so that the outcomes of the AI Safety Summit can be taken into account.
  • Established the Frontier AI Taskforce, chaired by Ian Hogarth, to advance the sociotechnical research that will underpin frontier AI governance.
  • Invested £2.5 billion in AI since 2014, including £900 million announced in March 2023 for a new AI Research Resource and exascale compute capacity. This includes Bristol’s Isambard-AI, expected to be one of Europe’s most powerful supercomputers.
  • Announced £100 million for BridgeAI, a programme to drive the use of AI in low-adoption sectors such as construction, agriculture and the creative industries.
  • Invested £290 million in a broad package of AI skills initiatives to address the skills gap, support citizens and businesses in taking advantage of AI technologies, and drive economic growth.
  • Committed to drive the responsible adoption of AI to deliver world-class public services, through a whole government approach as set out in the National AI Strategy.

 

Updates, outcomes and detailed information about summit policy, as well as broader government action on how we can make the most of AI, are set out below.

UK GOVERNMENT UPDATES

AI Safety Summit 2023: Roundtable Chair's Summaries, 2 November

A summary of the discussions which took place at the AI Safety Summit at Bletchley Park.

03 November 2023

AI Safety Summit 2023: Chair’s statement – state of the science, 2 November

The Chair’s statement on a ‘State of the Science’ Report to understand the capabilities and risks of Frontier AI.

02 November 2023

AI Safety Summit 2023: Chair’s statement – safety testing, 2 November

The Chair’s statement on session outcomes from the discussion on safety testing.

02 November 2023

AI Safety Institute: overview

An introduction to the AI Safety Institute, the first state-backed organisation focused on advanced AI safety for the public interest.

02 November 2023

AI Safety Summit 2023: Chair’s statement, 2 November

The Chair’s summary of the discussions which took place at the AI Safety Summit at Bletchley Park.

02 November 2023

AI Safety Summit: Roundtable Chair's Summaries, 1 November

A summary of the discussions which took place at the AI Safety Summit at Bletchley Park.

01 November 2023

AI Safety Summit 2023: The Bletchley Declaration

The Bletchley Declaration by countries attending the AI Safety Summit, 1-2 November 2023.

01 November 2023

AI Safety Summit Programme

Programme for the AI Safety Summit 2023, which will take place on 1 and 2 November at Bletchley Park, Buckinghamshire.

16 October 2023 

AI Safety Summit: An Introduction

Information about the scope of the summit, how the summit defines Frontier AI, and stakeholder and public engagement plans.

Written Ministerial Statement on AI

A Written Ministerial Statement by the Secretary of State (Department for Science, Innovation and Technology) providing an update on the UK Government’s AI policies.

Frontier AI Taskforce - Progress Report

The first progress report of the UK Government’s Frontier AI Taskforce on its work to build an AI research team that can evaluate risk at the frontier of AI.

White Paper - A Pro-Innovation Approach to AI Regulation

A White Paper detailing the UK Government’s proposal for a pro-innovation AI regulatory framework.

3 August 2023

Centre for Data Ethics & Innovation - AI assurance techniques portfolio

A portfolio of AI assurance techniques, developed by the CDEI as a resource for those designing, developing, deploying and procuring AI-enabled systems.

7 June 2023

The UK Science & Technology Framework

The UK Government’s framework setting out its strategic vision for Science & Technology. 

6 March 2023

AI Standards Hub

The AI Standards Hub is an initiative, supported by the UK Government, dedicated to the evolving international field of standardisation for AI technologies.

AI Action Plan

The AI Action Plan outlines the activities being taken by each government department to advance the government’s National AI Strategy and cement the UK’s position as an AI leader.

18 July 2022

National AI Strategy

The UK Government’s 10-year plan for promoting AI in the UK.

22 September 2021

COMPANY POLICIES

Amazon
  • Responsible capability scaling
  • Red teaming and model evaluations
  • Model reporting and information sharing
  • Security controls, including securing model weights
  • Reporting structure for vulnerabilities
  • Identifiers of AI-generated material
  • Prioritising research on risks posed by AI
  • Preventing and monitoring model misuse
  • Data input controls and audits

Anthropic
  • Responsible capability scaling
  • Red teaming and model evaluations
  • Model reporting and information sharing
  • Security controls, including securing model weights
  • Reporting structure for vulnerabilities
  • Identifiers of AI-generated material
  • Prioritising research on risks posed by AI
  • Preventing and monitoring model misuse
  • Data input controls and audits

DeepMind
  • Responsible capability scaling
  • Red teaming and model evaluations
  • Model reporting and information sharing
  • Security controls, including securing model weights
  • Reporting structure for vulnerabilities
  • Identifiers of AI-generated material
  • Prioritising research on risks posed by AI
  • Preventing and monitoring model misuse
  • Data input controls and audits

Inflection
  • Responsible capability scaling
  • Red teaming and model evaluations
  • Model reporting and information sharing
  • Security controls, including securing model weights
  • Reporting structure for vulnerabilities
  • Identifiers of AI-generated material
  • Prioritising research on risks posed by AI
  • Preventing and monitoring model misuse
  • Data input controls and audits

Meta
  • Responsible capability scaling
  • Red teaming and model evaluations
  • Model reporting and information sharing
  • Security controls, including securing model weights
  • Reporting structure for vulnerabilities
  • Identifiers of AI-generated material
  • Prioritising research on risks posed by AI
  • Preventing and monitoring model misuse
  • Data input controls and audits

Microsoft
  • Responsible capability scaling
  • Red teaming and model evaluations
  • Model reporting and information sharing
  • Security controls, including securing model weights
  • Reporting structure for vulnerabilities
  • Identifiers of AI-generated material
  • Prioritising research on risks posed by AI
  • Preventing and monitoring model misuse
  • Data input controls and audits

OpenAI
  • Responsible capability scaling
  • Red teaming and model evaluations
  • Model reporting and information sharing
  • Security controls, including securing model weights
  • Reporting structure for vulnerabilities
  • Identifiers of AI-generated material
  • Prioritising research on risks posed by AI
  • Preventing and monitoring model misuse
  • Data input controls and audits