AI pivotal for national security decisions, but carries risk
New government-commissioned report outlines the importance of AI in strategic decision-making on national security.
The report, authored by The Alan Turing Institute, states that AI must be viewed as a valuable tool for senior national security decision makers in government and intelligence organisations, and for supporting analysts in processing data quickly and accurately.
Dr Alexander Babuta, Director of The Alan Turing Institute’s Centre for Emerging Technology and Security, said:
“Our research has found that AI is a critical tool for the intelligence analysis and assessment community. As the national institute for AI, we will continue to support the UK intelligence community with independent, evidence-based research, to maximise the many opportunities that AI offers to help keep the country safe.”
Jointly commissioned by the Government Communications Headquarters (GCHQ) and the Joint Intelligence Organisation (JIO), and authored by The Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS), the report considers how both the risks and benefits of AI-enriched intelligence should be communicated to senior decision-makers in national security.
With the huge growth in data available for analysis, AI can be used not just to handle administrative data processing but also to identify patterns, trends and anomalies beyond human capability. The report’s authors state that not utilising the technology would be a missed opportunity and could undermine the value of intelligence assessments.
The Deputy Prime Minister, Oliver Dowden, said:
“We will carefully consider the findings of this report to inform national security decision makers to make the best use of AI in their work protecting the country.”
“We are already taking decisive action to ensure we harness AI safely and effectively, including hosting the inaugural AI Safety Summit and the recent signing of our AI Compact at the Summit for Democracy in South Korea.”
The report also identifies dimensions of uncertainty, noting that effective communication to those making high-stakes decisions and additional guidance for those using AI-enriched insights within national security are still needed.
Anne Keast-Butler, Director of GCHQ, said:
“AI is not new to GCHQ or the intelligence assessment community, but the accelerating pace of change is. In an increasingly contested and volatile world, we need to continue to exploit AI to identify threats and emerging risks, alongside our important contribution to ensuring AI safety and security.”
This report follows action already taken by the government to ensure the UK leads the world in the adoption of AI tools across the public sector, as set out in the Deputy Prime Minister’s recent speech on AI for Public Good at Imperial College. For example, the Generative AI Framework for HMG provides guidance for those working in government on using generative AI safely and securely.