The Department for Science, Innovation and Technology (DSIT) has launched a Portfolio of AI Assurance Techniques to support and promote responsible use of this technology.
The portfolio has been developed by the Centre for Data Ethics and Innovation (CDEI) and is designed to support those involved in designing, developing, deploying or procuring AI-enabled systems.
It showcases real-world examples of AI assurance techniques being used across a range of sectors to support the development of trustworthy and ethical AI.
According to the portfolio, assurance means measuring and evaluating AI systems against relevant criteria, including regulations, standards, ethical guidelines and organisational values. The portfolio identifies techniques in areas such as impact assessment and evaluation, bias and compliance audits, certification, conformity assessment, performance testing and formal verification.
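By way of illustration, a bias audit in its simplest form compares a system's outcomes across demographic groups. The Python sketch below implements one common check of this kind, a demographic parity test against the "four-fifths" rule of thumb; the data, function names and 0.8 threshold are illustrative assumptions, not drawn from the portfolio itself.

```python
# A minimal, illustrative sketch of one assurance technique: a simple
# bias audit via a demographic parity check. Data and threshold are
# assumed for illustration; they are not drawn from the portfolio.

def selection_rates(outcomes: list[int], groups: list[str]) -> dict[str, float]:
    """Positive-outcome rate per demographic group."""
    totals: dict[str, int] = {}
    positives: dict[str, int] = {}
    for outcome, group in zip(outcomes, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

def passes_four_fifths_rule(rates: dict[str, float], threshold: float = 0.8) -> bool:
    """Flag disparate impact if any group's selection rate falls below
    `threshold` times the highest group's rate (the four-fifths rule)."""
    highest = max(rates.values())
    return all(rate >= threshold * highest for rate in rates.values())

# Example: model decisions (1 = approved) for applicants in two groups.
outcomes = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = selection_rates(outcomes, groups)
print(rates)                           # {'a': 0.8, 'b': 0.4}
print(passes_four_fifths_rule(rates))  # False: 0.4 < 0.8 * 0.8
```

A real audit would of course go further, examining error rates, intersectional groups and the provenance of the data, but the basic shape is the same: define a fairness criterion, measure the system against it, and flag breaches.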
The CDEI has mapped these techniques to the principles set out in the UK government’s white paper on AI regulation, which outlines five cross-cutting principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
The portfolio will be developed over time, with new case studies added in future iterations.
A lack of knowledge
The portfolio comes amid growing industry appetite for AI, according to research conducted by the CDEI for its industry temperature check. One of the key barriers identified in that research was a significant lack of knowledge and skills around AI assurance: participants reported that even when they want to assure their systems, they often do not know what assurance techniques exist or how to apply them in practice across different contexts and use cases.
“The portfolio aims to address this lack of knowledge and help industry to navigate the AI assurance landscape,” Nuala Polo, Senior Policy Advisor at the CDEI, wrote in a blog post.
The portfolio is the latest in a series of initiatives by the UK government to support the development and use of tools for trustworthy AI. These include the publication of the roadmap to an effective AI assurance ecosystem in the UK and the establishment of the UK AI Standards Hub to champion the use of international standards.