CDDO publishes pioneering algorithmic transparency standard
The Cabinet Office’s Central Digital and Data Office (CDDO) has developed, together with the Centre for Data Ethics and Innovation (CDEI), an algorithmic transparency standard for government departments and public sector bodies.
The standard will be piloted by several public sector organisations and further developed based on feedback.
The move makes the UK one of the first countries in the world to develop a national algorithmic transparency standard, strengthening the UK’s position as a world leader in AI governance and building on commitments made in the National AI Strategy and National Data Strategy.
"Algorithms can be harnessed by public sector organisations to help them make fairer decisions, improve the efficiency of public services and lower the cost associated with delivery," said Lord Agnew, Minister of State at the Cabinet Office. "However, they must be used in decision-making processes in a way that manages risks, upholds the highest standards of transparency and accountability, and builds clear evidence of impact."
In its landmark review into bias in algorithmic decision-making, the CDEI recommended that the UK government place a mandatory transparency obligation on public sector organisations using algorithms to support significant decisions affecting individuals.
"Meaningful transparency in the use of algorithmic tools in the public sector is an essential part of a trustworthy digital public sector," said Imogen Parker, Associate Director (Policy) at the Ada Lovelace Institute. "The UK government’s investment in developing this transparency standard is an important step towards achieving this objective, and a valuable contribution to the wider conversation on algorithmic accountability in the public sector."
CDDO has worked closely with the CDEI to design the standard. It also consulted experts from across civil society and academia, as well as the public. The standard is organised into two tiers.
The first tier includes a short description of the algorithmic tool, including how and why it is being used, while the second includes more detailed information about how the tool works, the dataset(s) used to train the model, and the level of human oversight.
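As an illustration only, the two-tier structure described above might be captured in a machine-readable record along the following lines. This is a minimal sketch: the field names, types, and layout are hypothetical and are not taken from the published standard.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of a two-tier transparency record.
# Field names are illustrative, not drawn from the published standard.

@dataclass
class Tier1Summary:
    tool_name: str        # name of the algorithmic tool
    purpose: str          # why the tool is being used
    how_it_is_used: str   # how the tool supports decisions

@dataclass
class Tier2Detail:
    how_the_tool_works: str       # description of the model or logic
    training_datasets: List[str]  # dataset(s) used to train the model
    human_oversight: str          # level and nature of human review

@dataclass
class TransparencyRecord:
    organisation: str
    tier_1: Tier1Summary
    tier_2: Tier2Detail

# Example record with placeholder values
record = TransparencyRecord(
    organisation="Example public sector body",
    tier_1=Tier1Summary(
        tool_name="Example triage tool",
        purpose="Prioritise incoming casework",
        how_it_is_used="Produces a score that a caseworker reviews",
    ),
    tier_2=Tier2Detail(
        how_the_tool_works="Classifier trained on historical case data",
        training_datasets=["Anonymised historical casework records"],
        human_oversight="All scores reviewed by a human before any decision",
    ),
)
```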
The standard will help teams be meaningfully transparent about the way in which algorithmic tools are being used to support decisions, especially in cases where they might have a legal or economic impact on individuals.
The standard will be piloted by several government departments and public sector bodies in the coming months. Following the piloting phase, CDDO will review the standard based on feedback gathered and seek formal endorsement from the Data Standards Authority in 2022.
By publishing this information proactively, the UK government is aiming to promote trustworthy innovation by providing better visibility of the use of algorithms across the public sector, and enabling unintended consequences to be mitigated early on.
"In the AI Council’s AI Roadmap, we highlighted the need for new transparency mechanisms to ensure accountability and public scrutiny of algorithmic decision-making; and encouraged the UK government to consider analysis and recommendations from the Centre for Data Ethics and Innovation," said Tabitha Goldstaub, Chair of the UK Government’s AI Council. "I’m thrilled to see the UK government acting swiftly on this; delivering on a commitment made in the National AI Strategy, and strengthening our position as a world leader in trustworthy AI."
Publication of the standard comes after the UK government sought views on a proposal to introduce transparency reporting on public sector use of algorithms in decision-making, as part of its consultation on the future of the UK’s data protection regime. The UK government is currently analysing the feedback received.