Interview: Ofgem Head of Process Digitisation on AI ethics

As AI hype shows no sign of slowing down, preventing data biases from becoming entrenched in government policy and operations is a crucial challenge to keep in focus.


Addressing potential solutions to this issue is Praveen Tomar, Head of Process Digitisation (Data and AI) at the Office of Gas and Electricity Markets (Ofgem). Focused on optimising processes and automating data workflows to enhance business efficiency, boost ROI and streamline operations, Tomar is deeply aware of the quick wins AI offers, but also of the structure and support needed to manage its risks.

The growing role of AI and the risk of bias

AI is an increasingly critical technology for Ofgem’s operational efficiency. Tomar’s understanding of AI use in the public sector has shaped his view that greater infrastructure is required to improve the representativeness of the data behind AI systems, particularly for vulnerable groups that may be subject to discrimination.

Tomar underscores: “We live in a world where the data used to inform mainstream AI models contains imbalance and bias. With 2.6 billion people disconnected from the internet, their experiences of the world are inevitably excluded from all kinds of databases that power many machine learning systems and training data.”

In the context of policy planning, countering the misinformed stereotypes that emerge from unrepresentative group data necessitates direct governance intervention to mitigate potential harms, including, but not limited to, the exclusion of marginalised groups from critical social welfare and state support.

To tackle this problem, Tomar advocates for a multi-stakeholder approach that democratises AI ethics through transparency and public consultation.

Transparency and explainability

On transparency, Tomar suggests going back to basics. "We often take for granted that when we say ‘Data’ or ‘AI,’ everyone is on the same page, but we know this is not the case. Older demographics, both within the Civil Service and the general public, are often sidelined from the technology that continues to play a critical role in their everyday lives.

“Before aspiring towards representation and participation, we must all get on the same page regarding proper representation in all forms of data, especially from as many ethnic groups as possible. To achieve this, we need AI models that are explainable and interpretable,” Tomar stressed.

As a recommendation, Tomar suggests that “data and AI literacy programmes should be instituted, not just for public organisations but for citizens too”, with literacy being the biggest gatekeeper to citizen buy-in and the comprehensive ethical evaluation needed to ensure that AI systems support all UK citizens.

Tomar highlights that public sector organisations should publicly and thoroughly document AI models, with details about training data, model architecture, decision-making processes, and how citizens’ personal data, including sensitive information, is used and protected.
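
One widely used format for this kind of public documentation is a “model card”. The sketch below illustrates what such a record might contain; every field name and value in it is hypothetical and does not describe any real Ofgem model.

```python
# Hypothetical sketch of the public model documentation Tomar calls for,
# loosely in the style of a "model card". All names and values are invented
# for illustration only.
model_card = {
    "model_name": "example-eligibility-classifier",
    "architecture": "gradient-boosted decision trees",
    "training_data": {
        "source": "synthetic demonstration data",
        "known_gaps": ["households without internet access under-represented"],
    },
    "decision_process": "scores applications; all rejections reviewed by a human",
    "personal_data": {
        "collected": ["postcode (region only)", "energy usage"],
        "protections": ["pseudonymised identifiers", "no special-category data retained"],
    },
}
```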

Tomar added, “Interpretable models (explainable AI) should be used wherever possible, and for complex models such as deep neural networks, techniques like LIME or SHAP should be used to provide local and global interpretability, breaking down the explainability barriers for the end users or consumers of that technology.”
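
To make the LIME/SHAP point concrete, here is a minimal sketch of post-hoc explanation with the open-source SHAP library, using a public dataset and an off-the-shelf gradient-boosted model as stand-ins; nothing here reflects Ofgem’s systems.

```python
# Minimal sketch of post-hoc explainability with SHAP. The model, features
# and data are hypothetical stand-ins, not anything deployed at Ofgem.
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Train a "complex" model that is not interpretable by inspection.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# TreeExplainer computes SHAP values: per-feature contributions to each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer(X)

# Local interpretability: why did the model make *this* prediction?
shap.plots.waterfall(shap_values[0])

# Global interpretability: which features matter most across all predictions?
shap.plots.beeswarm(shap_values)
```

The waterfall plot answers the local question (why this particular prediction), while the beeswarm plot answers the global one (which features drive the model overall).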

Regular audits allow the ethics behind AI decision-making processes to be revisited as technology, public opinion and the policy context change, he added.

Public consultation

Through such data and AI literacy programmes, civil society, academics and the public at large can form a more cultivated and informed view of the AI systems that influence their lives. Tomar argues that algorithmic transparency should be in place to allow broader deliberation on the ethics that drive decision-making.

“Large-scale transformation projects that use AI, such as autonomous vehicles in cities, or how AI is being rapidly used to shortlist job candidates, should not move ahead without stakeholder consultation and human confidence,” Tomar pointed out.

Databases that exclude the knowledge and data of particularly vulnerable and/or marginalised demographics will have adverse effects on AI-enabled services meant to support people from those groups. Tomar underscores that creating mechanisms for these groups to provide diversified feedback and consultation on the governance, ethics and logic behind AI decision-making is a necessary counterbalance to the exclusionary nature of public datasets.

Innovation and ethics

With cyber-attacks on the increase, Tomar sees no trade-off between AI innovation and ethics, viewing them instead as a necessary unit. “Robust in-house processes for protecting citizens’ data are a must. Personal identifiers, including names, addresses, gender and race, ought to be anonymised for predictive analytics so that, in the event of a data leak, the integrity of citizens’ personal data is kept intact and cannot be exploited by cybercriminals,” Tomar proposed.
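
As a rough illustration of that proposal (not Ofgem’s actual pipeline), the Python sketch below pseudonymises a direct identifier with a keyed hash and drops the rest before analytics; the column names, secret handling and hash scheme are all assumptions for the example, and real deployments would need proper key management and stronger de-identification techniques.

```python
# A minimal sketch of the anonymisation step Tomar proposes, assuming a simple
# tabular pipeline. Column names and the salted-hash approach are illustrative
# only; production systems need vault-managed keys and formal de-identification
# guarantees (e.g. k-anonymity or differential privacy).
import hashlib
import hmac
import pandas as pd

SECRET_KEY = b"example-key"  # hypothetical; never hard-code a real key

def pseudonymise(value: str) -> str:
    """Replace an identifier with a keyed hash: records stay linkable but unreadable."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

records = pd.DataFrame({
    "name": ["A. Citizen", "B. Resident"],
    "address": ["1 High St", "2 Low Rd"],
    "gender": ["F", "M"],
    "monthly_usage_kwh": [310, 275],  # the analytic signal actually needed
})

# Pseudonymise what must be kept for linkage; drop direct identifiers entirely.
records["customer_id"] = records["name"].map(pseudonymise)
analytics_ready = records.drop(columns=["name", "address", "gender"])
print(analytics_ready)
```

With the identifiers removed or hashed, a leak of the analytics table exposes no names, addresses or protected characteristics, which is the integrity property Tomar describes.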

Note: these are Praveen Tomar’s personal views, not an endorsement from his organisation.
