A report from the Committee on Standards in Public Life has called on the UK government to increase transparency in AI governance and for ethics to be “embedded” in the framework surrounding the technology. The report argued that despite AI being used or developed across various sectors (healthcare, welfare, social care, policing, and immigration), the government does not publish any centralized audit of AI use in government or the wider public sector. Now, the UK is calling for government transparency on AI use – let’s see what’s happening.

What does AI know about you in the UK?

Most of what the public knows about AI usage is thanks to journalists and academics filing Freedom of Information requests or trawling through reams of public procurement data. This is because public bodies are not taking proactive steps to outline and release information about how they use AI.

The Committee on Standards in Public Life argued that the public should be able to access the “information about the evidence, assumptions and principles on which policy decisions have been made”.

In focus groups convened for the review, members of the public also expressed a desire for transparency. “This serious report sadly confirms what we know to be the case – that the Conservative government is failing on openness and transparency when it comes to the use of AI in the public sector,” said Chi Onwurah MP, the shadow digital minister.

“The government urgently needs to get a grip before the potential for unintended consequences gets out of control,” said Onwurah.

Simon Burall, a senior associate at Involve, a public participation charity, commented: “It’s important that these debates involve the public as well as elected representatives and experts, and that the diversity of the communities that are affected by these algorithms are also involved in informing the trade-offs about when these algorithms should be used and not.”

Do you think the UK has enough AI ethics oversight?

The UK is calling for government transparency on AI use, but how is AI already being used?

Predictive policing programs are currently used to identify crime “hotspots” and to make individual risk assessments, using algorithms to estimate the likelihood that a particular person will commit a crime.

Now, Liberty, a human rights group, has urged police forces to stop using these AI programs, arguing that they reinforce existing social biases. The algorithms rely on inadequate data and on markers for race and ethnicity, which further perpetuate racism within the policing system. Liberty also said that there is a “severe lack of transparency” regarding how these techniques are actually used.
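To make the bias concern concrete, here is a minimal, hypothetical sketch in Python of the feedback loop critics of predictive policing describe. The district names, rates, and proportional-patrol rule are illustrative assumptions, not any real force’s system: when patrols are allocated according to past recorded crime, more crime gets recorded where patrols already are, and the historical skew never corrects itself.

import random

random.seed(0)

# Two districts with the SAME underlying crime rate, but district A starts
# with more recorded crime (e.g. a legacy of past over-policing).
# All figures here are illustrative, not real crime data.
true_rate = {"A": 0.10, "B": 0.10}
recorded = {"A": 60, "B": 40}

for year in range(5):
    # The "model": allocate next year's patrols in proportion to recorded
    # counts alone. It never observes the equal true rates.
    total = sum(recorded.values())
    patrol_share = {d: recorded[d] / total for d in recorded}

    # Crime is only recorded where officers are present, so the district
    # with more patrols generates more records at the same true rate.
    for district in recorded:
        patrols = round(1000 * patrol_share[district])
        recorded[district] += sum(
            random.random() < true_rate[district] for _ in range(patrols)
        )

    shares = {d: round(s, 2) for d, s in patrol_share.items()}
    print(f"year {year}: patrol share {shares}")

Both districts commit crime at the same true rate, yet the patrol share never drifts toward an even split: the initial disparity in the records is locked in, and each new year of “data” appears to confirm it. Add explicit markers for race and ethnicity as model inputs, as Liberty alleges, and the same dynamic feeds directly into the discrimination the group describes.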

The committee’s report further highlighted that the “application of anti-discrimination law to AI needs to be clarified”.


For AI to succeed, there needs to be total transparency


The UK is calling for government transparency on AI use, and frankly it’s about time. AI is built by people, and people carry biases that the technology can end up reproducing. In the UK, AI is used in healthcare, welfare, immigration, and policing (among other areas) – all sensitive domains that demand engagement with the biases already present in those systems, biases that opaque AI use can only further reinforce.