Re-humanising the machine
Arthur Piper, IBA Technology Correspondent
Tuesday 15 December 2020
When algorithms used by businesses or government agencies make unintentionally biased decisions, the impact on people can be profound. Global Insight assesses what can be done to re-humanise these decisions, and to provide explanations of the outcomes to those affected.
In 2020, the UK government chose to use a moderating algorithm to decide A-level results for students unable to sit their exams due to Covid-19. The project quickly ran into problems. Around 40 per cent of students – in particular those from state-run schools – had their teacher-assessed grades lowered. Meanwhile, only an additional 2.4 per cent of students achieved the highest A or A* grades compared with 2019, according to exam regulator Ofqual.
Darren Ngasseu Nkamga had his triple-A* grades lowered to two Bs and a C, and missed out on his place to study medicine at Imperial College London, where he had been awarded a scholarship.
Bias baked in
‘BAME students like me have been impacted significantly more by this algorithm, because we’re essentially in a more disadvantaged situation’, says Nkamga. ‘There’s a higher proportion of BAME students in “bad” postcodes who go to schools which don’t perform as well, and so when moderated by historical profiles, BAME students will suffer more because teachers can’t give them the grades they deserve, despite their academic potential.’
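The mechanism Nkamga describes can be made concrete with a deliberately simplified, hypothetical sketch – it is not Ofqual's actual model. If a moderation rule forces each school's results to fit its historical grade distribution, a high-achieving individual at a historically lower-performing school is capped at that school's past ceiling, whatever their teacher assessed:

```python
# Hypothetical, deliberately simplified moderation rule (not Ofqual's actual algorithm).
# Teacher-assessed grades are re-fitted to the distribution of grades the school
# achieved in previous years, regardless of any individual student's ability.

GRADE_SCALE = ["U", "E", "D", "C", "B", "A", "A*"]  # weakest to strongest

def moderate(teacher_grades, historical_distribution):
    """Rank this year's cohort by teacher assessment, then re-assign grades so the
    school's results match its historical grade distribution."""
    cohort_size = len(teacher_grades)
    ranked = sorted(teacher_grades, key=GRADE_SCALE.index, reverse=True)
    moderated = []
    for rank, _original_grade in enumerate(ranked):
        percentile = rank / cohort_size  # 0.0 = strongest student in the cohort
        # Walk down the historical distribution to find the grade band this
        # percentile position fell into in previous years.
        cumulative = 0.0
        for grade in reversed(GRADE_SCALE):
            cumulative += historical_distribution.get(grade, 0.0)
            if percentile < cumulative:
                moderated.append(grade)
                break
    return list(zip(ranked, moderated))  # (teacher grade, moderated grade)

# A school that historically awarded no A* grades: the student assessed at A*
# by their teacher is capped at the school's historical ceiling of an A.
history = {"A*": 0.0, "A": 0.1, "B": 0.3, "C": 0.4, "D": 0.2}
print(moderate(["A*", "A", "B", "C"], history))
# [('A*', 'A'), ('A', 'B'), ('B', 'C'), ('C', 'C')]
```

In a rule of this kind, the individual's own attainment is never the limiting factor; the school's past results are.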
While the government subsequently revised all of the grades moderated by the algorithm to reflect teacher assessment, for many, the damage had already been done. Their university places had been reassigned to other students.
The episode is a striking example of what happens when algorithms make unintentionally biased decisions – and when the outcomes of those choices cannot adequately be explained to those affected. While such bias in computer programs has been known about for some time, the sheer variety and scale of artificial intelligence (AI) applications, as well as their increasing complexity, has made tackling the issue of clear explanation more pressing than ever.
As algorithmic decisions are embedded ever more deeply into the decision-making landscape of our society – from exam moderation and the social media ecosphere to facial recognition technologies – cases of unfair bias potentially affect more and more people.
Re-humanising decisions
In fact, so-called explainable artificial intelligence (‘XAI’) has been the subject of intense research during 2019-20, both within the computing community and among regulators and businesses. This is because programmers often create effective machine learning systems – that is, the program teaches itself – without understanding how those routines come to the decisions they do: the so-called black box problem. Nor can they accurately predict how programs will behave when launched in the real world – as Microsoft discovered when it trialled its ‘friendly’ chatbot Tay on Twitter, only to find it repeated and generated racist slurs and offensive language.
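The black box problem is easy to reproduce. The sketch below, which assumes the scikit-learn library purely for illustration, trains a model that classifies individual cases confidently, yet whose 'reasoning' is spread across thousands of learned thresholds that no human can read off as an explanation.

```python
# A minimal sketch of the 'black box' problem, assuming the scikit-learn
# library purely for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# A synthetic dataset standing in for, say, a set of applicant records.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The model decides confidently for any individual case...
print(model.predict(X[:1]), model.predict_proba(X[:1]))

# ...but its 'reasoning' is spread across hundreds of trees and thousands of
# learned split thresholds. There is no single rule to point to when the person
# affected asks why that decision was made about them.
print(sum(tree.tree_.node_count for tree in model.estimators_), "decision nodes in total")
```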
Attempting to re-humanise machine-powered decision-making is no easy task, as Alejandro Barredo Arrieta and colleagues explain in their December 2019 review paper on the issue, Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. One key problem is that transparent AI is currently much less efficient than its black box counterparts. Put very simply, adding in explainability creates friction that could actually degrade the quality of the decisions a program makes.
Ironically, the more one can explain an AI-generated decision, the more in need of explanation it may be. Yet, say the paper's authors, responsible AI requires that humans ‘understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners’.
The solution is complex and seems a long way off, especially as concepts such as trustworthiness, fairness, confidentiality and privacy are not clearly defined in the sector. Barredo Arrieta and his co-authors suggest that many organisations are likely to continue to use black box systems while bolting explanatory systems on around them.
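One widely used 'bolt-on' approach, sketched below under the same scikit-learn assumption and reusing the black box model from the previous example, is post-hoc probing: the opaque model is left untouched, and a separate routine measures from the outside how much its performance degrades when each input feature is scrambled (permutation importance).

```python
# One widely used bolt-on explanation: permutation importance, which probes a
# black box model from the outside rather than opening it up.
# Reuses 'model', 'X' and 'y' from the sketch above (scikit-learn assumed).
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank features by how much scrambling each one degrades the model's score:
# a rough, after-the-fact account of what the model appears to rely on.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {idx}: importance {result.importances_mean[idx]:.3f}")
```

Such accounts are generated around the model rather than read out of it – precisely the kind of wrap-around arrangement the review paper expects most organisations to settle for.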
Non-human intentions
This lack of transparency in how AI systems reach decisions is not just a technical and social problem. It is also a legal one. As Yavar Bathaee, a partner at Bathaee Dunne in the United States, wrote in the Harvard Journal of Law & Technology in Spring 2018: ‘The implications of [the] inability to understand the decision-making process of AI are profound for intent and causation tests, which rely on evidence of human behaviour to satisfy them.’
With human decision-makers, prosecutors and others can build up a picture of intent and causation by interviewing or cross-examining them, or by examining the trails of evidence left behind in emails, letters and memos. An AI system offers no comparable trail.
‘The AI’s thought process may be based on patterns that we as humans cannot perceive, which means understanding the AI may be akin to understanding another highly intelligent species – one with entirely different senses and powers of perception’, Bathaee concluded. ‘This also means that little can be inferred about the intent or conduct of the humans that created or deployed the AI, since even they may not be able to foresee what solutions the AI will reach or what decisions it will make.’
This raises important questions about liability when things go wrong. And the issue of responsibility cuts across many legal areas – from antitrust and data privacy to anti-discrimination.
A right to explanation
Regulators are not prepared to wait for the deep problems of XAI to be solved. For example, recent guidance from the UK Information Commissioner’s Office (ICO) – entitled Explaining decisions made with AI – is designed to help organisations understand their responsibilities for explaining to people how automated decisions are made.
Produced with the Alan Turing Institute, the ICO’s detailed blueprint covers everything from basic explanations to how organisations can implement such a process. The primary legal framework comprises the EU General Data Protection Regulation and the UK Data Protection Act 2018, which cover any personal data used to ‘train, test or deploy an AI system’.
The Equality Act 2010 is also relevant to the types of explanations organisations could be compelled to provide. But the ICO mentions many other areas of law, including the right to challenge, through judicial review, government decisions made using AI – a route that could have applied to the A-level exam results fiasco.
‘How can you show that you treated an individual fairly and in a transparent manner when making an AI-assisted decision about them? One way is to provide them with an explanation of the decision and document its provision’, says the ICO guidance. ‘You need to be able to give an individual an explanation of a fully automated decision to enable their rights to obtain meaningful information, express their point of view and contest the decision.’
The message is that if you could reasonably expect to receive an explanation from a human about a decision that they have made about you, you should expect the same courtesy from an AI-enabled system too.
The bias embedded in such programs will not disappear overnight, but such provisions could help organisations detect or eliminate bias earlier. Bolt-on systems could help explain decisions made by black box AI – or those made by AI and humans together. Most importantly, the guidance spells out clearly the fair and unbiased treatment humans can expect from their machine intelligences.
Arthur Piper is a freelance journalist. He can be contacted at arthurpiper@mac.com