For ethical AI, get a social scientist

Two recent articles, by Hannah Kerner and Carly Kind, offer some clear insight into why we are seeing more and more push-back against automated decision-making in public services.

Kerner describes how ‘real-world’ applications of AI and machine learning are seen as marginal within the AI research community. Most prized are novel methods that push the boundaries of what the technologies can do in the abstract, rather than those that address real problems. Not only does this leave the potential of these ground-breaking methods to drive improvements in society untapped, it also means that when AI, machine learning and algorithmic decision-making are applied to social issues, they often seem to go badly wrong.

Kind, in her excellent primer on ethical AI, discusses how the field is only now getting to grips with the impact of AI on the ‘real world’, rather than focusing on developing guidelines for the sector, or overly technical ‘solutions’ to issues such as biased data. Until now, she argues, there has been a lack of recognition that AI is entirely grounded in, influenced by, and replicates the political, cultural and social landscape in which it is developed and operates.

Automated decision-making, surveillance and predictive tools are sold as clean, efficient and fair ways to operate public services, reducing bureaucracy and costs and drawing on supposedly neutral data. In fact, these systems are far from neutral; they are subject to the same ideological influences and biases as any other policy instrument, and replicate the same power imbalances that already exist in areas such as welfare benefits. As Virginia Eubanks says, these systems do not arrive from outer space, unsullied by politics or bias; they are entirely shaped by the systems they were built in. They are also layered on top of existing flaws: the UK’s Universal Credit benefits system, for example, which a House of Lords Committee recently noted is ‘based around an idealised claimant’ and therefore already ‘[harms] many, particularly the most vulnerable’. To implement automated decision-making and AI on top of a system that is already fundamentally flawed, as the DWP is known to be doing, seems a recipe for disaster.

When the promise of efficient and fair technological solutions meets the reality of the outside world, things do not pan out as planned, leading to some notable U-turns. Some UK local authorities are now recognising the gap between what is promised and what is actually delivered, and rowing back from using predictive technologies.

Tech companies are unlikely to be best placed to understand the social and policy issues their tools might be used to ‘solve’, or to test those tools and understand the impact of the results in the context they will operate in. Building a tool that you think will be ethical is not the same as that tool operating ethically in the messy and complex real world. Designers and engineers appear to prioritise a beautiful technical solution over one that delivers ‘good’ outcomes. But where is the motivation for them to care about, or understand, the problems they are building tools for?

This is where dialogue and partnership have to come in, and the need for social scientists and public consultation in the development and deployment of these tools becomes clear. The flaws and social consequences need to be identified and mitigated before systems are rolled out, and in some cases this should mean that they never see the light of day.

Social science can operate on two levels to drive a more ethical approach to technologies. Firstly, it can interrogate whether the way technologies operate is fair and just: is the data being used to make decisions on welfare biased? Does it replicate power imbalances? Will it lead to fairer decision-making? Secondly, it must consider whether the outcomes a tool is being used for are themselves just and fair: is an AI system, for example, being used to find ways to deny more people access to welfare support? Social science academics, researchers, policymakers and activists must go beyond considering whether a technology can be better designed, to actively questioning whether its purpose is fair and just, and to thinking about how these tools can be used to improve the status quo rather than maintain it.

(c) Anna Dent 2020. I provide social research, policy analysis and development, writing and expert opinion, and project development in Good Work and the Future of Work / In-Work Poverty and Progression / Welfare benefits / Ethical technology / Skills / Inclusive growth