Work, Welfare, Technology and Power

This review of Kate Crawford’s Atlas of AI (not out yet in the UK, so I’m just going off the review!) describes how the book situates AI in the real world. Crawford highlights its physicality, for example the environmental and social impact of extracting the lithium that sits inside the batteries of so many of our devices. In the accompanying interview she also explains how framing AI narrowly as a technical domain, in which problems can always be fixed with a technical solution, has obscured wider problems of injustice and democracy.

Exploring the sector as an ‘outsider’ (I come from a social science background), I have always had these issues at the forefront of my mind. My knowledge of the inner workings of algorithms, predictive analytics or facial recognition is limited, but I can see how they pose serious threats to fairness and freedom if used in the wrong way.

As Crawford says, at their heart these technologies and the debates surrounding them turn on issues of power: the power to know, the power to do, and the power to exercise rights. In so many examples of the application of new technologies in the world of work and welfare, existing power dynamics are simply reinforced.


Algorithmic management, where day-to-day management duties such as allocating shifts and tasks are performed by algorithms or other automated systems rather than by human line managers, consolidates power in the hands of business owners and managers. Through monitoring and surveillance, these systems and the managers behind them can track workers’ activity.

Decisions about things like shift patterns or performance reviews can be fully or partly automated using this data, with workers given little or no explanation and no recourse to challenge the outcome. The basis on which decisions are made, and the underlying data, are hidden from workers, who have no power to know.


Decisions on eligibility for welfare benefits, monitoring of compliance with benefit rules, and fraud detection are all increasingly datafied and automated, doubling down on already opaque and often discriminatory systems in which claimants have very few rights. Understanding why and how decisions are made becomes even harder when staff themselves cannot explain how the automated systems work; for those on the receiving end, the power to challenge these decisions is all but non-existent.


The use of facial recognition and emotion detection in recruitment reinforces biases which reflect existing power imbalances. There is evidence that emotion recognition software categorises the faces and expressions of people of colour more negatively than those of white people; people with disabilities may be screened out if they do not conform to ‘normal’ ways of expressing themselves; and if a model is trained on existing examples of ‘successful’ hires, those examples are likely to be skewed towards groups who already enjoy a more privileged position through their race, sex or disability status. But if the company you want to work for (or the job you are obliged to apply for through poverty or benefit conditions) uses these technologies, you have no power to opt out.

These are just three examples of the ways in which new technologies reinforce existing power structures, threaten rights and undermine autonomy for workers and for those interacting with benefits systems. Until we acknowledge these existing power dynamics, and the role of technologies in strengthening them, we are likely to see the same harms repeated and further injustices done.

Enjoyed this? Read more from my blog or other publications.

(c) Anna Dent 2021. I provide research, writing, expert opinion and project development in Good Work and the Future of Work / In-Work Poverty and Progression / Welfare benefits / Ethical technology / Skills / Inclusive growth
