UBI = freedom from welfare surveillance?

One of the most enduring framings of universal basic income (UBI) is as freedom-enhancing: freedom from poverty, from poor-quality work and exploitative employers, perhaps from work entirely, and freedom to shape our lives as we truly desire.

This extends to freedom from punitive social security and benefits systems. For decades the UK has been raising the level of conditionality and scrutiny applied to those claiming benefits linked to unemployment: proving that your health condition or disability is severe enough to make you eligible for ESA, showing that you have searched for work for the required number of hours or applied for the mandated number of jobs, justifying why you cannot work more than part-time hours. All of these hoops (notably, all of them are for the applicant or claimant to jump through) are stressful, bureaucratic, dehumanising and, to varying degrees, ineffective. But while they are not entirely transparent or comprehensible, the decision-making processes can broadly be explained and, crucially, involve human beings.

More concerning is the introduction of big-data-driven and automated or algorithmic decision-making in social security and welfare systems worldwide, which takes intrusiveness and opacity to new levels. An automated jobseeker profiling system in Poland has been criticised for being opaque and unfair, and for offering jobseekers and staff very limited opportunity to understand or challenge decisions. If staff cannot properly understand how people are categorised, there is a real risk of stereotyping and discrimination going unchecked. In Sweden, automated decision-making drawing on data from multiple sources appears to be primarily concerned with reducing the number of people on benefit rolls, without claimants being properly informed about how decisions are made. Without transparency, claimants may be left worse off with no way to challenge decisions. These decisions may also rest on inappropriate data or flawed assumptions, but there is no way to tell.

In Belgium, the Flanders employment service has been looking into developing a 'recommendation' system, analysing individuals' data to come up with the most suitable jobs. This risks stereotyping jobseekers and reproducing existing biases and inequalities, for example showing women lower-paid jobs than men, whilst obscuring the methods by which the recommendations are made. The service has also looked into analysing claimants' online behaviour to gauge whether they are searching hard enough for work; if not, they could be penalised. There is an obvious risk of penalising people unfairly and unequally, as well as potential privacy harms. In the Netherlands, a national algorithmic fraud detection system has already been halted by the courts on the grounds that it unfairly targeted low-income communities.

In a universal, unconditional benefit system like UBI, with no behavioural conditions attached, no mandated job-seeking, and no need to reduce claimant numbers, none of the examples just described would exist. If designed with data privacy and autonomy in mind, a UBI could require very little beyond a way to verify someone’s identity (easily done through existing means) and, depending on the model, the existing tax system.

A UBI could free us from the increasing digital surveillance being introduced into welfare systems worldwide, and from the associated lack of accountability and transparency. Since automated and algorithmic decision-making tends to be biased against those who are already discriminated against or otherwise disadvantaged, UBI could also make a significant contribution to fairer outcomes from digital and data-driven systems.

(c) Anna Dent 2020. I provide social research, policy analysis and development, writing and expert opinion, and project development in Good Work and the Future of Work / In-Work Poverty and Progression / Welfare benefits / Ethical technology / Skills / Inclusive growth
