AI, Employment and Ethics


In this post I aim to describe some of the ethical issues around the use of algorithms to make or assist decisions in recruitment and for managing gig employment.

When discussing ethics we are trying to deduce the right thing to do (not the permissible thing, or the profitable thing, or the legal thing).

AI in recruitment

Consider Usha, a software engineer specialising in machine learning. Let’s imagine, for the purposes of example, that she is the most qualified and experienced person in the applicant pool for an advertised position, and would in fact perform the best in the role out of the entire pool. In her application:

  • She uses detectably ‘female’ language in her resume
  • She notes she didn’t start coding until the age of 18
  • She was the founding organiser of LGBTQ on campus

She also has a non-Western name, and her dark skin tone made it difficult for an AI system to register her affect during a recorded video interview with a chatbot.

Faced with this data, an AI recruitment algorithm screened her out. She didn’t get the job; she didn’t even get a face-to-face interview. Given the circumstances, many of us might think this was wrong.
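To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of scoring a screening algorithm might apply. The feature names, weights and threshold below are invented purely for illustration; they are not taken from any real recruitment product.

```python
# Hypothetical sketch of a resume-screening score. A model trained on
# historical hiring decisions can end up penalising proxies for gender,
# ethnicity or orientation even though no protected attribute appears
# as an explicit input. All values are invented for illustration.

CANDIDATE = {
    "years_ml_experience": 9,
    "started_coding_before_18": False,   # proxy that correlates with gender
    "gendered_language_score": 0.8,      # inferred from resume wording
    "video_affect_confidence": 0.2,      # low: affect model failed on darker skin
}

# Weights "learned" from past (biased) hiring decisions.
WEIGHTS = {
    "years_ml_experience": 0.6,
    "started_coding_before_18": 1.5,
    "gendered_language_score": -1.2,
    "video_affect_confidence": 2.0,
}

def screening_score(candidate: dict) -> float:
    """Linear score over candidate features (illustrative only)."""
    return sum(WEIGHTS[k] * float(candidate[k]) for k in WEIGHTS)

if __name__ == "__main__":
    score = screening_score(CANDIDATE)
    print(f"score = {score:.2f}")                  # 4.84
    print("advance to interview:", score >= 6.0)   # False: screened out
```

The most qualified applicant never reaches a human, not because any rule names her gender or ethnicity, but because the learned weights reward proxies for the people previously hired.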

Perhaps it is wrong because some principles such as fairness, or like treatment of like, or equality of opportunity have been transgressed. Overall, an injustice seems to have occurred.

Algorithmic Injustice

In his book Future Politics, Jamie Susskind lays out the various ways in which an algorithm could lead to unjust outcomes.

  • Data-based injustice: where problematic, biased or incomplete data leads the algorithm to decide unfairly
  • Rule-based injustice
    • Overt: the algorithm contains explicit rules discriminating against some people, e.g. discriminating against people on the basis of sexual orientation.
    • Implicit: the algorithm discriminates systematically against some kinds of people due to correlations in the data, e.g. a rule penalising those who didn’t start learning to code until after the age of 18 may disadvantage women due to social and cultural norms (see the sketch after this list).
  • The neutrality fallacy: equal treatment for all people can propagate entrenched bias and injustice in our institutions and society.
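
A small sketch can make the ‘implicit’ case concrete. In the hypothetical snippet below, a facially neutral rule (“started coding before 18”) is applied to an invented applicant pool and its pass rate is compared across genders; the data and the rule are made up purely to show how disparate impact emerges without any explicit mention of a protected attribute.

```python
# Minimal sketch of implicit rule-based injustice: a rule that never
# mentions gender still produces a gendered outcome because early access
# to coding is itself socially patterned. All data is invented.

applicants = [
    {"gender": "F", "started_before_18": False},
    {"gender": "F", "started_before_18": False},
    {"gender": "F", "started_before_18": True},
    {"gender": "M", "started_before_18": True},
    {"gender": "M", "started_before_18": True},
    {"gender": "M", "started_before_18": False},
]

def passes_neutral_rule(a: dict) -> bool:
    return a["started_before_18"]

def pass_rate(group: str) -> float:
    members = [a for a in applicants if a["gender"] == group]
    return sum(passes_neutral_rule(a) for a in members) / len(members)

if __name__ == "__main__":
    print(f"pass rate (F): {pass_rate('F'):.0%}")   # 33%
    print(f"pass rate (M): {pass_rate('M'):.0%}")   # 67%
    # A gap like this is the disparate impact that the
    # "neutrality fallacy" overlooks.
```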

Susskind notes that most algorithmic injustice can be traced back to actions or omissions of people.

Human Rights

Another way of formulating algorithmic ethics is in terms of human rights. In this case, rather than look to the outcome of a process to decide whether it was just or not, we can look to the process itself, and ask whether the applicant’s human rights have been respected.

In a paper titled “Artificial Intelligence & Human Rights: Opportunities & Risks”, the Berkman Klein Center for Internet & Society at Harvard concludes that the following rights could be transgressed by the use of algorithms for recruitment. The rights to:

  • Freedom from discrimination
  • Privacy
  • Freedom of opinion, expression and information
  • Peaceful assembly and association
  • Desirable work

But it might be the case that ongoing ethical discourse could lead us to new rights in the age of AI, perhaps:

  • The right to not be measured or manipulated?
  • The right to human interaction?

Ethical Systems

The foundations for reasoning about rights transgressions or whether outcomes are just or unjust are found in ethical systems. Such systems have been constructed and debated by philosophers for centuries.

The Institute of Electrical and Electronics Engineers (IEEE) recognises this, and in their (300 page!) report ‘Ethically Aligned Design’ they identify and describe a number of Western and non-Western ethical systems that might underpin considerations of algorithmic ethics. The IEEE notes that ethical systems lead us to general principles, and general principles define imperatives. The IEEE lists and explains eight general principles for ethically aligned design.

One example of an ethical system is what might broadly be considered the ‘consequentialist’ system, which determines right and wrong according to consequences. A popular version of this approach is utilitarianism, the ethical approach that seeks to maximise happiness. As an example, under utilitarianism, affirmative action can be good for society as a whole: it enriches the experience of college students, enhances representation in public institutions, and makes everyone happier in the end. This approach tends to ensure that we act ‘for the greatest good’.
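
The aggregation at the heart of this approach can be made concrete with a toy sketch; the utility numbers below are invented purely for illustration. Note how the sum can favour an option that leaves a few people much worse off, which is where the deontological view described next pushes back.

```python
# A toy version of the utilitarian calculus: add up everyone's change in
# happiness and prefer the option with the larger total. Invented numbers.

def total_utility(changes: list[float]) -> float:
    """Aggregate welfare under a naive utilitarian sum."""
    return sum(changes)

# Option A: small gains for very many people, large losses for a few.
option_a = [+0.25] * 1000 + [-20.0] * 5
# Option B: the status quo, nobody gains or loses.
option_b = [0.0] * 1005

if __name__ == "__main__":
    print(total_utility(option_a))   # 150.0 -> preferred by the sum
    print(total_utility(option_b))   # 0.0
```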

Another example of an ethical system is deontology, or ‘rules-based’ ethics. Kantianism is a version of deontology which argues that ethical imperatives come from within us as human beings, and that the right thing to do boils down to ensuring that we treat all people with dignity, never as a mere means to an end but always as ends in themselves. This approach tends to lead to the formulation of rights and duties. For example, it would be wrong to force someone to work without pay (slavery) because this fails to respect their freedom, autonomy, humanity and ultimately dignity, irrespective of the outcomes.

In their report, where the IEEE deduces these general principles of ethically aligned design from a foundation of such ethical systems, the authors note that “the uncritical use of AI in the workplace, and its impact on employee-employer relations, is of utmost concern due to the high chance of error and biased outcome.”

The IEEE approach is not the only published declaration of ethical principles relevant to algorithmic decision making. The Berkman Klein Center has catalogued and compared a number of these from public and private institutions.

The Gig Economy

Let’s turn now to gig work. Think of Rita, a gig worker for a hypothetical home-cleaning business that operates much like Uber. Rita’s work is monitored by GPS to ensure she takes the most direct route to each job; she’s not sure whether the company tracks her when she’s not working. Only time spent cleaning each house is paid, and the algorithm keeps very tight tabs on her activities. Rita gets warning notifications if she deviates from the prescribed route, such as when she needs to pick her son up from school and drop him at the babysitter’s. She gets ratings from clients, but one woman, a historical ethnic rival, always rates her low even when she does a good job; the algorithm warns Rita that it’s her last chance to do better. Rita stresses about the algorithm, feels constantly anxious and enters a depression. She misses work, has no sick pay to draw upon, and spirals downward.
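To see why context matters, consider a hypothetical sketch of the kind of rule an algorithmic manager might apply to Rita. The thresholds, fields and ‘strike’ logic below are invented for illustration; the point is simply that the rule has no way to represent a school pick-up or a spiteful client.

```python
# Hypothetical "strike" logic for an algorithmic manager. Invented
# thresholds and fields; note there is no input for context.

from dataclasses import dataclass

@dataclass
class Shift:
    route_deviation_km: float   # GPS-measured deviation from prescribed route
    client_rating: int          # 1-5 stars from the client

def warnings_for(shift: Shift, prior_strikes: int) -> tuple[int, str]:
    strikes = prior_strikes
    if shift.route_deviation_km > 1.0:
        strikes += 1            # no field for "picked up son from school"
    if shift.client_rating <= 2:
        strikes += 1            # no field for "this client always rates low"
    if strikes >= 3:
        return strikes, "FINAL WARNING: account at risk of deactivation"
    if strikes > prior_strikes:
        return strikes, "Warning: performance issue recorded"
    return strikes, "OK"

if __name__ == "__main__":
    strikes, message = warnings_for(
        Shift(route_deviation_km=2.4, client_rating=2), prior_strikes=1
    )
    print(strikes, message)     # 3 FINAL WARNING: account at risk of deactivation
```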

We may conceive of such algorithms as ‘mental whips’ and feel concerned that when acting punitively they may be taking data out of context. Furthermore, the ethically appropriate response from the algorithm to, say, an older worker who falls ill might well be different from that to a wayward youth who slacks off. Justice may not be served by equal treatment.

Phoebe Moore has noted that “[such] human resource tool[s] could expose workers to heightened structural, physical and psychosocial risks and stress”, and this is worse if workers feel disempowered.

Surveillance and the Panopticon

Many of the issues around gig management algorithms boil down to issues of surveillance.

Historical surveillance had limitations (e.g. a private detective could only investigate one employee at a time). However, with technological advances we can consider surveillance taken to its purest extreme. This is the situation Jeremy Bentham imagined with his panopticon: a perfect surveillance arrangement in which one guard could observe all prisoners in a prison (or workers in a factory, for that matter) at all times, without being seen themselves. As soon as workers know this is the situation, their behaviour changes. When a machine is surveilling people, people serve the machine, rather than machines serving people.

The panopticon is problematic for a number of reasons. Firstly, there is an unfounded assumption of innate shirking. There may be no right to disconnect (especially if the employer performs 24/7 surveillance of social media).

As with Rita, there are risks that surveillance data can be taken out of context. We also know that the greater the surveillance, the greater the demand for sanctions on apparent transgressions.

Finally, the system lacks the countermeasure of ‘equiveillance’, which would allow the working individual to construct their own case from evidence they gather themselves, rather than merely having access to surveillance data that could possibly incriminate them.

Ethically we must ask, who is this situation benefitting? Employment should be a reciprocal arrangement of benefit. But with panopticon-like management of workers, it seems that some interests are held above those of others. Dignity may not be respected and workers can become unhappy. It could be argued that Rita is not being treated as an end in herself, but only as a mere means.

It’s true that Rita chose to work for the platform and, by choosing surveillance, has willingly forgone privacy. But perhaps she shouldn’t be allowed to, because privacy has group-level benefits. A lack of privacy suppresses critical thought, and critical thought is necessary to form alliances and hold those who exploit workers to account.

As a society we are presently making a big deal about consumer privacy, but what about employee privacy and protections? Ethics demands that we examine these discrepancies.

We might want to ensure that humans don’t become a resource for machines, where the power relationship is reversed and human behaviour (like Rita’s) is triggered by machine activity rather than the other way around. The risk is not that robots will take our jobs, but that we will become the robots, living ultra-efficient but dehumanised lives.

Ghost Work author Mary Gray says, “[one] problem is that the [algorithmic gig] work conditions don’t recognize how important the person is to that process. It diminishes their work and really creates work conditions that are unsustainable.” This argument contains both consequentialist and deontological points against overzealous algorithmic management of people.

Is there a duty to use algorithms?

I’ve called into question some of the possible uses of algorithms in recruitment and in managing the gig economy. Potential injustice seems to lie in wait everywhere: in bad data, implicitly unjust rules, and even neutral rules. But when are algorithms justified? What if customer satisfaction really is ‘up 13%’? Is this an argument for preserving the greatest happiness at the expense of a few workers? Or perhaps techniques for ‘ethically aligned design’ could lead to systems that overcome the ‘discriminatory intent’ in people and also enhance justice (dignity) in the process.

“We can tweak data and algorithms until we can remove the bias. We can’t do that with a human being,” – Frida Polli, CEO Pymetrics.
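
What such a ‘tweak’ might look like, in the simplest possible case, is sketched below: measure the gap in selection rates between two groups, remove the proxy feature driving it, and measure again. The data and feature names are invented, and real bias audits are far more involved than this.

```python
# Toy "before and after" bias check. Invented pool and features; the
# "tweak" is simply dropping a proxy feature from the decision rule.

def selection_rate(pool, group, decide):
    members = [c for c in pool if c["group"] == group]
    return sum(decide(c) for c in members) / len(members)

pool = [
    {"group": "A", "experience": 8, "started_before_18": True},
    {"group": "A", "experience": 5, "started_before_18": True},
    {"group": "B", "experience": 9, "started_before_18": False},
    {"group": "B", "experience": 5, "started_before_18": False},
]

biased  = lambda c: c["experience"] >= 6 and c["started_before_18"]
tweaked = lambda c: c["experience"] >= 6       # proxy feature removed

for name, decide in [("before tweak", biased), ("after tweak", tweaked)]:
    gap = selection_rate(pool, "A", decide) - selection_rate(pool, "B", decide)
    print(f"{name}: selection-rate gap = {gap:+.0%}")
# before tweak: +50%; after tweak: +0%
```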

However, the duty to respect human dignity may require some limitations on the functions and capabilities of AI in recruitment and the management of gig work. We need to examine what those limitations should be.

Australia’s Chief Scientist, Dr Alan Finkel, has proposed the ‘Turing Certificate’, a recognised mark for consumer technologies that would indicate whether the technology adheres to certain ethical standards. This discussion should be ongoing.

Finally, the irony that we implement oversight and regulatory force to combat the use of surveillance and algorithmic force is not lost on me…

Author: Adapt Research

