An Australian politician’s take on populism and existential risks


In this post, I summarise and review What’s the Worst That Could Happen? Existential Risks and Extreme Politics by Australian politician Andrew Leigh (published 9 November 2021).

TLDR the TLDR: Populism amplifies x-risks

TLDR: Andrew Leigh argues that the short-sighted politics of populism has amplified existential risk. Populism is driven by problems of jobs, snobs, race, pace and luck. We can control populism by strengthening democratic systems and practising stoicism. However, Leigh says little about addressing the causes of populism itself. 

Intro and purpose of the book 

Australian Labor Party representative Andrew Leigh used his time during the Covid-19 disruptions to write a short book on existential risks to humanity (x-risk).

In the book, Leigh leverages longtermist thinking, describes the various x-risks (e.g. extreme pandemics, climate change, nuclear war, artificial superintelligence, environmental degradation, asteroid/comet impact), and couples this with observations about the rise of populist politics.

He concludes that not only does populism raise the probability of totalitarian dystopia, but it undermines our ability to prevent and mitigate x-risks generally. 

In what follows I outline his approach chapter-by-chapter and conclude with thoughts of my own. 

Why the future matters

The first chapter starts from the premise that a future utopia is inevitable if humanity survives long enough. This claim emerges by extrapolating a historical trajectory from our tough Iron Age existence through to modern comforts, and then beyond. Leigh rejects discounting the value of future lives: ‘discounting at a rate of 5 percent implies that Christopher Columbus is worth more than all 8 billion people alive today’ (p.8). Similarly, our lives today are no more important than those of future generations, and we have a responsibility to those generations. 

Leigh notes, however, that our survival is not a given, and points out that an individual’s risk of dying in an extinction event is higher than the risk from many more familiar causes. He provides psychological (the availability heuristic) and economic (campaign contributions) reasons why policy is biased against preventing human extinction. 

This situation is exacerbated by the rise of political populism. Leigh defines populists as those who claim to represent ‘the people’ in a challenge to ‘the elites’ who are painted as dishonest or corrupt. Populists can represent the left or right of the political spectrum, have little respect for experts, and tend to champion immediate priorities. They are ‘drivers distracted by back seat squabbles’ (p.14). 

Leigh’s use of accessible metaphors involving bar fights, stolen wallets, and football makes this introduction simultaneously an easy read and the most Australian work on x-risk to date. 

The next five chapters survey the threats from biorisks, climate change, nuclear war and artificial superintelligence, plus a chapter on probability and risk. Leigh describes each risk and canvasses a suite of standardly recommended policy options specific to each. For those not familiar with this background, the chapters are an easy introduction to x-risks. 

‘For each of the existential risks we face, there are sensible approaches that could curtail the dangers. For all the risks we face, a better politics will lead to a safer world’ (p.15). 

Biothreats

Biological threats, both naturally occurring and human-created, are the focus of Chapter 2. Leigh uses historical examples (plague, cholera, pandemics, biological attacks), tabletop simulations (Clade X, Event 201), and popular culture: ‘someone doesn’t have to weaponize the bird flu – the birds are already doing that,’ says Laurence Fishburne’s character in Contagion (p.22). Leigh calls out biological weapons programmes that skirted the law (the Soviet Union, Saddam Hussein’s Iraq) and acquaints the reader with the risks of synthetic biology. 

Repeatedly, Leigh refers to popular fiction to make his points. Anecdotally, biotech entrepreneur Craig Venter recommended the novel The Cobra Event to Bill Clinton, and this influenced biorisk policy.

Leigh highlights Scott Galloway’s observation that since more people die from disease than war, it might be reasonable to trade the CDC’s $7 billion budget for the Pentagon’s $700 billion budget (this made me reflect on general criticism of the military-industrial complex as a driver of runaway military spending, based around lobbying and vested interests. One possible future sees Boeing and Lockheed Martin coaxed into healthcare technology, generating the same revenues but with new focus). The chapter ends with a catch-all summary of previously published strategies for minimizing biothreats. 

Climate change

Chapter 3 surveys the issue of climate change. There is only so much that can be said in 21 pages (compare the 40,000 people who attended COP26 recently, and the gigabytes of documents flying around). Leigh notes that warming of 3–4 degrees C is likely by 2100, but, importantly, modelling suggests a 10 percent probability of 6 degrees C. If various ‘tipping points’ and ‘carbon bombs’ (pp.38–39) lead us to this unlikely but possible destination, then things could get very bad. 

In this chapter Leigh really starts to escalate his hints that populist politics is exacerbating x-risk. He cites the actions of Donald Trump and Jair Bolsonaro. However, Leigh also notes that mitigating climate change doesn’t hinge on longtermism: there are here-and-now economic reasons to act, as well as common ground across the political spectrum, such as tree planting and efficiency standards. 

Nuclear weapons

Chapter 4 conveys the precariousness of the nuclear stalemate. We read the stories of close calls such as the Cuban missile crisis and the role of Vasili Arkhipov in averting nuclear disaster. Tales from Dr Strangelove introduce us to mutually assured destruction (MAD) and the Russian ‘Dead Hand’ retaliatory mechanism.

Leigh argues that the growing number of nuclear powers makes conflict mathematically more likely (presumably because the number of potential adversary pairs grows with every additional nuclear state). Nuclear conflict could also begin if a terrorist nuclear attack gave the appearance that a nuclear power had launched a strike. These scenarios could lead to nuclear winter and agricultural failure. Again, the actions of populists such as Trump (withdrawing from the Iran nuclear deal) tend towards further destabilisation. 

A survey of actions to minimize nuclear risk follows, with Leigh advocating a ‘Manhattan Project II’ (p.72) to denuclearize the world, though there is no mention of the economics of the nuclear weapons industry and how vested economic interests might sustain the size of arsenals. Leigh wittily notes that, ‘By sending Dead Hand to the grave, Russia would make the planet a safer place’ (p.71). 

Artificial Intelligence

The chapter on AI follows a fairly standard exposition of the risks that superintelligence poses, hooking the reader into the looming power of AI with further Aussie-as statements such as: ‘playing chess and Go against machines, humans have about the same chance of victory as a regular guy might have of winning a boxing match against Tyson Fury’ (p.76). We read about the ‘intelligence explosion’, the ‘control problem’, and the ‘King Midas’ problem. True to form as a former professor of economics and now politician, Leigh notes that ‘the problem of encoding altruism into a computer is akin to the challenge of writing a watertight tax code’ (p.78). 

Among several possible strategies for mitigating the risk of AI, Leigh introduces Stuart Russell’s notion that programmers should focus on ‘building computers that are observant, humble and altruistic’ (p.84). Such machines would consult humans to learn what we want. As I’ve noted in a previous post, the problem here seems to be engineering the humans, not the machines. In the end, Leigh suggests we may need enforceable treaties to ensure the embedding of human values, the banning of autonomous killing, and the promotion of collaboration over competition to manage the emergence of great intelligence, not if, but when, it occurs. 

What are the odds?

Chapter 6 consists of a brief survey of other x-risks, as well as analysis of their probabilities when compared to a range of common risks. It is in this chapter that I felt like Leigh made two mistakes. 

First, he expresses the risks discussed in Chapters 2 to 5, along with others such as comet/asteroid impact, supervolcanic eruption, and anthropogenic degradation of the environment, as probabilities across a century, but then compares these to the reader’s risk of dying from various everyday causes in the next year. This comparison of apples and oranges is flawed for two reasons: 

  1. The probability of the anthropogenic risks, in particular, is non-stationary. For example, the risk of AI killing us all is not 1 in 1,000 next year (as 1 in 10 ‘this century’ would imply). It is far lower at present, but likely to rise far higher, per annum, across time (until we control it or succumb to it). This reasoning does not necessarily apply to the natural risks, although see posts such as this one arguing that volcanic risk is rising.
  2. Psychologically, it makes more sense to compare per century risks of catastrophe with per lifetime rather than per annum risks to personal wellbeing. 

The second mistake I felt Leigh made is to introduce a hodgepodge of risks that have a greater probability, in his estimation (which is based on Toby Ord’s), than some of the risks to which he has devoted entire preceding chapters. For example, cascading ecosystem failures appear to have a higher probability of causing human extinction than nuclear war. Arguably, there is some internal inconsistency of communication and emphasis in the book (though I certainly recognise the nuclear threat as major). 

As an aside, I’ve always found it interesting that the number of humans killed by ‘all natural disasters’ (e.g. flood, storm, earthquake, volcano, tsunami), as reported by Our World in Data, is approximately 60,000 per year, whereas crude annualization of the x-risk probabilities gestured toward by Leigh (though not made explicit in the book) suggests that the following risks all exceed all natural disasters combined in expected fatalities: unaligned AI (8 million annualised deaths in expectation), engineered pandemic (2.7 million), nuclear war (80,000), climate change (80,000), environmental damage (80,000). This seems to be another CDC vs Department of Defense situation. Where, really, should we be funnelling our resources?
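
For transparency, here is the back-of-the-envelope arithmetic behind those figures, as a minimal sketch in Python. It assumes Toby Ord’s rough per-century probabilities (on which Leigh draws) and a static population of 8 billion, and it deliberately ignores the non-stationarity objection raised above.

```python
# Back-of-the-envelope annualization of per-century catastrophe probabilities.
# Assumes Toby Ord's rough estimates from The Precipice and a static population
# of 8 billion, and deliberately ignores the non-stationarity objection above.

POPULATION = 8_000_000_000
YEARS_PER_CENTURY = 100

# Per-century probability of existential catastrophe (Ord's rough estimates).
risk_per_century = {
    "unaligned AI": 1 / 10,
    "engineered pandemic": 1 / 30,
    "nuclear war": 1 / 1000,
    "climate change": 1 / 1000,
    "environmental damage": 1 / 1000,
}

for risk, p_century in risk_per_century.items():
    p_year = p_century / YEARS_PER_CENTURY        # crude: assumes a flat hazard rate
    expected_deaths = p_year * POPULATION         # expectation if the catastrophe kills everyone
    print(f"{risk}: ~{expected_deaths:,.0f} expected deaths per year")

# Compare with roughly 60,000 deaths per year from all natural disasters
# combined (Our World in Data).
```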

Regardless of the rhetorical method, Leigh makes it clear that some unlikely risks, such as asteroid/comet impact, are already receiving significant government attention. This, coupled with the argument I’ve just outlined, suggests that national risk assessment (NRA) and national risk register (NRR) methodologies may not be fit for purpose.

The crux of the book

Leigh next turns to populism and its relationship to totalitarianism, a threat recognised by the x-risk community because it could permanently curtail the potential of humanity. 

We should probably pay more attention to these two chapters, and the penultimate one on democratic systems. This is because Leigh is a politician with over a decade of experience in the Australian House of Representatives. Therefore, we might infer some insider insight into the machinations of power and government, and it would be wise to grant him the floor to express his concerns. 

The populist risk

In the preceding chapter, Leigh began to hint at the risk of totalitarianism, but it is here in Chapter 7, the longest chapter, that this threat comes to the foreground. Reiterating the rise of populist politics in recent years, we are reacquainted with the populist’s quest to foster a conflict between ‘a pure mass of people and a vile elite’ (p.103). Populists can arise from all sides of the political spectrum. Conceptually there are four quadrants: 

  • Internationalist / Left (equality): international egalitarians (Obama, Biden)
  • Internationalist / Right (liberty): international libertarians (Romney, Bush)
  • Populist / Left (equality): populist egalitarians (Sanders, Ocasio-Cortez)
  • Populist / Right (liberty): populist libertarians (Trump, Palin)

Leigh claims that certain ideas have historically been particularly contagious; these include communism, capitalism, and populism. Furthermore, the world has become more populist in recent decades. 

According to Leigh, there are five causes of this recent resurgence in populist politics:

  1. Jobs: low quality employment, work and wage insecurity
  2. Snobs: party elites who don’t take the populist threat seriously 
  3. Race: fear of difference, impressionable masses responding to racist rhetoric 
  4. Pace: rapid, disorienting, technological and cultural transformation
  5. Luck: some elections are very close and could have gone either way

On the issue of ‘snobs’, it certainly appears to me that elections have been stolen, though not in the way Trump claims: they have been stolen by populists, from inflexible snobs! See also Thomas Piketty’s arguments about the ‘Brahmin Left’ and ‘Merchant Right’: left-wing politicians now appeal to an educated class rather than their traditional union base. 

With respect to ‘luck’, I’m left wondering if this is a one-way valve. Do many populists try their hand, but only a few succeed by luck? Once they have power, however, they tend to rig the system to maintain their control. 

Populism allegedly poses a threat to longtermism (and by implication x-risk mitigation), because populists:

  • reject strong science
  • reject effective institutions
  • reject global engagement
  • reject a sense of cooperation

However, the fact that populists are ‘anti’ these four things is the reason for their electoral appeal! Leigh proceeds to provide numerous examples of populists exhibiting these traits. Anti-internationalism in particular undermines hope of addressing x-risks, which by their nature may require global cooperation. Leigh argues, therefore, that the risk populism poses to our future is greater than the risk it poses now. 

To reiterate, Leigh is arguing that five issues (jobs, snobs, race, pace, luck) have led to the rise in populism and that populism increases the risk from x-risks, due to its short-term focus and ‘anti’ emphasis. 

Totalitarianism – the death of democracy

In Chapter 8, we see the potential horror of widespread totalitarianism, which might emerge from the increasing hold that populism has around the world. Leigh says that widespread totalitarianism is ‘not among [Toby] Ord’s top concerns, but does rank in mine’ (p.96). However, to me there is some equivocation over ‘totalitarianism’ between Ord’s The Precipice and the present book: Ord focuses on x-risk (permanently curtailing the future of humanity), whereas Leigh is more concerned with terrible national or regional states of affairs. 

We are led through the histories of Adolf Hitler, Hugo Chávez, and Ferdinand Marcos in a search for commonalities. What we discover is that these populists initially came to power through fair democratic processes, before degrading their democracies into authoritarian regimes. More recently, populist outsiders in Hungary and Turkey similarly won elections (due to jobs, snobs, race, pace, and luck) and then used their acquired power to attack institutions, often in imperceptible steps. 

Leigh catalogues ‘seven deadly sins’ (p.138) indicating that a leader is degrading democracy into an authoritarian regime. However, the inherent tension between the need for totalitarianism to be global if it is to ‘permanently curtail the potential of humanity’ and the inward-looking nationalism of populists seems to preclude populism as a pathway to a global totalitarian x-risk, although things could still get very bad, and populism can no doubt amplify other risks. 

Fixing Politics

Leigh’s roadmap to fixing politics is probably the most disappointing aspect of the book. Having identified populism as a driver of x-risk, and having identified five causes of populism, he then focuses his remedy on building stronger democracies (presumably to resist degradation by populists) rather than on the underlying causes. Populists win elections in strong democracies and then re-jig the rules to suit themselves, so it is not clear how creating better rules is the ultimate solution. 

Regardless, we are offered a suite of sensible democratic reforms: tinkering with and strengthening democracies, particularly where the necessary continuous maintenance has ceased (the last US constitutional amendment favouring democracy was ratified 50 years ago). 

Leigh favours mass participation in elections, promoting the compulsory Australian system. He favours convenient voting methods, independent redistricting, and reform of electoral-college systems where the popular vote may not determine who wins. Leigh also favours controls on the export of technologies that can sustain totalitarianism, such as facial recognition systems. 

Many of Leigh’s proposals are positive steps, and I support a number of them, but none cuts to the heart of the issue, which is the need to address jobs, snobs, race, pace, and luck so that populists cannot win free and fair elections in countries with strong democracies. Only one paragraph, on pp.153–4 (of 167), links the proposed reforms to these five causes, and only two of the causes are addressed. 

Finale

A final chapter summarises the argument with reference to risk and insurance. X-risks are not a concern because they are likely; they are a concern because they would be unbearable. This is why we need insurance against them. To overcome the tendency for politicians to focus on the likely rather than the devastating, we must resist populism, strengthen democracy, and practise more wisdom. In Leigh’s mind, these traits are synonymous with the philosophy of Stoicism and the political philosophy of John Rawls. We need courage, prudence, justice, and moderation. 

‘A stoic approach to politics means spending less time caught up in the cycle of outrage and devoting more energy to making an enduring difference’ (p.164)

Perhaps it is through Stoicism that we address jobs, snobs, race, pace, and luck. 

Conclusion 

I enjoyed this book, but mostly because it was such an accessible summary of things I’d already learned during seven years’ engagement with x-risk content and the x-risk community. The book provides an accessible, entertaining introduction to x-risks for those not already immersed in the field. It benefitted from the simple style, amusing metaphors and Australianisms. 

On the other hand, the solutions proposed, in my view, don’t really address the problems identified. We are offered an ambulance at the bottom of the cliff. Or perhaps a more Aussie take might be that we are advised to piss on our houses to protect them from a bushfire. 

However, we often imagine that politicians are unreceptive to long-term issues. What’s the Worst That Could Happen? demonstrates that there are representatives sympathetic to the issues of x-risk and future generations. We can try to leverage these politicians, amplify their voices, and connect such individuals with x-risk academic work via policy work on x-risks. Relevant examples of such work include: 

I personally also think it is important to continue to develop arguments that demonstrate why x-risk is a priority here and now, not merely through a longtermist lens. Then we can cast the net as widely as possible and convince those who will never be focused on the long term. Arguments that highlight flaws in national risk assessment and national risk register processes, and remedy these so that x-risk is rationally included within the existing scope of these devices, are valuable, as are cost-effectiveness analyses grounded in the same short-termism standardly deployed in government. My back-of-the-envelope calculations indicate this is an approach ripe for elaboration. 

However, the work should extend beyond specific policies addressing x-risks separately or in combination, to ways we can strengthen democracies and, perhaps more importantly, reduce the likelihood of populist leaders emerging in the first place by addressing the issues of jobs, snobs, race, pace, and the role of luck. I was disappointed that What’s the Worst That Could Happen? didn’t finish this analysis, yet to be fair, this is a book you can read in a day, and it is a worthy introduction to x-risk for the uninitiated. 

Future work and funding

Australia has a relatively new ‘Commission for the Human Future’ and I have been in favour of similar initiatives in New Zealand.

Earlier in 2021 I unsuccessfully applied for funding to drive such an initiative and you can read my application to the Effective Altruism Infrastructure Fund here.

I’m very interested to talk with anyone who might like to collaborate on, or fund, such a project aimed at understanding and reducing x-risk from a New Zealand perspective.

To support more x-risk content on this blog, please consider donating.


Optimism about the Future of Humanity: a conference on existential risks

In 1826 Mary Shelley crafted a vision of humanity’s end in ‘The Last Man’, depicting a world that persists, indifferent to the demise of our species. The end came at the hands of a pandemic, spread by the human technologies of trade and news.

Since the construction of nuclear weapons in 1945, humanity has wielded technological power of extreme destruction, and the expert consensus is that the greatest threat to humans is humans themselves.

But given that we are the threat, there is also cause for much hope. Humans are self-reflexive and can change behaviour. Technology has raised the standard of living and human wellbeing worldwide, has provided the tools to escape the Covid-19 pandemic, and promises the foundation for a flourishing future.

Provided we govern and wield technology with appropriate wisdom.

The Existential Risk Observatory, founded in 2021 in the Netherlands, is the latest in a series of global institutions concerned for humanity’s future and with a mission to ensure a thriving global society immune from existential threats.

Driven by optimism for our collective future, the Observatory convened a conference on existential risks and invited speakers from around the world.

I had the privilege of presenting my take on biological threats, drawing on research I’d undertaken in conjunction with Nick Wilson of the University of Otago, and others, prior to Covid-19, as well as lessons from New Zealand’s experience with Covid-19, and international research on biological threats.

You can watch my presentation by clicking this link (Session two, talk from 25:10, Q&A from 1:08:55).

Below, I’ve provided the full menu of talks at the conference, which includes:

  • artificial intelligence
  • climate change
  • nuclear weapons
  • biological threats
  • policy approaches

Existential Risk Observatory (Netherlands) Conference on Existential Risks

Session one (7 October 2021)

0:00 – Introduction to the Conference
17:45 – Power Hour (general discussions of the conference’s themes)
1:20:45 – Climate Change – Ingmar Rentzhog (Founder/CEO, We Don’t Have Time)
2:47:34 – Existential Risks – Simon Friederich (University of Groningen)
3:48:00 – Artificial Intelligence – Roman Yampolskiy (University of Louisville)

Session two (8 October 2021)

0:00 – Introduction to Session Two
25:10 – Biological Risks – Matt Boyd (Adapt Research Ltd, New Zealand)
1:26:15 – Policy – Rumtin Sepasspour (Centre for the Study of Existential Risk, Cambridge)
2:52:25 – Nuclear Weapons – Susi Snyder (PAX, Nobel Peace Laureate)
3:56:29 – Artificial Intelligence Policy – Claire Boine (Harvard & Future of Life Institute)

As Rumtin Sepasspour (Research Affiliate, Cambridge University) noted in his presentation, governments are key stakeholders in the quest for immunity from existential risk, particularly those that arise from accidental or deliberate use of technology. Governments should look at existential risks as a set to be analysed, prioritised and mitigated.

In our quest to understand, prevent, prepare for, and respond to existential threats, every country should hold such meetings of diverse stakeholders to share knowledge and ideas for successfully navigating the period where our technological power outstrips our institutional wisdom.

A new report from the Secretary-General of the United Nations, ‘Our Common Agenda’, calls on nations to develop foresight and futures capability under an umbrella of coordinated global action.

A very good, informed summary and discussion of the UN report can be read here.

AI, Employment and Ethics


In this post I aim to describe some of the ethical issues around the use of algorithms to make or assist decisions in recruitment and for managing gig employment.

When discussing ethics we are trying to deduce the right thing to do (not the permissible thing, or the profitable thing, or the legal thing).

AI in recruitment

Consider Usha, a software engineer specialising in machine learning. Let’s imagine, for the purposes of example, that she is in fact the most qualified and experienced person in the applicant pool for an advertised position, and would perform the best in the role out of the entire pool. In her application:

  • She uses detectably ‘female’ language in her resume
  • She notes she didn’t start coding until the age of 18
  • She was the founding organiser of an LGBTQ group on campus

She also has a non-Western name and her dark skin tone made it difficult for an AI system to register her affect during a recorded video interview with a chat bot.

Faced with this data, an AI recruitment algorithm screened her out. She doesn’t get the job. She didn’t even get a face-to-face interview. Given the circumstances, many of us might think this was wrong.

Perhaps it is wrong because some principles such as fairness, or like treatment of like, or equality of opportunity have been transgressed. Overall, an injustice seems to have occurred.

Algorithmic Injustice

In his book Future Politics, Jamie Susskind lays out the various ways in which an algorithm could lead to unjust outcomes.

  • Data-based injustice: where problematic, biased or incomplete data leads the algorithm to decide unfairly
  • Rule-based injustice
    • Overt: the algorithm contains explicit rules discriminating against some people, e.g. discriminating against people on the basis of sexual orientation.
    • Implicit: the algorithm discriminates systematically against some kinds of people due to correlations in the data, e.g. discriminating against those who didn’t start learning to code until after the age of 18 might discriminate against women due to social and cultural norms.
  • The neutrality fallacy: equal treatment for all people can propagate entrenched bias and injustice in our institutions and society.

Susskind notes that most algorithmic injustice can be traced back to actions or omissions of people.
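
To make the ‘implicit rule-based injustice’ above concrete, here is a deliberately simplified, hypothetical screening rule in Python; every name, feature and threshold is invented for illustration. The rule never mentions gender, yet because the age at which someone started coding correlates with gender in many societies, applying it ‘equally’ produces unequal outcomes.

```python
# Hypothetical illustration of "implicit rule-based injustice": a screening rule
# that never mentions gender can still act on a proxy for it. All names, features
# and thresholds below are invented for the example.

applicants = [
    {"name": "Usha", "gender": "F", "age_started_coding": 18, "years_experience": 10},
    {"name": "Tom",  "gender": "M", "age_started_coding": 12, "years_experience": 6},
    {"name": "Mei",  "gender": "F", "age_started_coding": 19, "years_experience": 9},
    {"name": "Raj",  "gender": "M", "age_started_coding": 13, "years_experience": 5},
]

def screen(applicant):
    # Facially neutral rule, perhaps learned from historical hires:
    # "good" candidates started coding early. Because early exposure to coding
    # correlates with gender in many societies, the rule quietly filters out
    # the women regardless of experience or ability.
    return applicant["age_started_coding"] <= 15

shortlist = [a["name"] for a in applicants if screen(a)]
print(shortlist)  # ['Tom', 'Raj'] -- the two most experienced applicants are screened out
```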

Human Rights

Another way of formulating algorithmic ethics is in terms of human rights. In this case, rather than look to the outcome of a process to decide whether it was just or not, we can look to the process itself, and ask whether the applicant’s human rights have been respected.

In a paper titled ‘Artificial Intelligence & Human Rights: Opportunities & Risks’, the Berkman Klein Center for Internet & Society at Harvard concludes that the following rights could be transgressed by the use of algorithms for recruitment. The rights to:

  • Freedom from discrimination
  • Privacy
  • Freedom of opinion, expression and information
  • Peaceful assembly and association
  • Desirable work

But it might be the case that ongoing ethical discourse could lead us to new rights in the age of AI, perhaps:

  • The right to not be measured or manipulated?
  • The right to human interaction?

Ethical Systems

The foundations for reasoning about rights transgressions or whether outcomes are just or unjust are found in ethical systems. Such systems have been constructed and debated by philosophers for centuries.

The Institute of Electrical and Electronics Engineers (IEEE) recognises this, and in their (300-page!) report ‘Ethically Aligned Design’ they identify and describe a number of Western and non-Western ethical systems that might underpin considerations of algorithmic ethics. The IEEE notes that ethical systems lead us to general principles, and general principles define imperatives. The IEEE lists and explains eight general principles for ethically aligned design.

One example of an ethical system is what might broadly be considered the ‘consequentialist’ system, which determines right and wrong according to consequences. A popular version of this approach is utilitarianism, the ethical approach that seeks to maximise happiness. As an example, under utilitarianism, affirmative action can be good for society as a whole, enriching the experience of college students, enhancing representation in public institutions, and making everyone happier in the end. This approach tends to ensure that we act ‘for the greatest good’.

Another example of an ethical system is deontology or ‘rules based’ ethics. Kantianism is a version of deontology, which argues that ethical imperatives come from within us as human beings, and the right thing to do boils down to ensuring that we treat all people with dignity, ensuring that they are never a mere means to an end but an end in themselves. This approach tends to lead to the formulations of right and duties. For example, it would be wrong to force someone to work without pay (slavery) because this fails to respect their freedom, autonomy, humanity and ultimately dignity, irrespective of the outcomes.

In their report, where the IEEE deduces their general principles of ethically aligned design from a foundation of these ethical systems, the authors note that, “the uncritical use of AI in the workplace, and its impact on employee-employer relations, is of utmost concern due to the high chance of error and biased outcome.”

The IEEE approach is not the only published declaration of ethical principles relevant to algorithmic decision making. The Berkman Klein Center has catalogued and compared a number of these from public and private institutions.

The Gig Economy

Let’s turn now to gig work. Think of Rita, a gig worker for a hypothetical home-cleaning business that operates much like Uber. Rita’s work is monitored by GPS to ensure she takes the most direct route to each job; she’s not sure whether the company tracks her when she’s not working. Only time spent cleaning each house is paid, and the algorithm keeps very tight tabs on her activities. Rita gets warning notifications if she deviates from the prescribed route, such as when she needs to pick her son up from school and drop him to the babysitter. She gets ratings from clients, but one woman, an historical ethnic rival, always rates her low even when she does a good job, and the algorithm warns her that it’s her last chance to do better. Rita stresses about the algorithm, feels constantly anxious and enters a depression. She misses work, has no sick pay to draw upon, and spirals downward.

We may conceive of such algorithms as ‘mental whips’ and feel concerned that when acting punitively they may be taking data out of context. Furthermore, the ethically appropriate response from the algorithm to, say, an older worker who falls ill might well be different from that to a wayward youth who slacks off. Justice may not be served by equal treatment.

Phoebe Moore has noted that ‘[such] human resource tool[s] could expose workers to heightened structural, physical and psychosocial risks and stress’ – and this is worse if workers feel disempowered.

Surveillance and the Panopticon

Many of the issues around gig management algorithms boil down to issues of surveillance.

Historic surveillance had limitations (e.g. a private detective could only investigate one employee at a time). However, with technological advance we can consider surveillance taken to its purest extreme. This is the situation Jeremy Bentham imagined with his panopticon: a perfect surveillance arrangement in which one guard could observe all prisoners in a prison (or workers in a factory, for that matter) at all times, without being seen themselves. As soon as workers know this is the situation, their behaviour changes. When a machine is surveilling people, people serve the machine, rather than machines serving people.

The panopticon is problematic for a number of reasons. Firstly, there is an unfounded assumption of innate shirking. There may be no right to disconnect (especially if the employer performs 24/7 surveillance of social media).

As with Rita there are risks that surveillance data can be taken out of context. We also know that the greater the surveillance, the greater the human demands for sanctions on apparent transgressions.

Finally, the system lacks a countermeasure of ‘equiveillance’, which would allow the working individual to construct their own case from evidence they gather themselves, rather than merely having access to surveillance data that could possibly incriminate them.

Ethically we must ask, who is this situation benefitting? Employment should be a reciprocal arrangement of benefit. But with panopticon-like management of workers, it seems that some interests are held above those of others. Dignity may not be respected and workers can become unhappy. It could be argued that Rita is not being treated as an end in herself, but only as a mere means.

It’s true that Rita chose to work for the platform, and by choosing surveillance, has willingly forgone privacy. But perhaps she shouldn’t be allowed to. This is because privacy has group level benefits. A lack of privacy suppresses critical thought and critical thought is necessary to form alliances and hold those that exploit workers to account.

As a society we are presently making a big deal about consumer privacy, but what about employee privacy and protections? Ethics demands that we examine these discrepancies.

We might want to ensure that humans don’t become a resource for machines, where the power relationship is reversed and human behaviour (like Rita’s) is triggered by machine activity rather than the other way around. The risk is not that robots will take our jobs; it is that we will become the robots, living ultra-efficient but dehumanized lives.

Ghost Work author Mary Gray says, “[one] problem is that the [algorithmic gig] work conditions don’t recognize how important the person is to that process. It diminishes their work and really creates work conditions that are unsustainable.” This argument contains both consequentialist and deontological points against overzealous algorithmic management of people.

Is there a duty to use algorithms?

I’ve called into question some of the possible uses of algorithms in recruitment and in managing the gig economy. Potential injustice seems to lie in wait everywhere: in bad data, implicitly unjust rules, and even neutral rules. But when are algorithms justified? What if customer satisfaction really is ‘up 13%’? Is this an argument for preserving the greatest happiness at the expense of a few workers? Or perhaps techniques for ‘ethically aligned design’ could lead to systems that overcome the ‘discriminatory intent’ in people and also enhance justice (dignity) in the process.

“We can tweak data and algorithms until we can remove the bias. We can’t do that with a human being,” – Frida Polli, CEO Pymetrics.

However, the duty to respect human dignity may require some limitations on the functions and capability of AI in recruitment and the management of gig work. We need to examine what limitations.

Australia’s Chief Scientist, Dr Alan Finkel, has proposed the ‘Turing Certificate’, a recognised mark for consumer technologies that would indicate whether the technology adheres to certain ethical standards. This discussion should be ongoing.

Finally, the irony that we implement oversight and regulatory force to combat the use of surveillance and algorithmic force is not lost on me…

Keeping our eye on the laser phish: Information pollution, risk, and global priorities


  • This post is on what I consider to be the most pressing problem in the world today.
  • I lay out the theory underpinning information pollution, the significance and trajectory of the problem, and propose solutions (see bullets at the end). 
  • I encourage you to persist in reading this post, so that we can all continue this important conversation. 

Introduction

Technological innovation and growth of a certain kind are good, but I want to explain why risk mitigation should be more of a priority for the world in 2018. Without appropriate risk mitigation, we could upend the cultural, economic and moral progress we have enjoyed over the last half-century and miss out on future benefits.

One particular risk looms large: we must urgently address the threat of information pollution and an ‘infopocalypse’. Cleaning up the information environment will require substantial resources, just like mitigation of climate change. The threat of an information catastrophe is more pressing than climate change and has the potential to subvert our response to climate change (and other catastrophic threats).

In a previous post, I responded to the AI Forum NZ’s 2018 research report (see ‘The Good the Bad and the Ugly’). I mentioned Eliezer Yudkowsky’s notion of an AI fire alarm. Yudkowsky was writing about artificial general intelligence, however, it’s now apparent that even with our present rudimentary digital technologies the risks are upon us. ‘A reality-distorting information apocalypse is not only plausible, but close at hand’ (Warzel 2018). The fire alarm is already ringing…

Technology is generally good

Technology has been, on average, very good for humanity. There is almost no doubt that people alive today have lives better than they otherwise would have because of technology. With few exceptions, perhaps including mustard gas, spam and phishing scams, arguably nuclear weapons, and other similar examples, technology has improved our lives.

We live longer healthier lives, are able to communicate with distant friends more easily, and travel to places or consume art we otherwise could not have, all because of technology.

Technological advance has a very good track record, and ought to be encouraged. Economic growth has in part driven this technological progress, and economic growth facilitates improvements in wellbeing by proxy, through technology.

Again, there are important exceptions, for example where there is growth of harmful industries that cause damage through externalities such as pollution, or through products that make lives worse, such as tobacco or certain uses of thalidomide.

The twentieth century, however, with its rapid growth, technological advance, relative peace, and moral progress, was probably the greatest period of advance in human wellbeing the world has experienced.

Responsible, sustainable growth is good

The key is to develop technology, whilst avoiding technologies that make lives worse, and to grow while avoiding threats to sustainability and harm to individuals.

Ideally the system should be stable, because the impacts of technology and growth compound and accumulate. If instability causes interruption to the processes, then future value is forgone, and the area under the future wellbeing curve is less than it otherwise would have been.

Economist Tyler Cowen explains this at length in his book Stubborn Attachments. Just as opening a superannuation account too late in life can forgo a substantial proportion of potential future wealth, delayed technological development and growth can forgo substantial wellbeing improvements for future people.

Imagine if the Dark Ages had lasted an extra 50 years: we would presently be without the internet, mobile phones, coronary angiography and affordable air travel.

To reiterate, stability of the system underpins the magnitude of future benefit. There are however a number of threats to the stability of the system. These include existential threats (which would eliminate the system) and catastrophic risks (which would set the system back, and so irrevocably forgo future value creation).

Risk mitigation is essential

The existential threats include (but are not limited to): nuclear war (if more than a few hundred warheads are detonated), asteroid strikes, runaway climate change (the hothouse earth scenario), systematic extermination by autonomous weapons run amok, an engineered bioweapon multistrain pandemic, a geoengineering experiment gone wrong, and assorted other significant threats.

The merely catastrophic risks include: climate change short of hothouse earth, war or terrorism short of a few hundred nuclear warhead detonations, massive civil unrest, pandemic influenza, system collapse due to digital terror or malfunction, and so on.

There is general consensus that the threat of catastrophic risk is growing, largely because of technological advance (greenhouse gases, CRISPR, improvements in warhead power-to-size ratios, dependence on just-in-time logistics…). Even a 0.1% risk per year, across ten catastrophic threats, makes it more likely than not that at least one of them occurs this century. We need to make sure the system we are growing is not only robust against risk, but antifragile, strengthening in response to less severe perturbations.
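
As a rough check on that claim, here is the calculation under the simplifying assumptions that the ten threats are independent and the hazard rate stays constant:

```python
# Rough check of the claim above, assuming ten independent threats, each with a
# constant 0.1% chance per year, over a 100-year century.
p_per_year = 0.001
n_threats = 10
years = 100

p_none = (1 - p_per_year) ** (n_threats * years)  # probability nothing happens all century
print(f"P(at least one catastrophe this century) ≈ {1 - p_none:.0%}")  # ≈ 63%
```

Roughly a two-in-three chance of at least one catastrophe: not quite inevitable, but far too high to ignore.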

Currently it does not.

Although we want to direct many of our resources toward technological growth and development, we also need to invest a substantial portion in ensuring that we do not suffer major setbacks as a result of these foreseeable risks.

We need equal measures of excited innovation and cautious pragmatic patience. We need to get things right, because future value (and even the existence of a future) depends on it.

We must rationally focus and prioritize

There are a range of threats and risks to individuals and society. Large threats and risks can emerge at different times and grow at different rates. Our response to threats and risks needs to be prioritized by the imminence of the threat, and the magnitude of its effects.

It is a constant battle to ensure adequate housing, welfare, healthcare, and education. But these problems, though somewhat important (pressing and a little impactful), and deserving of a decent amount of focus, are relatively trivial compared with the large risks to society and wellbeing.

Climate change is moderately imminent (significant temperature rises over the next decades) and moderately impactful (it will cause severe disruption and loss of life, but it is unlikely to wipe us out). A major asteroid strike is not imminent (assuming we are tracking most of the massive near earth objects), but could be hugely impactful (causing human extinction).

The Infopocalypse

I argue here that the risks associated with emerging information technologies are seriously imminent, and moderately impactful. This means that we ought to deal with them as a higher priority and with at least as much effort as our (woefully inadequate) efforts to mitigate climate change.

To be clear, climate change absolutely must be dealt with in order to maximize future value, and the latest IPCC report is terrifying. If we do not address it with sufficiently radical measures then the ensuing drought, extreme weather, sea level rise, famine, migration, civil unrest, disease, and so on, will undermine the rate of technological development and growth, and we will forgo future value as a result. But the same argument applies to the issue of information pollution. First I will explain some background.


Human information sharing and cognitive bias

Humanity has shown great progress and innovation in storing and packaging information. Since the first cave artist scratched an image on the wall of a cave, we have continued to develop communication and information technologies with greater and greater power. Written symbols solved the problem of accounting in complex agricultural communities, the printing press enabled the dissemination of information; radio, television, the internet, and mobile phones have all provided useful and life enhancing tools.

Humans are a cultural species. This means that we share information and learn things from each other. We also evolve beneficial institutions. Our beliefs, habits and formal routines are selected and honed because they are successful. But the quirks of evolution mean that it is not only ideas and institutions that are good for humanity that arise. We have a tendency for SNAFUs.

We employ a range of different strategies for obtaining relevant and useful information. We can learn information ourselves through a trial and error process, or we can learn it from other people.

Generally, information passed from one generation to the next, parent to child (vertical transmission), is likely to be adaptive information that is useful for navigating the problems the world poses. This is because natural selection has instilled in parents a psychological interest in preparing their children to survive and these same parents, holders of the information, are indeed alive.

Information that we glean from other sources such as our contemporaries (horizontal transmission) does not necessarily track relevant or real problems in the environment, nor necessarily provide us with useful ways to solve these problems. Think of the used car salesperson explaining to you that you really do need that all-leather interior. Think of Trump telling welfare beneficiaries that they’ll be better off without Medicare.

Furthermore, we cannot attend to all the information all the time, and we cannot verify all the information all the time. So we use evolved short cuts, useful heuristics that have obtained for us, throughout history and over evolutionary time, the most useful information there is. Such simple psychological rules as ‘copy talented and prestigious people’, or ‘do what everyone else is doing’, have generally served us well.

Until now…

There are many other nuances to this system of information transmission, such as the role of ‘oblique transmission’ (e.g. from teachers), the role of group selection for fitness rather than individual selection, and the role of the many other cognitive biases besides the prestige-biased copying and frequency-dependent copying just mentioned. And there is also the appeal of the content of the information itself: does it seem plausible, does it fit with what is already believed, does it have a highly emotive aspect, or is it simple to remember?

The key point is that the large-scale dynamics of information transmission depend on these micro processes of content, source, and frequency assessment (among other processes) at the level of the individual.

All three of these key features can easily be manipulated, at scale, and with personalization, by existing and emerging information technologies.

Our usually well-functioning cognitive short-cuts can be hacked. The advertising and propaganda industries have realized this for a long time, but until now their methods were crude and blunt.

The necessity of true (environmentally tracking) information

An important feature of information transmission is that obtaining information imposes a cost on the individual. This cost can be significant due to the attention required, time and effort spent on trial and error, research, and so forth.

It is much cheaper to harvest information from others rather than obtain it yourself (think content producers vs content consumers). Individuals who merely harvest free information without aiding the production and verification of information are referred to in cultural and information evolution models as ‘freeriders’.

Freeriders do very well when environments are stable, and the information in the population tracks that environment, meaning that the information floating around is useful for succeeding in the environment.

However, when environments change, then strategies need to change. Existing information biases and learning strategies, favoured by evolution because, on average, they obtain good quality information, may no longer track the relevant features of the environment. These existing cognitive tools may no longer get us, on average, good information.

Continuing to use existing methods to obtain or verify information when the game has changed can lead individuals and groups to poor outcomes.

We are seeing this in the world today.

The environment for humanity has been changing rapidly, and we now inhabit a world of social media platform communication, connectivity, and techniques for content production that we are not used to as a species. Our cognitive biases, which guide us to trust particular kinds of information, are not always well suited to this new environment, and our education systems are not imbuing our children with the right tools for understanding this novel system.

As such, the information we share is no longer tracking the problem space that it is meant to help us solve.

This is particularly problematic where ‘consume only (and do not verify)’ freeriders are rife, because then those that create content have disproportionate power. Those who create content with malice (defectors) have the upper hand.

The greater the gap between the content in the messages and the survival and wellbeing needs of the content consumers, the greater the risk of large scale harm and suffering across time.

If we don’t have true information, we die.

Maybe not today, maybe not tomorrow, but probabilistically and eventually.

Because fundamentally that is what brains are for: tracking true features of the environment and responding to them in adaptive ways. The whole setup collapses if the input information is systematically false or deceptive and our evolved cognitive biases persist in believing it. The potential problem is immense. (For a fuller discussion see Chapter 2 of my Master’s thesis here.)

How information technology is a threat: laser phishing and reality apathy

Information has appeal due to its source, frequency or content. So how can current and emerging technological advances concoct a recipe for disaster?

We’ve already seen simple hacks and unsophisticated weaponizing of social media almost certainly influence global events for the worse, such as the US presidential election (Cambridge Analytica), the Brexit vote, the Rohingya genocide, suppression of women’s opinions in the Middle East, and many others. The Oxford Computational Propaganda Project catalogues these.

These simple hacks involve the use of human trolls, and some intelligent systems for information creation, testing and distribution. Common techniques involve bot armies to convey the illusion of frequency, thousands of versions of advertisements with reaction tracking to fine tune content, and the spread of fake news by ‘prestigious’ individuals and organizations. All these methods can be used to manipulate the way information flows through a population.
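
As a toy illustration of how the illusion of frequency interacts with frequency-dependent copying, consider the following simulation sketch. The agent counts, copy rates and bot numbers are all invented, and nothing here models a real platform; the point is only that inflating apparent prevalence changes what naive copiers adopt.

```python
# Toy model (purely illustrative) of frequency-dependent copying being gamed:
# agents adopt a belief in proportion to its *apparent* prevalence, and a bot
# army inflates that apparent prevalence. All parameters are invented.
import random

def run(n_bots, n_humans=1000, steps=20, initial_believers=50, copy_rate=0.1, seed=0):
    random.seed(seed)
    believers = initial_believers
    for _ in range(steps):
        # What a naive copier perceives: humans holding the belief plus bots pushing it.
        apparent = (believers + n_bots) / (n_humans + n_bots)
        converts = sum(random.random() < copy_rate * apparent
                       for _ in range(n_humans - believers))
        believers += converts
    return believers

print("Without bots:", run(n_bots=0))       # the belief remains a minority view
print("With a bot army:", run(n_bots=300))  # the same belief spreads to a majority
```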

But this is just the tip of the iceberg.

In the above cases a careful user can remain skeptical of much of the information presented, by comparing the messages received to reality at large (though this involves effort). However, we are rapidly entering an era where reality at large will be manipulated.

Technology presently exists that can produce authentic sounding human audio, manipulate video seamlessly, remove or add individuals to images and video, create authentic looking video of individuals apparently saying things they did not say, and a wide range of other malicious manipulations.

A mainstay of these techniques is the generative adversarial network (GAN), a machine learning architecture that pits two models against each other: a generator produces new content, and a discriminator tries to distinguish that content from real training examples. The generator is refined until its output is indistinguishable from the training data, and yet did not exist in the training dataset. Insofar as we believe video, audio and images document reality, GANs are starting to create reality.
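
For readers curious about the mechanics, here is a minimal sketch of that adversarial training loop in Python, assuming the PyTorch library and using a toy one-dimensional ‘reality’ (samples from a Gaussian) rather than images or video. Real deepfake systems play the same game at vastly greater scale.

```python
# Minimal sketch of a GAN training loop, assuming the PyTorch library.
# The "real data" here is just samples from a Gaussian; deepfake systems play
# the same generator-vs-discriminator game with images, audio or video.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "reality": samples we want to imitate
    fake = generator(torch.randn(64, 8))    # the generator's attempt at new, real-seeming content

    # The discriminator learns to tell real from fake...
    d_loss = (bce(discriminator(real), torch.ones(64, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # ...while the generator learns to fool the discriminator.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should drift toward the "real" mean of ~4.0.
print(generator(torch.randn(1000, 8)).mean().item())
```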

Targeting this new ‘reality’ in the right ways can exploit the psychological biases, which we depend on in order to attend to what ought to be the most relevant and important information amid a sea of content.

We are all used to being phished these days. This is where an attempt is made to deceive us into acting in the interests of a hostile entity through the use of (often) an email or social media message. Phishing is often an attempt to obtain personal information, but can manifest as efforts to convince us to purchase products that are not in our interests.

The ‘Nigerian scams’ were some of the first such phishes, but techniques have advanced well beyond ‘Dear esteemed Sir or Madam…’

Convincing us of an alternate reality is the ultimate phish.

Laser phishing is the situation where the target is phished but the phish appears to be an authentic communication from a trusted source. Perhaps your ‘best friend’ messages you on social media, or your ‘boss’ instructs you to do something.

The message reads exactly like the genuine article, with the tone, colloquialisms and typical misspellings you’re used to that individual making. This is because machine learning techniques have profiled the ‘source’ of the phish and present an authentic-seeming message. If this technique becomes advanced, endlessly scaled through automation, and frequently deployed, it will be necessary, but simply impossible, to verify the authenticity of every message and communication.

The mere existence of the technique (and others like it) will cast doubt on every piece of information you encounter every day.

But it gets worse.

Since GANs are starting to create authentic seeming video, we can imagine horror scenarios involving video of Putin or Trump or Kim Jong Un declaring war. I won’t dwell too much on these issues here, as I’ve previously posted on trust and authenticity, technology and society, freedom and AI, AI and human rights. Needless to say, things are getting much worse.

Part of the problem lies in the incentives that platform companies have for sustaining user engagement. We know that fake news spreads more widely than truth online. This tends to lead to promotion of sensationalist content and leaves the door wide open for malicious agents to leverage an attentive and psychologically profiled audience. The really big threat is when intelligent targeting (the automated laser phishing above) is combined with dangerous fake content.

These techniques (and many others) have not been widely deployed yet, and by ‘widely’ I mean that most of the digital content we encounter is not yet manipulated. But the productive potential of digital methods and the depth of insight about targets gleaned from shared user data is not bound by human limits.

We depend on the internet and digital content for almost everything we do or believe. Very soon more content will be fake than real. That is not an exaggeration. I’ll say it again, very soon more content will be fake than real. What will we make of that? Without true information we cannot track the world, we cannot progress.

A real possibility is that we come to question everything, even the true and useful things, and become apathetic toward reality by default.

That is the infopocalypse.

Tracking problems in our environment

It is critical for our long-term survival, success and wellbeing, that the information we obtain tracks the true challenges in our environment. If there is a mismatch between what we think is true and what really is true, then we will suffer as individuals and a species (think climate denial vs actual rising temperatures).

If we believe that certain kinds of beliefs and institutions are in our best long-term interests when they are not then we are screwed.

Bad information could lead to maladaptation and potentially to extinction. This is especially true if the processes leading us to believe the maladaptive information are impervious to change. There are a number of reasons why this might be so: the processes might leverage our cognitive biases, they may be sustained by powerful automated entities, or they may quell our desire for change through apparent reward.

There are many imaginable scenarios in which almost all the information we consume is no good for us (it is ‘non-fitness tracking’), yet we lap it up anyway.

We’re seeing this endemically in the US at the moment. The very individuals who stand to lose the most from Trump’s health reform are the most vocal supporters of his policies. The very individuals who stand to gain most from a progressive agenda are sending pipe bombs in the mail.

Civil disorder and outright conflict are only (excuse the pun) a stone’s throw away.

This is the result of all the dynamics I’ve outlined above. Hard-won rights, social progress and stability are being eroded, and that will mean we forgo future value, because if the world taught us anything in the 20th century it’s that…

… peace is profitable.

If we can’t shake ourselves out of the trajectory we are on, then the trajectory is ‘evolutionarily stable’ to use Dawkins’ term from 1976. And to quote, ‘an evolutionarily stable strategy that leads to extinction… leads to extinction’.

This is not hyperbole, because as noted above, hothouse earth is an extinction possibility and nuclear war is an extinction possibility. If the rhetoric and emerging information manipulation techniques take us down one of these paths then that is our fate.

To reiterate, the threat of an infopocalypse is more pressing and more imminent than the threat of climate change, and we must address it immediately, with substantial resource investment; otherwise malicious, content-creating defectors will win.

As we advance technologically, we need to show restraint and patience and mitigate risks. This means deliberate research, policy and action, taken thoughtfully, rather than a rush to the precipice.

The battle against system perturbation and risk is an ongoing one, and many of the existing risks have not yet been satisfactorily mitigated. Nuclear war is a stand-out contender for the greatest as-yet-unmitigated threat (see my previous post on how we can keep nuclear weapons but eliminate the existential threat).

Ultimately, a measured approach will result in the greatest area under the future value curve.

So what should we do?

I feel like all the existing theory that I have outlined and linked to above is even more relevant today than when it was first published. I also feel that there are not enough people with broad generalist knowledge in these domains to see the big picture here. The threats are imminent, they are significant, and yet, with few exceptions, they remain unseen.

We have the evolutionary theory, information dynamics, cognitive bias research and machine learning theory to understand and address these issues right now. But that fight needs resourcing, and it needs to be communicated so that a wider population understands the risks.

Solutions to this crisis, just like solutions to climate change, will be multifaceted.

  • In the first instance we need more awareness that there is a problem. This will involve informing the public, technical debate, writing up horizon scans, and teaching in schools.
  • Children need to grow up with information literacy. I don’t mean just how to interpret a media text, or how to create digital content. I mean they need to learn how to distinguish real from fake, and how information spreads due to the system of psychological heuristics, network structure, frequency and source biases, and the content appeal of certain kinds of information. These are critical skills in a complex information environment and we have not yet evolved defenses against the current threats.
  • We need to harness metadata and network patterns to automatically watermark content and develop a ‘healthy content’ labelling system akin to healthy food labels, to inform consumers of how and why pieces of information have spread. We need to teach this labelling system widely. We need to fix fake news. (I’ve recently submitted a proposal to a philanthropically funded competition for research funding to contribute to exactly that project. And I have more ideas if others out there can help fund the research)
  • We need mechanisms, such as blockchain identity verification, to subvert laser phishing (see the signing sketch after this list).
  • We need to outlaw the impersonation of humans in text, image, audio or video.
  • We need to be vigilant to the technology of deception.
  • We need to consider the sources of our information and fight back with facts.
  • We need to reject the information polluting politics of populism.
  • We need to invest in cryptographic verification of images and audio.
  • We need to respect human rights whenever we deploy digital content.
  • We also need a local NZ summit on information pollution and reality apathy.
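To make the verification items above concrete (identity verification and cryptographic verification of media), here is a minimal sketch using Ed25519 signatures from the Python cryptography library. The messages are invented, and key distribution, identity binding and revocation are left out entirely; that is where blockchain or other public key infrastructure proposals would come in.

```python
# Minimal sketch of cryptographic message/media verification using Ed25519.
# Assumes the 'cryptography' package (pip install cryptography).
# Key distribution, identity binding and revocation are out of scope here.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The genuine sender generates a key pair once; the public key is shared.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Please approve invoice #1234 -- boss"
signature = private_key.sign(message)  # shipped alongside the message

# The recipient verifies the signature against the claimed sender's public key.
try:
    public_key.verify(signature, message)
    print("Signature valid: message really came from the key holder.")
except InvalidSignature:
    print("Signature invalid: treat as a possible laser phish.")

# A tampered message fails verification.
try:
    public_key.verify(signature, b"Please approve invoice #9999 -- boss")
except InvalidSignature:
    print("Tampered message detected.")
```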

Summary

More needs to be done to ensure that activity at a local and global level is targeted rationally towards the most important issues and the most destabilizing risks. This means a rational calculus of the likely impact of various threats to society and the resources required for mitigation.
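One way to make that calculus concrete is a simple expected-value ranking: estimate each threat’s probability, the loss if it occurs, and the cost of mitigation, then prioritise by expected loss averted per unit of cost. The sketch below uses invented placeholder numbers purely to show the arithmetic; it is not a set of actual estimates.

```python
# Toy expected-value ranking of threats. All numbers are invented
# placeholders to illustrate the calculation, not real estimates.

threats = [
    # name, probability over the horizon, loss (arbitrary units), mitigation cost
    {"name": "engineered pandemic", "p": 0.03, "loss": 1000, "cost": 5},
    {"name": "nuclear war",         "p": 0.05, "loss": 1000, "cost": 10},
    {"name": "infopocalypse",       "p": 0.20, "loss": 300,  "cost": 2},
]

for t in threats:
    t["expected_loss"] = t["p"] * t["loss"]
    # Crude value-for-money: expected loss averted per unit of mitigation cost,
    # assuming (unrealistically) that mitigation removes the risk entirely.
    t["priority"] = t["expected_loss"] / t["cost"]

for t in sorted(threats, key=lambda t: t["priority"], reverse=True):
    print(f'{t["name"]:22s} expected loss {t["expected_loss"]:6.1f} '
          f'priority {t["priority"]:5.1f}')
```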

Looking at the kinds of issues that today ‘make the front page’ shows that this is clearly not happening at present (‘the Koru Club was full’ – I mean seriously!). And ironically the reasons for this are the very dynamics and processes of information appeal, dissemination and uptake that I’ve outlined above.

A significant amount is known about cultural informational ‘microevolutionary’ processes (both psychological and network-mediated) and it’s time we put this theory to work to solve our looming infopocalypse.

I am more than happy to speak pro bono or as a guest lecturer on these issues of catastrophic risk, the threat of digital content, or information evolution and cognitive biases.

If any organizations, think tanks, policy centers, or businesses wish to know more then please get in touch.

I am working on academic papers about digital content threat, catastrophic risk mitigation, and so on. However, the information threat is emerging faster than the academic publication cycle can keep pace with.

Please fill out my contact form to get in touch.

 

Selected Further Reading:

Author’s related shorter blogs:

The problem of Trust and Authenticity

Technology and society

AI and human rights

AI Freedom and Democracy

Accessible journalism:

Helbing (2017) Will democracy survive big data and artificial intelligence

Warzel (2018) Fake news and an information apocalypse

Academic research:

Mesoudi (2017) Prospects for a science of cultural evolution

Creanza (2017) How culture evolves and why it matters

Acerbi (2016) A cultural evolution approach to digital media

Author’s article on AI Policy (2017) Rapid developments in AI

Author’s article on memetics (2008) The case for memes

Author’s MA thesis (2008) on Human Culture and Cognition

AI content targeting may violate human rights


Does AI-driven micro-targeting of digital content violate human rights? The UN says ‘yes!’

Last month the United Nations published a document on AI and human rights with a particular focus on automated content distribution. The report focuses on the rights to freedom of opinion and expression, which are often excluded from public and political debates on artificial intelligence.

The overall argument is that an ethical approach to AI development, particularly in the area of content distribution, is not a replacement for respecting human rights.

Automation can be a positive thing, especially in cases where it can remove human operator bias. However, automation can be negative if it impedes the transparency and scrutability of a process.

AI dissemination of digital content

The report outlines the ways in which content platforms moderate and target content and how opaque AI systems could interfere with individual autonomy and agency.

Artificial intelligence is proving problematic in the way it is deployed to assess content and prioritize which content is shown to which users.

“Artificial intelligence evaluation of data may identify correlations but not necessarily causation, which may lead to biased and faulty outcomes that are difficult to scrutinize.”

Without ongoing supervision, AI systems may “identify patterns and develop conclusions unforeseen by the humans who programmed or tasked them.”

Browsing histories, user demographics, semantic and sentiment analyses and numerous other factors, are used to determine which content is presented to whom. Paid content often supplants unpaid content. The rationale behind these decisions is often opaque to users and often to platforms too.

Additionally, AI applications supporting digital searches massively influence the dissemination of knowledge, and this personalization can minimize exposure to diverse views. Biases are reinforced, and inflammatory content or disinformation is promoted, because the system measures success by online engagement. The area of AI and human values alignment is begging for critical research and is discussed in depth by AI safety researcher Paul Christiano elsewhere.
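A stripped-down sketch of why this happens: if the ranking objective is predicted engagement alone, provocative content outranks sober corrections regardless of accuracy. The items and scores below are invented purely to illustrate the incentive, not any platform’s actual algorithm.

```python
# Toy feed ranker that optimises only for predicted engagement.
# Items and scores are invented; real ranking systems are vastly more
# complex, but the incentive structure is the point.

items = [
    {"headline": "Dry official correction of a viral rumour", "predicted_clicks": 0.02, "accurate": True},
    {"headline": "OUTRAGE: they are coming for your jobs!",    "predicted_clicks": 0.35, "accurate": False},
    {"headline": "Long-read on pandemic preparedness",         "predicted_clicks": 0.05, "accurate": True},
]

def engagement_score(item):
    # Accuracy does not appear in the objective at all.
    return item["predicted_clicks"]

feed = sorted(items, key=engagement_score, reverse=True)
for rank, item in enumerate(feed, start=1):
    print(rank, item["headline"], "(accurate)" if item["accurate"] else "(misleading)")
```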

From the point of view of human autonomy these systems can interfere with individual agency to seek and share ideas and opinions across ideological, political or societal divisions, undermining individual choice to find certain kinds of information.

This is especially so because algorithms typically will deprioritize content with lower levels of engagement (e.g. minority content). Also, the systems are often hijacked via bots, metadata hacks, and possibly by adversarial content.

Not only is much content obscured from many users, but otherwise well-functioning AI systems can be tripped up by small manipulations to the input. Without a ‘second look’ at the context (as our hierarchically structured human brain does when something seems amiss) AI can be fooled by ‘adversarial content’.

For example, in the images below the AI identifies the left picture as ‘fox’ and the slightly altered right picture as ‘puffer fish’. An equally striking example is the elephant vs sofa error, which is clearly due to a shift in context.

[Image: adversarial example in which the original picture is classified as ‘fox’ and a slightly perturbed version as ‘puffer fish’]
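For readers who want to see the mechanism, below is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch, applied to a tiny untrained toy classifier. It is not the fox versus puffer fish attack itself, just an illustration of how a small, targeted perturbation of the input can change a model’s prediction.

```python
# Minimal FGSM (fast gradient sign method) sketch in PyTorch.
# Uses a tiny untrained classifier purely to show the mechanics; with a toy
# random model the label may or may not flip, but on trained image models
# this kind of perturbation produces errors like 'fox' -> 'puffer fish'.

import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 8, requires_grad=True)   # stand-in for an image
true_label = torch.tensor([0])

# Forward and backward pass to get the gradient of the loss w.r.t. the input.
loss = loss_fn(model(x), true_label)
loss.backward()

# FGSM: nudge each input dimension by epsilon in the direction that
# increases the loss the most.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```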

Cultural context is particularly good at tripping up AI systems, and the resulting misclassifications can see legitimate content removed as biased or discriminatory. For example, the DeepText AI identified “Mexican” as a slur due to the context of its use in textual analysis. Such content removal is another way that AI can interfere with user autonomy.

There is an argument that individuals should be exposed to parity and diversity in political messaging, but micro targeting of content is creating a “curated worldview inhospitable to pluralistic political discourse.”

Overall, AI targeting of content incentivizes broad collection of personal data and increases the risk of manipulation through disinformation. Targeting can exclude whole classes of users from information or opportunities.

So what should we be doing about all this? The UN report offers a vision for a human rights based approach to AI and content distribution.

A Human Rights Legal Framework for AI

The UN report outlines the scope of human rights obligations in the context of artificial intelligence and concludes that:

“AI must be deployed so as to be consistent with the obligations of States and the responsibilities of private actors under international human rights law. Human rights law imposes on States both negative obligations to refrain from implementing measures that interfere with the exercise of freedom of opinion and expression and positive obligations to promote rights to freedom of opinion and expression and to protect their exercise.”

What does this mean?

All people have the right to freedom of opinion without interference.

(this is guaranteed by article 19 (1) of the International Covenant on Civil and Political Rights and article 19 of the Universal Declaration of Human Rights)

Basically, undue coercion cannot be employed to manipulate an individual’s beliefs, ideologies, reactions and positions. We need to have a public discussion about the limits of coercion or inducement, and what might be considered interference with the right to form an opinion.

The reason that this is a novel issue is because AI curation of online content is now micro targeting information “at a scale beyond the reach of traditional media.” Our present norms (based on historical technologies) may not be up to the task of adjudicating on novel techniques.

The UN report argues that companies should:

“at the very least, provide meaningful information about how they develop and implement criteria for curating and personalizing content on their platforms, including policies and processes for detecting social, cultural or political biases in the design and development of relevant artificial intelligence systems.”

The right to freedom of expression may also be impinged upon by AI curation. We’ve seen how automated content takedowns can run afoul of contextual idiosyncrasies. This can result in the systematic silencing of individuals or groups.

The UN Human Rights Committee has also found that States should “take appropriate action … to prevent undue media dominance or concentration by privately controlled media groups in monopolistic situations that may be harmful to a diversity of sources and views.”

Given these problems, more needs to be done to help users understand what they are presented with. There are some token gestures toward selectively identifying sponsored content, but users need to be presented with relevant metadata, sources, and the alternatives to the content they are algorithmically fed. Transparency about confidence measures, known failure scenarios and appropriate limitations on use would also go a long way.

We all have a right to privacy, yet AI systems are presently used to infer private facts about us, which we may otherwise decline to disclose. Information such as sexual orientation, family relationships, religious views, health conditions or political affiliation can be inferred from network activity and even if not explicitly stated these inferences can be represented implicitly in neural nets and drive content algorithms.

These features of AI systems could violate the obligation of non-discrimination.

Finally, human rights law guarantees individuals whose rights are infringed, a remedy determined by competent judicial, administrative or legislative authorities. Remedies “must be known by and accessible to anyone who has had their rights violated,” but the logic behind an algorithmic decision may not be evident even to an expert trained in the underlying mechanics of the system.

Solutions & Standards

We need a set of substantive standards for AI systems. This must apply to companies and to States.

Companies need professional standards for AI engineers, which translate human rights responsibilities into guidance for technical design. Codes of ethics (such as those now adopted by most of the major AI companies) may be important but are not a substitute for recognition of human rights.

Human rights law is the correct framework within which we must judge the performance of AI content delivery systems.

Companies and governments need to embrace transparency; simple explanations of how these systems function will go a long way toward informing public discourse, education and debate on this issue.

The UN report also recommends processes for artificial intelligence systems that include: human rights impact assessments, audits, a respect for personal autonomy, notice and consent processes, and remedy for adverse impacts.

The report concludes with recommendations for States and for Companies, which includes the recommendation that, “companies should make all artificial intelligence code fully auditable.”

All this sounds very sensible and is a conservative approach to what could rapidly become an out-of-control problem of information pollution.

If anyone is interested in my further thoughts on “AI, Freedom and Democracy”, you can listen to my talk at the NZ Philosophy Conference 2017 here.
