COVID-19 and Google mobility data in New Zealand and other countries

Following a recent post on the University of Otago’s Public Health Expert blog, Matt from Adapt Research discusses COVID-19 with Radio New Zealand’s ‘The Panel’.

Read the Blog here: https://blogs.otago.ac.nz/pubhealthexpert/2020/04/12/changes-in-mobility-in-response-to-the-covid-19-pandemic-nz-vs-other-countries-and-the-stories-it-suggests/

Listen to the discussion here (6min): https://www.rnz.co.nz/national/programmes/thepanel/audio/2018742758/google-data-and-what-it-says-about-nz-s-lockdown

Alert Level 4 will minimise long-term pain

The only way to contain coronavirus cases in NZ is to go to Alert Level 4 for a brief period.

New Zealand currently has an unknown number of coronavirus cases wandering around our community. Several observations support this: first, the experience of many other countries; second, the sudden growth in confirmed cases; third, the fact that we have a cohort of international arrivals (and returning New Zealanders) who came to NZ between March 5 and 15. This means they arrived before the mandatory 14-day quarantine, but still within the virus incubation period. Virtually all our new cases are from this cohort.

We now have a 4-level alert system. This is an excellent tool for getting everyone on the same page and for communicating clearly both risk and required actions.

We should be at Level 4 right now.

This would allow us to reduce to Level 3 and then Level 2 as quickly as possible and avoid protracted disruption. I will explain.

The target we are chasing is invisible, and our confirmed data always lags the brute facts on the ground. In these circumstances the ONLY strategy that can succeed is to draw the circle of containment as wide as possible and then move inwards. This means setting Alert Level 4 right now: stay at home.

Over the next 12 or so days, ALL the coronavirus cases would then reveal themselves as symptoms develop (or, in mild cases, the infection passes almost unnoticed) without infecting anyone else. Once we have identified ALL the cases, they go into isolation and we have stopped the spread.

After two weeks, towns, cities or regions with no cases revealed can be set back to Level 3 or even Level 2, and business can carry on. Towns or regions with cases might hold Level 4 for another week. But everyone can very quickly get back to Level 2.

We can then release the strict controls and deal with cases one at a time as they crop up. By that point (theoretically) all of them will be returning New Zealanders who develop their symptoms while in 14-day self-quarantine, and this will not be problematic.

We then implement widespread temperature checks, hand washing, stigmatization of coughing, and so on. But business can carry on.

What this implies

There is a question whether Winston Peters’ encouragement of New Zealanders overseas to return is helping or compounding the problem. Places like Wuhan now look relatively safe. As long as people don’t transit through airports where they could catch the virus, there are many places that might be safer than attempting the journey home, which risks bringing the disease back into a quarantine zone.

In the future, when we get warning of a novel outbreak, we ought to immediately and dramatically close down the world for 2-3 weeks, for exactly the reasons I’ve outlined above. This should have happened around Jan 20th. Previous occasions where we might have taken this approach include SARS, Ebola, MERS, and now this coronavirus. That is about 4 times per 20 years, or about 20 times per century. It is far preferable to suffer 2 weeks of GDP losses twenty times than 18 months of losses once. This is a no-brainer, but the mechanisms need to be coordinated ahead of time. A 20:1 false positive rate needs to be seen as acceptable.
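A rough back-of-the-envelope check, assuming (my assumption, not a modelled figure) that each precautionary shutdown costs roughly its duration in lost GDP:

\[
C_{\text{precaution}} \approx 20 \times \tfrac{2}{52} \times \text{GDP}_{\text{yr}} \approx 0.77\,\text{GDP}_{\text{yr}} \text{ per century}, \qquad C_{\text{pandemic}} \approx \tfrac{18}{12} \times \text{GDP}_{\text{yr}} = 1.5\,\text{GDP}_{\text{yr}} \text{ per event}.
\]

Even if nineteen of every twenty shutdowns prove to be false alarms, a century of precaution costs about half as much as a single 18-month disruption.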

Hopefully, New Zealand decision makers see the logic in this and raise us to Level 4 today, for a period of 2 weeks to minimise our losses.

In the face of catastrophe: The rationale for border closure and other significant steps

New Zealand has just shut its borders to foreign visitors for the first time in history in an attempt to mitigate the impact of COVID-19. In what follows, I outline the case for border closure, the conditions under which it is rational, and then note that this is merely one measure against one catastrophe. Moving forward, we should get used to analysing and preparing responses to such large-scale risks.

Closing the border in a pandemic

Pandemics impose costs on society. These include upfront costs, such as hospital and ICU care and productivity lost to illness and death, and downstream costs, including the human and emotional toll and long-term productivity loss.

Closing the border also imposes costs: disruption, forgone visitor revenue, and more than likely reduced trade and a business downturn. Importantly, border closure can also fail, if a significant amount of disease gets into the country anyway (e.g. with returning Kiwis); if there is a substantial outbreak despite closure, then both the costs of closure and the costs of the pandemic are incurred.

Keep the border open in a pandemic or keep it closed: it’s a lose-lose situation. However, previous research has demonstrated that provided a pandemic is serious enough, meaning a sufficiently large proportion of the population is likely to get sick (say, 40%) and the disease has a sufficiently high case fatality rate (say, very much worse than the 1918 Spanish flu), then the costs of border closure pale in comparison to the costs of the pandemic. Border closure is then a no-brainer.

Border ‘filtering’ and border ‘closure’

But there is border closure and then there is border closure. The current New Zealand policy, let’s call it ‘filtering at the border’, allows the free passage of returning New Zealanders, provided they agree to pass through the ‘airlock’ of 14-day quarantine, and also allows New Zealanders to leave and then return again (although this is advised against). Both processes could bring disease into the country. A policy of ‘complete’, as opposed to ‘filtered’, border closure would prohibit the movement of Kiwis too.

Would ‘complete’ border closure ever be justified? Possibly, yes. Imagine if the case fatality rate of COVID-19 were 100% instead of 1%. There could be no justification for risking anyone entering the country at all until the pandemic was over. If such a case ever arose, then immediately sealing the border and becoming a ‘refuge’ would be a completely rational approach.

But back to ‘filtered’ closure. The hope is that by filtering out foreign arrivals, then the impact of COVID-19 will be much less. This could still prove fruitless, either if there is established community transmission from the cases already here, or if returning kiwis bring the virus home and spread it. In our previous research we modelled a scenario where border closure is attempted but fails, resulting in 90% of the unmitigated case-load. For a pandemic of more than 10 times the severity of Spanish flu, this was still a cost-effective measure (minimizing losses) under some assumptions. However, for swine flu (2009) it was never cost-effective to close the border.

How bad is COVID-19?

COVID-19 is an intermediate case. Under the ‘base case’ model of 40% infected, half of them asymptomatic, and 1% case fatality among the rest, it could cause 1 million symptomatic cases and 10,000 or more deaths in New Zealand. This is a huge human, social and economic burden, with ramifications for our economy for many years. But these costs (the economic ones) only barely balance out the lost visitor revenue from, say, 6 months of border closure. So, at first glance the case for closure is equivocal.
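A minimal sketch of the ‘base case’ arithmetic above, assuming a population of five million (the parameter values are those quoted in this post; the code itself is purely illustrative):

```python
def pandemic_burden(population=5_000_000, attack_rate=0.40,
                    symptomatic_fraction=0.5, cfr=0.01):
    """Rough burden estimate using the post's 'base case' parameters."""
    infected = population * attack_rate              # 2,000,000 infected
    symptomatic = infected * symptomatic_fraction    # 1,000,000 symptomatic
    deaths = symptomatic * cfr                       # 10,000 deaths
    return symptomatic, deaths

symptomatic, deaths = pandemic_burden()
print(f"{symptomatic:,.0f} symptomatic cases, {deaths:,.0f} deaths")
# -> 1,000,000 symptomatic cases, 10,000 deaths
```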

If trade continues, though curtailed by a policy of border closure, then the numbers become important. If closing the border (successfully) means the pandemic costs you $11 billion less than it would have otherwise (which is the result of modelling 72% infected and 2.3% dying, hospital costs, productivity loss, death, etc), then it is worth closing the border even if there is going to be an $11 billion drop in trade over the period of border closure (about 20% of trade for a six month period).

What is hard to predict, but is becoming clearer, is the global impact of the pandemic on travel and trade. If tourism drops to zero despite our actions, and trade plummets too, then the economic benefits of closure compared to no closure suddenly appear much greater. You lose nothing by closing the border if there is no tourism and no trade anyway.

These are the numbers that need to be evaluated for a range of scenarios. And on the basis of these numbers COVID-19 looks to be right in the middle, between swine flu and disaster.

This means the decision may come down to factors other than strict economic ones and three factors play a key role: equity, access to care, and the value of a ‘quality adjusted life year’ (basically one year of good health).

Equity, access and value of life

COVID-19 differentially impacts the elderly. They suffer more severe illness on average and bear the brunt of deaths. This may also be true of other minority groups in New Zealand; we don’t know this yet. Our health system places a lot of weight on equitable treatment, and rationing ventilators is not equity. The best strategy for making sure everyone has a fair chance might be simply to close the border and keep the virus out.

COVID-19 places an immense burden on critical care facilities, and our modelling suggests that with an infinite supply of ICU beds, COVID patients could occupy enough of them to cost the health system $1.9 billion or more. However, we only have enough beds that $270 million worth could ever be continuously in use over a six-month period. So the analysis suggests costs well in excess of those that would be realised (i.e. the outbreak seems to get cheaper). But this results in unmet health need, and likely in preventable deaths, which carry long-term costs of their own. Again, these are the factors that need to be weighed on the scales.

The modelling research we did used Treasury figures for the value of a year of good health, which are derived from Pharmac funding decisions. Basically, how much does Pharmac spend on medicines that lead to a gain in one year of good healthy life?

It may be that we New Zealanders decide to put much more value on healthy life than this (double? triple? It can’t be infinite, or there would never be such a thing as a health budget; it would be a blank cheque). But this new valuation, if fed into the models, can change the outcome of these economic analyses.

So, closing the border to protect vulnerable populations, to ensure fair access to care, or to save lives can be a very reasonable decision, even when it is not strictly an economically rational one.

Border closure can still fail

Bear in mind that border closure could still fail and the probability of failure should be built into economic modelling. We could still end up with just as many cases as in the no closure scenario. Many other variables in this equation (such as the exact epidemiological parameters of COVID-19 in certain contexts) also remain unknown.
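One simple way to build the failure probability into the comparison (a sketch, not our published model): let \(C_b\) be the direct cost of border closure, \(C_p\) the cost of the pandemic, and \(p\) the probability that closure fails and (for simplicity) the full pandemic cost is incurred anyway. Then

\[
E[\text{cost} \mid \text{close}] = C_b + p\,C_p, \qquad E[\text{cost} \mid \text{open}] = C_p,
\]

so closure is worthwhile only when \(C_b < (1-p)\,C_p\): the higher the chance of failure, the larger the pandemic cost must be before closing pays.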

Given the high degree of uncertainty, it may be rational to close the border to assess the situation. This is because, although borders can be re-opened at a later date (even re-opened almost immediately), we may lose the window of opportunity to close them if we don’t act now. This decision and the precise timing are very difficult, and although detrimental economically, it may be wise to move fast, as the government has in fact done.

But now we risk a very long waiting game. The virus is likely to be controlled in some regions of the world and not in others, resulting in a patchwork of ‘hot’ and ‘cold’ zones and severe travel disruption until a vaccine is available. There is almost a paradox in being too successful. If we keep it out then we must sustain our vigilance for the long haul or risk an outbreak every bit as bad as it can be, just down the track.

Global Catastrophic Risks

COVID-19 is a very harmful event, but it is not a global catastrophic threat. There are worse scenarios waiting in the shadows, and we must be prepared for them. Our experience with COVID-19 and the substantial fall-out will hopefully now prime decision makers to be receptive to the idea of building resilience against catastrophic risk. There are far more devastating biological threats that could arise, whether naturally or through biological manipulation; there are threats from other technologies, such as artificial intelligence (AI), geo-engineering and nuclear technologies; and there are threats from climate change, supervolcanoes, near-Earth objects, and others. COVID-19 has shown us how fragile our just-in-time systems are, and how the pursuit of a point of extra return can leave us undercapitalised, without cash flow, without inventory, at the mercy of global forces.

The government ought to take a risk portfolio approach to global catastrophic threats, and invest in assessing the probability of various risks, their magnitude and how to build resilience against them.

There is no reason why we can’t have an EQC-style fund to protect against risks that hit only once per generation, but hit hard. There is no reason why we couldn’t have walked through scenarios like COVID-19 with all sectors collaborating to identify the bottlenecks such as ventilator availability, testing locations, health workforce, data systems to track those in quarantine and so on.

The current event involves a biological virus. What if the next event involves a digital threat: would we close our internet borders? What is the case for doing so, or for not doing so? This all needs to be scenario-ized.

In fact, in a paper published last week, we made this call with respect to AI, and urged the government to address global catastrophic risk in a systematic and pragmatic fashion.

The first significant step is to institutionalise the learnings from COVID-19, the phrase ‘global catastrophic risk’, and a commitment to undertake research, analysis, planning and systems testing much more often and more robustly.

Are Humans ‘Human Compatible’?


I’ve just spent the last three days reading Stuart Russell’s new book on AI safety, ‘Human Compatible’. To be fair, I didn’t read continuously for three days; the book rewards thoughtful pauses to walk or drink coffee, because it nurtures reflection about what really matters.

You see, Russell has written a book about AI for social scientists that is also a book about social science for AI engineers, while at the same time providing the conceptual framework to bring us all ‘provably beneficial AI’.

‘Human Compatible’ is necessarily a whistle-stop tour of very diverse but interdependent thinking across computer science, philosophy and the social sciences and I am recommending that all AI practitioners, technology policymakers, and social scientists read it.

The problem

The key elements of the book are as follows:

  • No matter how defensive some AI practitioners get, we all need to agree there are risks inherent in the development of systems that will outperform us
  • Chief among these risks is the concern that AI systems will achieve exactly the goals that we set them, even if in some cases we’d prefer that they hadn’t
  • Human preferences are complex, contextual, and change over time
  • Given the foregoing, we must avoid putting goals ‘in the machine’, but rather build systems that consult us appropriately about our preferences.

Russell argues the case for all these points. The argument is informed by an impressive and important array of findings from philosophy, psychology, behavioural economics, and game theory, among other disciplines.

A key problem, as Russell sees it, is that most present-day technology optimizes a ‘fixed externally supplied objective’. This raises issues of safety if the objective is not fully specified (which it never can be) and if the system is not easily reset (which is plausible for a range of AI systems).

The solution

Russell’s solution is that ‘provably beneficial AI’ will be engineered according to three guidelines:

  1. The machine’s only objective is to maximize the realization of human preferences
  2. The machine is initially uncertain about what those preferences are
  3. The ultimate source of information about human preferences is human behaviour

There are some mechanics that can be deployed to achieve such a design. These include game theory, utilitarian ethics, and an understanding of human psychology. Machines must defer to humans regularly and ask permission, and their programming will explicitly allow for the machine to be wrong, and therefore to be open to being switched off.
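A minimal numerical sketch of this idea, based on the ‘off-switch game’ Russell describes: a machine that is uncertain about the utility of its proposed action does better, in expectation, by deferring to a human who can veto it than by acting unilaterally. The toy model below assumes the human correctly vetoes exactly those actions with negative utility; the numbers are illustrative.

```python
import random

random.seed(0)

# The machine's belief about the utility U of its proposed action:
# uncertain, centred slightly above zero.
samples = [random.gauss(0.3, 1.0) for _ in range(100_000)]

# Acting unilaterally yields U, whatever U turns out to be.
ev_act = sum(samples) / len(samples)

# Deferring: the human permits the action when U > 0 and otherwise
# switches the machine off (utility 0), so the value is max(U, 0).
ev_defer = sum(max(u, 0.0) for u in samples) / len(samples)

print(f"E[act unilaterally] = {ev_act:.2f}")   # ~0.30
print(f"E[defer to human]   = {ev_defer:.2f}") # ~0.57, never lower

# Deferring dominates whenever the machine is genuinely uncertain, which
# is why principle 2 (initial uncertainty) keeps the off switch usable.
```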

Agree with Russell or disagree, he has provided a framework to which disparate parties can now refer, a common language and usable concepts accessible to those from all disciplines to progress the AI safety dialogue.

If you think that goals should be hard-coded, then you must point out why Russell’s warnings about fixed goals are mistaken. If you think that human preferences can always be predicted, then you must explain why centuries of social science research is flawed. And be aware that Russell preempts many of the inadequate slogan-like responses to these concerns.

I found an interesting passage late in the book where the argument is briefly extended from machines to political systems. We vote every few years on a government (expressing our preferences). Yet the government then acts unilaterally (according to its goals) until the next election. Russell is disparaging of this process whereby ‘one byte of information’ is contributed by each person every few years. One can infer that he may also disapprove of the algorithms of large corporate entities with perhaps 2 billion users acting autonomously on the basis of ‘one byte’ of agreement with blanket terms and conditions.

Truly ‘human compatible’ AI will ask us regularly what we want, and then provide that to us, checking to make sure it has it right. It will not dish up solutions to satisfy a ‘goal in the machine’ which may not align with current human interests.

What do we want to want?

The book makes me think that we need to be aware that machines will be capable of changing our preferences (we already experience this with advertising) and indeed machines may do so in order to more easily satisfy the ‘goals in the machine’ (think of online engagement and recommendation engines). It seems that we (thanks to machines) are now capable of shaping our environment (digital or otherwise) in such a way that we can shape the preferences of people. Ought this be allowed?

We must be aware of this risk. If you prefer A to B, and are made to prefer B, then how is this permitted? As Russell notes, would it ever make sense for someone to choose to switch from preferring A to preferring B, given that they currently prefer A?

This point actually runs very deep and a lot more philosophical thought needs to be deployed here. If we can build machines that can get us what we want, but we can also build machines that can change what we want, then we need to figure out an answer to the following deeply thought-provoking question, posed by Yuval Noah Harari at the end of his book ‘Sapiens’: ‘What do we want to want?’ There is no dismissive slogan answer to this problem.

What ought intelligence be for?

In the present context we are using ‘intelligence’ to refer to the operation of machines, but in a mid-2018 blog I posed the question: what ought intelligence be used for? The point is that we are now debating how we ought to deploy AI, but what uses of other kinds of intelligence are permissible?

The process of developing and confronting an intelligence other than our own is cause for some self-reflexive thought. If there are certain features and uses of an artificial intelligence that we wouldn’t permit, then how are we justified in permitting similar goals and methods of humans? If Russell’s claims that we should want altruistic AI have any force, then why do we permit non-altruistic human behaviour?

Are humans ‘human compatible’?

I put down this book agreeing that we need to control AI (and indeed we can, according to Russell, with good engineering). But if intelligence is intelligence is intelligence then must we necessarily turn to humans, and constrain them in the same way so that humans don’t pursue ‘goals inside the human’ that are significantly at odds with ‘our’ preferences?

The key here is defining ‘our’. Whose preferences matter? There is a deep and complex history of moral and political philosophy addressing this question, and AI developers would do well to familiarise themselves with key aspects of it. As would corporations, as would policymakers. Intelligence has for too long been used poorly.

Russell notes that many AI practitioners strongly resist regulation and may feel threatened when non-technical influences encroach on ‘their’ domain. But the deep questions above, coupled with the risks inherent due to ‘goals in the machine’, require an informed and collaborative approach to beneficial AI development. Russell is an accomplished AI practitioner speaking on behalf of philosophers to AI scientists, but hopefully this book will speak to everyone.

Much work ahead to complete New Zealand’s pandemic preparedness

The Global Health Security Index, which considers pandemic threats, has just been published. Unfortunately NZ scores approximately half marks (54/100), coming in 35th in the world rankings, far behind Australia. This poor result suggests that the NZ Government needs to act promptly to upgrade the country’s defences against pandemic threats…

Blog hosted elsewhere: click here to read more

The promise of AI in healthcare

AI has likely applications across every domain of healthcare

The AI Forum of New Zealand has just published a report on AI and Health in the New Zealand context. The report trumpets some of the potential cost savings and efficiencies that AI will no doubt bring to the sector over the next few years. However, there are other interesting findings in the research report worth highlighting.

Enhanced teamwork and patient safety

AI that employs a combination of voice recognition and natural language processing could help monitor and interpret healthcare multidisciplinary team interactions, and help to ensure that all information relevant to a discussion has been raised or acknowledged.

This is important because we know that there are many failures in healthcare teamwork and communication, which often have their root cause in failures of information sharing and barriers to speaking up in a hierarchical team setting. This can and does impact patient safety. Including an AI assistant in future health teams could help overcome barriers to speaking up and sharing information.

Overcoming fallible human psychology

We also know that a range of psychological biases are at work when healthcare staff make decisions. These biases include a tendency to be influenced by recent experience (rather than statistical likelihood) and the tendency to attend to information that confirms the beliefs already held. Furthermore, doctors and other clinicians do not often get feedback on their diagnoses. This can lead to a tendency to increase confidence with experience without a parallel increase in accuracy.

One key promise of medical AI is that it can fill in the gaps in clinical thinking by providing a list of potential diagnoses or management plans with a statistical likelihood that each is correct. This kind of clinical decision support system could overcome one key failure in diagnosis, which is that the correct diagnosis is often not even considered.
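A sketch of how such a decision-support ranking might work in principle. The conditions, findings and probabilities below are entirely hypothetical, chosen only to illustrate the mechanics of a simple naive Bayes ranking; real systems are far more sophisticated.

```python
# Hypothetical priors (prevalence) and per-finding likelihoods; all numbers
# are invented for illustration, not clinical data.
PRIORS = {"influenza": 0.05, "pneumonia": 0.01, "pulmonary embolism": 0.002}
LIKELIHOODS = {
    "influenza":          {"fever": 0.9, "cough": 0.8, "chest pain": 0.1},
    "pneumonia":          {"fever": 0.8, "cough": 0.9, "chest pain": 0.4},
    "pulmonary embolism": {"fever": 0.2, "cough": 0.3, "chest pain": 0.8},
}

def rank_diagnoses(findings):
    """Naive Bayes: posterior proportional to prior x finding likelihoods."""
    scores = {}
    for dx, prior in PRIORS.items():
        p = prior
        for finding in findings:
            p *= LIKELIHOODS[dx].get(finding, 0.01)
        scores[dx] = p
    total = sum(scores.values())
    return sorted(((dx, p / total) for dx, p in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

for dx, prob in rank_diagnoses(["fever", "chest pain"]):
    print(f"{dx}: {prob:.2f}")
# -> influenza: 0.56, pneumonia: 0.40, pulmonary embolism: 0.04
```

The point is not the numbers but that every candidate appears on the list with an explicit likelihood, so the correct diagnosis is at least considered.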

However, in order to embrace these tools clinicians will also need to understand their own fallibility, the psychology of decision making and the very human cognitive processes that underpin these shortcomings. Intelligent digital systems undoubtedly have their own shortcomings, and their intelligence is best suited to particular kinds of problem. Human psychology suffers from complementary blind spots and it is the combination of artificial and biological intelligence that will advance healthcare.

Ensuring safe data

Another issue discussed in the AI Forum’s report is the need to make health data available in a form that AI can consume without risking breaches of privacy. Developers face a significant challenge in finding ways to de-identify free-text (and other) data and present it in a form that is machine readable yet cannot be re-identified, whether intentionally or accidentally.

There is a risk that identifiable data could be used, for example, to prejudice insurance premiums on the basis of factors that people have no control over. The process of de-identification is proving to be very difficult. For example, even with names, addresses and other identifying features removed from clinical records (itself a challenging task, given the many ways such information can be recorded), there is still the possibility that merging datasets, such as mobile phone location data held by, say, Google, with clinical records that note the day and time of appointments, could intentionally or inadvertently identify individuals. Issues such as this need to be solved as we move forward with AI for healthcare.
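A toy illustration of the dataset-merging risk just described, with entirely synthetic data: ‘de-identified’ clinical records carrying only appointment times are joined against named location pings at the clinic, re-identifying the patients.

```python
from datetime import datetime, timedelta

# De-identified clinical records: no names, but appointment times survive.
clinical = [
    {"record_id": "A17", "appointment": datetime(2019, 7, 3, 10, 30)},
    {"record_id": "B42", "appointment": datetime(2019, 7, 3, 14, 0)},
]

# Named location data (e.g. phone pings geofenced to the clinic).
pings = [
    {"name": "J. Smith", "seen_at_clinic": datetime(2019, 7, 3, 10, 24)},
    {"name": "P. Brown", "seen_at_clinic": datetime(2019, 7, 3, 13, 55)},
]

WINDOW = timedelta(minutes=15)

# Join on time proximity: whoever was at the clinic near the appointment
# time is very likely the patient behind the "anonymous" record.
for rec in clinical:
    for ping in pings:
        if abs(rec["appointment"] - ping["seen_at_clinic"]) <= WINDOW:
            print(f"{rec['record_id']} is probably {ping['name']}")
# -> A17 is probably J. Smith
# -> B42 is probably P. Brown
```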

We now need cost-utility analyses

The next step is to catalogue the AI tools we presently have available and begin assessing their potential impact. Funders and provider institutions need to conduct cost-effectiveness analyses of these new tools and prioritise those that increase effectiveness and clinical safety while also saving time and money. Such investments might well take priority over expensive new pharmaceuticals that improve outcomes only marginally.

There is likely to be a lot of low-hanging fruit in routine, repetitive teamwork and diagnostic tasks that AI is suited to assist with, and where the public health dollar will go a long way, benefitting many patients rather than just a few.

AI offers many promising applications in the health setting, and all those involved in the sector would be well advised to read reports such as the AI Forum of NZ’s report on AI and Health in NZ, and to think creatively about how AI might help solve the grand challenges in healthcare in the 21st century.

Pandemic Catastrophe: ‘Lifeboat’ is the wrong metaphor


I recently published an academic paper about island refuges in extreme pandemics with my co-author Nick Wilson. The paper has become the focus of media attention, including articles by Newsweek, Fox News and IFLScience.

Unfortunately, the key argument of the paper has been misconstrued in several reports. The headlines included clickbait such as ‘If A Pandemic Hits, This Is Where Humanity Should Go To Survive’ and ‘Scientists rank safest spots to flee to if apocalyptic outbreak begins to wipe out humanity’.

Our Conclusion

The conclusion of our argument was almost the opposite. We argued that preparations could be made ahead of time, so that the designated island refuge(s) could be closed off almost immediately when signals indicate a catastrophic pandemic is emerging. There would be no fleeing to the ‘lifeboat’ allowed.

In fact, the metaphor of a ‘lifeboat’ is misleading, because people scramble for a lifeboat when the disaster strikes. Our argument (which I explain below for those who have not read our paper) is that the islands most likely to be able to reboot a flourishing technological human culture after the event, should be identified ahead of time, and plans enacted to prevent people arriving when the catastrophe strikes (i.e. through border closure measures).

Information Hazard

In the literature on catastrophic and existential risk mitigation, an ‘information hazard’ is the idea that some information, when spread, actually increases the risk of a catastrophic outcome. The ‘run to the island refuge’ framing is an information hazard. If people actually behaved like this, it would undermine the effectiveness of the refuge and increase the probability of (in this case) human extinction.

Our argument is about preserving technological culture, ensuring the continuation of the ‘human project’ and the flourishing of future generations. It is not about saving people around the world at the time of the catastrophe.

The fact that any particular people might survive the event is incidental to the bigger picture that some people could survive and ensure a future full of human beings and full of technological culture.

Global Representation

We identified three promising options for such an island refuge to preserve humanity. One might argue that it is not fair on the diverse cultures of the world if just Iceland or just Australia, for example, survives the catastrophe.

However, there is nothing preventing plans for a designated refuge from including representation from all the world’s people, perhaps through a system of rotating work visas, so that the refuge hosts, at any given time, a sample of the members of each of the world’s jurisdictions. Whoever happens to be there at the time of the catastrophe would then effectively be ‘locked in’ and would represent their culture as the world is re-booted.

Providing such visas and hosting these diverse people could be an undertaking that the designated refuge nation(s) take on, perhaps in lieu of providing development assistance to the rest of the world, so that the refuge nation is not unfairly burdened should the refuge never be needed (which is also a likely outcome).

Our paper is only the first step in a conversation about how the world might mitigate the risk from extreme pandemics, and we encourage other ideas so that, little by little, we can implement strategies that reduce the probability that such unlikely but devastating events destroy humanity.

The TLDR version of our paper (570 words):

The risk of human extinction is probably rising. Although the background risk from natural events such as supervolcanic eruptions or asteroid strikes has remained constant, the risk from technological catastrophes rises with our technological prowess (think nuclear, nano, bio, and AI technologies), which is by definition unprecedented in human history.

There are a number of reasons to work to preserve the future existence of humanity. These include the value of the vast number of future human lives, the worth of continuing key cultural, scientific and artistic projects that span long periods of time, and the cost-effectiveness of mitigating disaster.

We assume that the continuation of large and advanced civilizations is more valuable than the persistence of only a handful of subsistence societies.

A pandemic, or perhaps more probably multiple pandemics occurring together, poses a risk to the existence of humanity. The WHO hypothesizes about ‘Disease X’, a disease about which we as yet know nothing, and which could emerge from increasingly accessible biotechnological manipulations. Such an event would fall under Nick Bostrom’s concept of ‘moderately easy bio doom’, where a small team of researchers in a small laboratory, working for a year or so, might engineer a catastrophic biological threat. This would only need to occur once to threaten the entire human species.

The probability of such an event is unknown but is likely to be non-zero. A survey of experts in 2008 put the chance of an existential-threat-level pandemic occurring by the year 2100 at 2%. Two percent of a 10 billion population means we are looking at an expected loss of 200 million human lives. Reducing the probability of such an event by even half a percent would be hugely worthwhile, particularly if it can be done with existing resources and within existing governance structures.
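Written out, the expected-loss arithmetic is:

\[
E[\text{lives lost}] = 0.02 \times 10^{10} = 2 \times 10^{8} = 200 \text{ million},
\]

and a half-percent reduction in the probability is worth \(0.005 \times 10^{10} = 50\) million expected lives.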

One way to mitigate the risk of a pandemic killing all humans is to designate some location as a refuge (just as with nuclear fall-out shelters in the Cold War), which is quarantined (i.e. no one goes in or out) as soon as the triggering event is observed. Land borders are easily crossed by those infected with a disease, but islands have escaped pandemics in the past.

Key features required by such an island refuge are: a sufficient population, skills, and resources to preserve technological culture and rebuild society, and self-sufficiency and resilience to ride out the crisis.

We set about trying to identify the most likely locations (at the level of island nations) that satisfy these criteria. To do this we formalized a 9-point scoring tool covering: population size, number of visitors, distance from nearest landmass, risk of natural hazards, infrastructure and resources (using GDP per capita as a proxy), energy and food self-sufficiency, political stability and social capital.

When scored on our 0-1 metric, Australia, then New Zealand, then Iceland appear to be the most promising island refuges.
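A minimal sketch of how such a composite 0-1 score could be computed. The criteria are those listed above; the equal weighting and the illustrative input values are my assumptions here, not the figures from our paper.

```python
CRITERIA = ["population size", "number of visitors", "distance from landmass",
            "natural hazard risk", "infrastructure (GDP per capita proxy)",
            "energy self-sufficiency", "food self-sufficiency",
            "political stability", "social capital"]

def refuge_score(values):
    """Equal-weighted mean of criteria, each pre-normalised to [0, 1]."""
    assert len(values) == len(CRITERIA)
    assert all(0.0 <= v <= 1.0 for v in values)
    return sum(values) / len(values)

# Purely illustrative inputs, not the paper's data:
print(round(refuge_score([0.9, 0.6, 0.8, 0.7, 0.8, 0.9, 1.0, 0.9, 0.8]), 2))
# -> 0.82
```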

A lot of work yet remains to be done. Perhaps there are better interventions to protect the world from an existential level pandemic? Perhaps some other factor(s) is/are critically important to the success of an island refuge that we have not yet included (strong military, the presence of universities or server farms for the world’s knowledge, or some other factor)? How long could the optimal islands hold out? Is there some threshold of population that is essential to ensure that key skills such as nuclear physics or neurosurgery are preserved? We now encourage discussion and debate around these (and many other) issues.

AI, Employment and Ethics


In this post I aim to describe some of the ethical issues around the use of algorithms to make or assist decisions in recruitment and for managing gig employment.

When discussing ethics we are trying to deduce the right thing to do (not the permissible thing, or the profitable thing, or the legal thing).

AI in recruitment

Consider Usha, who is a software engineer specialising in machine learning. Let’s imagine, for the purposes of this example, that she is in fact the most qualified and experienced person in the applicant pool for an advertised position, and would perform the best in the role out of the entire applicant pool. In her application:

  • She uses detectably ‘female’ language in her resume
  • She notes she didn’t start coding until the age of 18
  • She was the founding organiser of LGBTQ on campus

She also has a non-Western name and her dark skin tone made it difficult for an AI system to register her affect during a recorded video interview with a chat bot.

Faced with this data, an AI recruitment algorithm screened her out. She doesn’t get the job. She didn’t even get a face-to-face interview. Given the circumstances, many of us might think this was wrong.

Perhaps it is wrong because some principles such as fairness, or like treatment of like, or equality of opportunity have been transgressed. Overall, an injustice seems to have occurred.

Algorithmic Injustice

In his book Future Politics, Jamie Susskind lays out the various ways in which an algorithm could lead to unjust outcomes.

  • Data-based injustice: where problematic, biased or incomplete data leads the algorithm to decide unfairly
  • Rule-based injustice
    • Overt: the algorithm contains explicit rules discriminating against some people, e.g. discriminating against people on the basis of sexual orientation.
    • Implicit: the algorithm discriminates systematically against some kinds of people due to correlations in the data, e.g. screening out those who didn’t start learning to code until after the age of 18 might discriminate against women due to social and cultural norms (see the sketch after this list).
  • The neutrality fallacy: equal treatment for all people can propagate entrenched bias and injustice in our institutions and society.
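A toy illustration of the ‘implicit’ case, using entirely synthetic data: a facially neutral screening rule (started coding before 18, echoing Usha’s case above) acts as a proxy that filters out one group disproportionately, even though the rule never sees the protected attribute.

```python
# Synthetic applicant pool. "group" stands in for any protected attribute;
# the screening rule never reads it, yet still discriminates via correlation.
applicants = [
    {"group": "A", "coding_start_age": 12, "qualified": True},
    {"group": "A", "coding_start_age": 14, "qualified": False},
    {"group": "A", "coding_start_age": 16, "qualified": True},
    {"group": "B", "coding_start_age": 18, "qualified": True},
    {"group": "B", "coding_start_age": 21, "qualified": True},
    {"group": "B", "coding_start_age": 19, "qualified": False},
]

# Facially neutral rule: screen out anyone who started coding at 18 or later.
passed = [a for a in applicants if a["coding_start_age"] < 18]

for g in ("A", "B"):
    total = sum(a["group"] == g for a in applicants)
    kept = sum(a["group"] == g for a in passed)
    print(f"group {g}: {kept}/{total} pass the screen")
# -> group A: 3/3 pass; group B: 0/3 pass, including two qualified applicants.
```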

Susskind notes that most algorithmic injustice can be traced back to actions or omissions of people.

Human Rights

Another way of formulating algorithmic ethics is in terms of human rights. In this case, rather than look to the outcome of a process to decide whether it was just or not, we can look to the process itself, and ask whether the applicant’s human rights have been respected.

In a paper titled ‘Artificial Intelligence & Human Rights: Opportunities & Risks’, the Berkman Klein Center for Internet & Society at Harvard concludes that the following rights could be transgressed by the use of algorithms for recruitment. The rights to:

  • Freedom from discrimination
  • Privacy
  • Freedom of opinion, expression and information
  • Peaceful assembly and association
  • Desirable work

But it might be the case that ongoing ethical discourse could lead us to new rights in the age of AI, perhaps:

  • The right to not be measured or manipulated?
  • The right to human interaction?

Ethical Systems

The foundations for reasoning about rights transgressions or whether outcomes are just or unjust are found in ethical systems. Such systems have been constructed and debated by philosophers for centuries.

The Institute of Electrical and Electronics Engineers (IEEE) recognises this, and in its (300-page!) report ‘Ethically Aligned Design’ it identifies and describes a number of Western and non-Western ethical systems that might underpin considerations of algorithmic ethics. The IEEE notes that ethical systems lead us to general principles, and general principles define imperatives. The report lists and explains eight general principles for ethically aligned design.

One example of an ethical system is what might broadly be considered the ‘consequentialist’ system, which determines right and wrong according to consequences. A popular version of this approach is utilitarianism, the ethical approach that seeks to maximise happiness. As an example, under utilitarianism, affirmative action can be good for society as a whole, enriching the experience of college students and enhancing representation in public institutions and making everyone happier in the end. This approach tends to ensure that we act ‘for the greatest good’.

Another example of an ethical system is deontology, or ‘rules based’ ethics. Kantianism is a version of deontology which argues that ethical imperatives come from within us as human beings, and that the right thing to do boils down to ensuring that we treat all people with dignity, ensuring they are never a mere means to an end but an end in themselves. This approach tends to lead to formulations of rights and duties. For example, it would be wrong to force someone to work without pay (slavery), because this fails to respect their freedom, autonomy, humanity and ultimately their dignity, irrespective of the outcomes.

In their report, where the IEEE deduces their general principles of ethically aligned design from a foundation of these ethical systems, the authors note that, “the uncritical use of AI in the workplace, and its impact on employee-employer relations, is of utmost concern due to the high chance of error and biased outcome.”

The IEEE approach is not the only published declaration of ethical principles relevant to algorithmic decision making. The Berkman Klein Center has catalogued and compared a number of these from public and private institutions.

The Gig Economy

Let’s turn now to gig work. Think of Rita, a gig worker for a hypothetical home-cleaning business that operates much like Uber. Rita’s work is monitored by GPS to ensure she takes the most direct route to each job; she’s not sure whether the company tracks her when she’s not working. Only time spent cleaning each house is paid, and the algorithm keeps very tight tabs on her activities. Rita gets warning notifications if she deviates from the prescribed route, such as when she needs to pick her son up from school and drop him at the babysitter. She gets ratings from clients, but one woman, a historical ethnic rival, always rates her low even when she does a good job, and the algorithm warns her that it’s her last chance to do better. Rita stresses about the algorithm, feels constantly anxious and enters a depression. She misses work, has no sick pay to draw upon, and spirals downward.

We may conceive of such algorithms as ‘mental whips’ and feel concerned that when acting punitively they may be taking data out of context. Furthermore, the ethically appropriate response from the algorithm to, say, an older worker who falls ill might well be different from that to a wayward youth who slacks off. Justice may not be served by equal treatment.

Phoebe Moore has noted that “[such] human resource tool[s] could expose workers to heightened structural, physical and psychosocial risks and stress.” This is worse if workers feel disempowered.

Surveillance and the Panopticon

Many of the issues around gig management algorithms boil down to issues of surveillance.

Historic surveillance had limitations (e.g. a private detective could only investigate one employee at a time). With technological advances, however, we can consider surveillance taken to its purest extreme. This is the situation Jeremy Bentham imagined with his panopticon: a perfect surveillance arrangement in which one guard could observe all prisoners in a prison (or workers in a factory, for that matter) at all times, without being seen themselves. As soon as workers know this is the situation, their behaviour changes. When a machine is surveilling people, people serve the machine, rather than machines serving people.

The panopticon is problematic for a number of reasons. First, it rests on an unfounded assumption of innate shirking. Second, there may be no right to disconnect (especially if the employer performs 24/7 surveillance of social media).

As with Rita there are risks that surveillance data can be taken out of context. We also know that the greater the surveillance, the greater the human demands for sanctions on apparent transgressions.

Finally, the system lacks a countermeasure of ‘equiveillance’, which would allow the working individual to construct their own case from evidence they gather themselves, rather than merely having access to surveillance data that could possibly incriminate them.

Ethically we must ask, who is this situation benefitting? Employment should be a reciprocal arrangement of benefit. But with panopticon-like management of workers, it seems that some interests are held above those of others. Dignity may not be respected and workers can become unhappy. It could be argued that Rita is not being treated as an end in herself, but only as a mere means.

It’s true that Rita chose to work for the platform, and by choosing surveillance, has willingly forgone privacy. But perhaps she shouldn’t be allowed to. This is because privacy has group level benefits. A lack of privacy suppresses critical thought and critical thought is necessary to form alliances and hold those that exploit workers to account.

As a society we are presently making a big deal about consumer privacy, but what about employee privacy and protections? Ethics demands that we examine these discrepancies.

We might want to ensure that humans don’t become a resource for machines, with the power relationship reversed, where human behavior (like Rita’s) is triggered by machine activity rather than the other way around. The risk is not that robots will take our jobs; it is that we will become the robots, living ultra-efficient but dehumanized lives.

Ghost Work author Mary Gray says, “[one] problem is that the [algorithmic gig] work conditions don’t recognize how important the person is to that process. It diminishes their work and really creates work conditions that are unsustainable.” This argument contains both consequentialist and deontological points against overzealous algorithmic management of people.

Is there a duty to use algorithms?

I’ve called into question some of the possible uses of algorithms in recruitment and in managing the gig economy. Potential injustice seems to lie in wait everywhere: in bad data, in implicitly unjust rules, and even in neutral rules. But when are algorithms justified? What if customer satisfaction really is ‘up 13%’? Is this an argument for preserving the greatest happiness at the expense of a few workers? Or perhaps techniques for ‘ethically aligned design’ could lead to systems that overcome the ‘discriminatory intent’ in people and also enhance justice (dignity) in the process.

“We can tweak data and algorithms until we can remove the bias. We can’t do that with a human being,” – Frida Polli, CEO Pymetrics.

However, the duty to respect human dignity may require some limitations on the functions and capability of AI in recruitment and in the management of gig work. We need to examine what those limitations should be.

Australia’s Chief Scientist, Dr Alan Finkel, has proposed the ‘Turing Certificate’, a recognised mark for consumer technologies that would indicate whether the technology adheres to certain ethical standards. This discussion should be ongoing.

Finally, the irony that we implement oversight and regulatory force to combat the use of surveillance and algorithmic force is not lost on me…

Guyon, we are the problem, not Pharmac


Guyon Espiner has written two pieces in the last week about Pharmac, New Zealand’s medicines buying agency. They can be found here and here.

Both pieces read as attacks, claiming that the ‘secret’ black-box processes Pharmac uses to justify funding some medicines and not others are causing people to die.

I do not work for Pharmac, and beyond being a citizen of New Zealand, I have no vested interest in how Pharmac operates. These attacks are unfair and completely miss the point. By singling out Pharmac as the bogey, those of us (yes, all of us) who are actually the problem get away free.

Is there even a problem?

Pharmac’s processes are consistent with international best practice in health funding prioritisation, and its objectively determined ‘incremental cost-effectiveness ratios’, or ICERs, are in line with how health researchers around the world determine value for money. This is what produces the ‘quality-adjusted life years per one million dollars’ benchmark that Pharmac bases most of its decisions upon. There is provision for case-by-case exceptions as well. There is nothing untoward here. And in many cases the results of these calculations need to be kept secret for commercial reasons.
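For readers unfamiliar with the mechanics, the ICER is the extra cost per extra unit of health gained. The formula below is the standard definition; the worked numbers are hypothetical, not Pharmac figures:

\[
\text{ICER} = \frac{C_{\text{new}} - C_{\text{current}}}{\text{QALYs}_{\text{new}} - \text{QALYs}_{\text{current}}}.
\]

A medicine costing $40,000 more per patient than the status quo and delivering 0.8 extra QALYs has an ICER of \(40{,}000 / 0.8 = \$50{,}000\) per QALY, or equivalently 20 QALYs per $1 million, which is exactly the kind of benchmark figure funding options can be ranked by.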

The result of this process? Well, the life expectancy of New Zealanders is about 82 years. In the USA, where every drug you can imagine is available, it is 79 years (and falling). So the New Zealand health system is doing something very right (and this includes prevention of disease and healthy lifestyle promotion, which are all part of the health budget).

So where does the problem (if indeed there is one) lie? It lies with the amount of money Pharmac is allocated. To repeat: Pharmac is doing the very best it can with the budget it has. So any finger-pointing needs to be directed at David Clark, the Minister of Health (who allocates the health funding received), and Grant Robertson, the Minister of Finance (who allocates funding to health). These Ministers in turn must direct the question to the New Zealand public. Political leaders need to offer realistic explanations of the options, without spin, to the public.

How much money should Pharmac get?

So how much money should Pharmac get? Well, that depends. It depends on our (the New Zealand public’s) values and our preferences. If we value the health of those dying of cancer, and we prefer that expensive life-extending medicine be funded, then that is a completely reasonable position to take. But if these values and preferences mean that the budget for Pharmac must rise, then it is not enough to merely complain and call Pharmac unfair. That is the coward’s way out.

We must propose a preferred policy for resource allocation. If we prefer that money is taken out of education and used to buy cancer drugs, then we should say that. If we prefer that Superannuation is reduced or the age of eligibility is raised, then we should say that. If we prefer that road safety initiatives are scrapped (and the road toll rises) then we should say that. If we prefer that income tax is raised, then we should say that. If we prefer to pay more GST, then we should say that. But we are silent.

If we want the government to increase funding for Pharmac, then we need to state unequivocally what we will trade and be specific. Perhaps it is best that we raise the age of entitlement for NZ Superannuation to 67 or 68. This could be a small price to pay so that those in desperate need can access medicines. Or perhaps we forgo special education support for children with learning difficulties, or perhaps we’d rather have less disposable income and simply raise taxes.

Funding of medicines is up to us

We need to vocalise our priorities and elect a government consistent with that prioritisation. Alternatively the government should offer us clear choices, and respect our decisions. It is up to us to determine how much Pharmac can ultimately spend on medicines. We should not be attacking the organisation that actually drives medicine prices down, and gets us the very best deal possible on hundreds of life-saving products.

All of the above are viable solutions to the ‘Pharmac problem’, but this just shows that Pharmac is not the problem; indeed, Pharmac is the solution to the constraints imposed. The resourcing problem is us, and our unwillingness to put forward suggestions for managing the opportunity cost of buying more medicines. Given that attack politics is not constructive, I’ve previously suggested some policies relevant to Pharmac, including factors for consideration here, why we shouldn’t fund some expensive medicines here, and ways to decrease the cost-burden of cancer in New Zealand here. We need solutions, not criticisms.

It doesn’t matter where you draw the value-for-money line; there will always be medications that sit above it. If Pharmac’s budget were doubled, there would still be patients who in theory would derive benefit from expensive but unfunded medications. The problem of prioritisation will persist. But if we uncap the Pharmac budget, then other budgets must necessarily have sinking lids.

What will we agree to forgo?

So think of this: perhaps we should all forgo some superannuation to fund a $75,000 drug that extends a cancer patient’s life by 3 months. Or perhaps NZ Super should be means-tested. Those might actually be the morally correct things to do. But again, the crux of the problem isn’t Pharmac; it is us. Whenever the government suggests raising the age of NZ Super, we all scream foul. Whenever the government suggests increasing GST, we all scream foul. Whenever the government suggests cutting services, we all scream foul. In that case, it is not Pharmac that is morally bankrupt; it is us.

Overall, we need to vocalise the solution not the problem. Investigative reporting needs to present this wider societal and political context and not merely act as an advocate for a few.

A Strategy for Future Employment Wellbeing in the face of AI and Digital Transformation


AI, digital technology, and the future of work

Over recent years a number of concerns about the future of work have been raised. Many concerns focus upon the ‘robots will take our jobs’ slogan. Commentators representing technology firms tend to disagree and argue that many more jobs will be created.

Both sides are right, and we need a strategy to manage the transition to a world of employment dominated by artificial intelligence and digital technology.

The Issues surrounding ‘AI’ and the Future of Employment

  • The problem is wider than just ‘AI will take our jobs’; the issue concerns technology in general. Automation does not require AI, merely technology.
  • Almost certainly many new jobs will be created as other jobs and aspects of jobs become automated, but the key problem will be the mismatch between the needs of employers and the skills of workers.
  • The estimated cost to re-skill workers in the US is $34 billion (simply scaling this to the New Zealand population indicates an immediate $500 million investment is needed; see the scaling sketch after this list).
  • Economic growth requires workers to have jobs (in part because profits accruing to those that own capital tend to accumulate rather than re-enter the economy).
  • The developing world will suffer more from automation as it relies on a disproportionate number of manufacturing jobs.
  • Whole industries will vanish, so it will be irrelevant whether particular jobs within them are automated or not.
  • Many countries are showing a slowing of population growth (and the transition to below-replacement populations). This will interact with jobs and employment.
  • Our present work paradigm links work to income, status and wellbeing. We don’t want to forgo any of these without an alternative strategy.
  • Unemployment can lead to stigma and shame. These social ills go beyond loss of income and aren’t easily substituted.
  • The value of work goes beyond income, and we don’t want to forgo this value, if we do then even with an abundance of wealth the future is inherently diminished.
  • Immigration will not be a sustainable solution in the face of global demand for skilled workers.
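The re-skilling scaling in the list above, written out (assuming populations of roughly 330 million and 5 million, and ignoring currency and wage differences):

\[
\$34\text{B} \times \frac{5 \times 10^{6}}{330 \times 10^{6}} \approx \$515\text{M} \approx \$500 \text{ million}.
\]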

A Strategy for the Future

To deal with the concerns listed above, and a host of other associated concerns, we will need a strategy to manage the phasing out of some industries, redundancies as jobs and aspects of jobs are automated, and re-skilling of the workforce to allow transition to growth sectors.

We will also need to care for those workers between jobs and those unable to transition to new kinds of work.

We may further need to:

  • Weaken the connection between work and income: a universal basic income (UBI) has been suggested as one possible strategy; this may require new ways of taxing capital, and new conceptions of what constitutes capital.
  • Sever the connection between work and status: we ought to better recognize goodness, kindness, public spiritedness, charity, sustainability and similar traits. By recognizing that many capitalist practices degrade our environment or exploit psychological weaknesses for profit, we can start to recognize that some high ‘status’ individuals are actually anti-community.
  • To encourage innovation and novelty we may need to recognize that just as access to water, sunlight, wind and tides is a right for the entrepreneur, access to data/information, processing power, intelligence, and so on are rights in the digital age.
  • We need to free up knowledge/IP for common use so innovators can stand on the shoulders of giants and devise solutions to pressing problems.
  • We need to curtail the entrenchment of power and growth of inequality because we will need a more equal population for markets to function as intended.
  • We may want to protect the right to work (and therefore be productive in the programmes described next).
  • Beyond this right, we should move to an obligations-based economy, rather than a rights-based one, where companies are obliged to sustain a quota of jobs based on turnover, or to fund government programmes of socially valuable jobs/stipends (environmental care programmes, elder and childcare, teaching, the arts, sports, etc.).
  • If it really is true that as many jobs will be created as are lost, then such programmes will never be needed, so if businesses selling the ‘plethora of future jobs’ dream truly believe their promises, then they will have no qualms about supporting this regulation and consumer confidence will grow.
  • If total job numbers do decrease we may need to reduce the working week (and treat work as a scarce resource), perhaps in conjunction with raising the age of superannuation as populations age, allowing us to enjoy leisure time throughout life, rather than all in retirement.
  • We will need to provide training and education to aggressively upskill workers. Finland has already taken some steps towards basic AI training for 1% of its population. New Zealand will also need to grow AI (digital) talent.
  • What businesses want now will change rapidly, so the focus should be on building fundamental capability from the ground up. This will require us to teach AI and digital skills in schools and to the unemployed.
  • To encourage retraining we will need to forgive student debt (and provide low-interest loans and free schemes to redo education).
  • We need tax breaks for the self-employed who undertake relevant courses part time.
  • We need to move to a job market based on skills like interpreting outputs and data, critical thinking, and evaluative thinking (so we can productively work alongside robots).
  • We may need a managed decline of population if jobs really do become scarce. This has the added bonus of solving all kinds of other problems, such as climate change.
  • We may need urgent research into the economics of degrowth (as opposed to recession, see links to commentary on this approach below).
  • We must ensure that worker protections are a trampoline, not a safety net. This should entail short-term, high-intensity investment in those who lose their jobs.
  • We should consider paying users of digital applications for their attention (in an ads based economy).
  • We must recognize that experimentation will be required and we must move past conservatism (remembering that stasis helps the oppressors, never the oppressed).
  • We need to plan for all of the concerns at the start of this blog, so IF they occur we can respond with the plan we have already devised.

Overall we need to:

– Have an ethical debate about the future. We need to decide what we want the future to look like. What would constitute wellbeing? Is it profit, or work? Is it exponential growth, or degrowth?

Further reading: The de-growth economy

See the blog of Jason Hickel, a global inequality researcher and Fellow of the Royal Society of Arts:

https://www.jasonhickel.org/blog/2018/10/27/degrowth-a-call-for-radical-abundance

https://www.jasonhickel.org/blog/2018/11/1/a-simple-solution-to-the-growthdegrowth-debate
