Ideas Blog

The algorithm, the asshole, and the virus

Social media is killing us with COVID-19

In December 2017, six months after writing a paper about how islands like New Zealand should use complete border closure as a rational pandemic response, I was giving a talk about artificial intelligence as a threat to democracy and free will to a room full of philosophers in Dunedin, New Zealand.

At the time, the world had never heard of a company called Cambridge Analytica, nor had any inkling of SARS-CoV-2. Unknown to me, at that exact time, Jaron Lanier was writing a (much better than my talk!) book about the malignant impact of social media on our wellbeing and society. It was called ‘Ten Arguments for Deleting Your Social Media Accounts Right Now’. (Note: Lanier is not the titular asshole of this story.)

After dabbling in the philosophy of AI, I returned to pandemic threats, publishing a blog in November 2019. Right when a bat was infecting the first Wuhan citizen with SARS-CoV-2, the blog provided commentary on the Global Health Security Index (GHSI), an index that conveyed a grim assessment of pandemic preparedness around the world.

In the present blog, I want to draw these disparate and seemingly unrelated strands together. My thesis is that the business model of social media has played a critical causal role in the deaths of probably hundreds of thousands of people due to COVID-19.

‘Delete your social media right now’

Lanier was an early virtual reality developer, and has been involved with Internet2, Google (which bought his company), Second Life, LinkedIn, and a host of other digital projects. He is also a classical composer. He argued in his 2018 book that we would all be better off deleting all our social media accounts ‘right now’, and he himself has no social media accounts at all.

In his entertaining yet serious way Lanier describes the business model of many social media platforms as: ‘Behaviors of Users Modified, and Made into an Empire for Rent’, aka BUMMER.

As we have come to understand, the social media platforms effectively sell users’ attention to advertisers (Lanier says ‘manipulators’), and the platforms deploy ever-evolving algorithms that serve up content shown to enhance ‘engagement’.

The algorithm

What was not foreseen, but is now well understood, is that the algorithms soon learned that serving up individually tailored, controversial, emotional and negative content not only enhanced engagement, but turned us all into self-obsessed assholes, at the same time undermining truth, empathy and happiness (among other things). The details of all this can be found in Lanier’s book. Furthermore, it was not only this Unforeseen Disaster that led to these problems; we also see Gaming the System on the part of content creators (eg, media hacks) and subversive elements (eg, Russian trolls).

Lanier’s arguments for deleting all social media boil down to the following list of negative effects that the BUMMER business model has on individuals and society.

  1. We’re losing free will
  2. We must resist the insanity of our times
  3. We’re becoming assholes
  4. We’re losing truth
  5. What we say is becoming meaningless
  6. We’re losing our empathy
  7. We’re becoming unhappy
  8. We’re losing economic dignity
  9. Politics is becoming impossible
  10. We’re losing our special personhood

One example Lanier gives is the emergence of the Black Lives Matter movement (the first time around), which was facilitated by the internet and social media. So far so good. But then the algorithms determined that there was a subgroup of white, right-leaning, American nationalists who engaged tremendously with the platform whenever they were served BLM content. No doubt this is how future civil wars begin.

Lanier even gives the example of social media’s causal role in the malignant growth of the antivax movement. A perverse effect of the BUMMER algorithms is that not only do they serve up antivax material because it pushes the right (or wrong!) buttons for various people, but online marketplaces like Amazon will then serve up antivax book suggestions because the user has been reading antivax material! This digital perpetual motion machine drives resoundingly crackpot content onto the bestseller list. Lanier puts it plainly in ‘Ten Arguments’ when he states that ‘BUMMER kills’. It literally does.

Which brings us to COVID-19.

The asshole

Donald Trump (the asshole of this story), having evolved into even more of an asshole through his literal addiction to Twitter and its empathy-destroying, asshole-o-genic psychological impacts, proceeded to divide the United States (and therefore the world) on almost every issue to do with COVID-19. Amplification of this messaging by social media algorithms, which served each item to those most likely to be pissed off by it, and by media outlets who created content most likely to piss everyone off (they know how the algorithms work and want their hits), consolidated the in-group/out-group psychology of left and right. Suddenly, a million people are dead worldwide in part because angry Trumpers (or Bolsonaros, or ‘Sovereign Individual’ Australians) won’t wear masks or stay home (nor sacrifice anything of relatively minor import in the interests of public health).

Remember the GHSI I mentioned at the outset, used for scoring health security? Well, the USA topped the index with 83.5/100. Ironic, I know.

The virus

In an article published by Sawyer Crosby (and others) on the ‘Think Global Health’ website, the authors ridicule the GHSI. They basically argue that we clearly have no understanding of how to measure health security if there is demonstrably no correlation between GHSI scores and COVID-19 outcomes (in fact, as at 31 July 2020, there was a correlation: GHSI scores were positively correlated with COVID-19 deaths per capita – at first glance it literally couldn’t get much worse for valid measurement).
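To make concrete what a check like this involves, here is a minimal sketch of computing a correlation coefficient with the Python standard library. The numbers below are purely illustrative stand-ins I have made up, not the actual GHSI scores or death rates; the point is only that a positive coefficient would mean higher ‘preparedness’ scores went with more deaths per capita.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient, pure stdlib."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative made-up numbers only (GHSI score, deaths per 100k):
ghsi =   [83.5, 77.9, 75.6, 54.0, 38.0]
deaths = [47.0, 62.0, 41.0,  0.5,  3.0]
print(round(pearson(ghsi, deaths), 2))  # → 0.9 for these toy figures
```

A strongly positive result like this, on real data, is exactly the ‘worse than useless’ scenario for a preparedness index.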

However, any attempt to rank COVID-19 responses is surely premature. In the first instance, we don’t actually know how many cases there are globally (we do know there are at least 5 million in the USA!). An MIT study indicates that there are probably more than 100 million cases worldwide. This means that present estimates, such as those compiled by Worldometer, are out by an order of magnitude due to undercounting. Many countries are simply unable to count their cases, as noted by this article in the Guardian. The flip side of a low GHSI score is that the country most likely has low capacity for situational awareness as the pandemic hits (and hence reports ‘low’ case numbers).

However, I reiterate, we know for sure that nearly 200,000 US citizens have died. Other reasons why we can’t yet know which countries are doing well include:

  • COVID-19 ‘success’ will depend on the strategy chosen by each country (eg, exclusion, elimination, suppression, or mitigation)
  • countries impacted later can learn from those impacted earlier
  • the pandemic is still accelerating
  • countries that have done well so far may yet be overwhelmed
  • countries performing ‘poorly’ at one point in time may yet look successful in the future, eg, if they develop vaccines and roll out vaccination quickly

So, do we know how to measure (and therefore construct) global health security? Yes. But building the boat and sewing the sails and taming the wind will not get us anywhere if social media is telling people to jump overboard.

Let me be clear. No one invented social media with the intent of throwing people overboard, but the interaction of the algorithms, the assholes and the virus has almost certainly amplified the number of deaths in the United States and likely elsewhere. And those with an interest in a weakened USA are likely fanning the flames. We don’t know yet whether the literal attacks on public health professionals in the US, as described in the Journal of the American Medical Association, were incited by domestic division or by Russian bot armies.

The USA should have been the country best positioned to deal with COVID-19 effectively and safely. Yet social media platforms have hijacked our cognitive biases, produced a general decline in civility in political discourse, and hardened value conflicts. Look at this figure, which contrasts a simulation where a group of people with disparate opinions interact (on the left) to form an agreed opinion, with a simulation (on the right) where algorithmic bias feeds each individual content similar to their already held beliefs. The result, in the presence of algorithmic bias, is a persisting dichotomy of opinion (study available here).
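The dynamic in that figure can be reproduced with a few lines of code. The sketch below is a toy Deffuant-style ‘bounded confidence’ model, not the cited study’s exact simulation: random pairs of agents compromise, but only when their opinions already lie within a `confidence` window of each other. A narrow window is a crude stand-in for an algorithmic feed that only exposes people to views close to their own.

```python
import random

def simulate_opinions(n=100, steps=20000, confidence=1.0, seed=42):
    """Toy bounded-confidence opinion model. Each step, two random
    agents meet; they average their opinions only if those opinions
    differ by less than `confidence`."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        if abs(opinions[i] - opinions[j]) < confidence:
            mid = (opinions[i] + opinions[j]) / 2
            opinions[i] = opinions[j] = mid
    return opinions

def spread(opinions):
    """Width of the opinion distribution: near 0 means consensus."""
    return max(opinions) - min(opinions)

# Unrestricted mixing converges toward consensus...
print(spread(simulate_opinions(confidence=1.0)))
# ...while a narrow 'feed' leaves opinion clusters that never meet.
print(spread(simulate_opinions(confidence=0.2)))
```

With the full window the population collapses to a single shared opinion; with the narrow window, clusters form that can never interact again, which is precisely the ‘persisting dichotomy’ described above.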

We know that people follow public health guidance when they believe that officials understand the public’s values and that ‘people like me’ can help make decisions. When everyone scrolls through a highly individualised social media feed, there is no such thing as ‘people like me’.

In attempting to attain health security we have not adequately accounted for social media and the machinations of BUMMER. We did not consider that people might drink bleach because the president suggested so. We have tried to include measures of ‘political stability’ in the calculus of health security, but we did not consider the supreme assholery of ‘sovereign individuals’.

The future

In the big picture, COVID-19 is a minor blip. The world will go on. But if there ever was a truly existential threat, a devastating bioweapon unleashed across the globe that required everyone to take action to ensure planetary safety, then under the current regimen we are screwed. We need a mechanism for coordinating or it will be the end of us. BUMMER must end. Health security must demand it. We need a period of wisdom and coordination to prepare for the greatest of threats. The alternative is that social media explains the Fermi Paradox.

Policy Quarterly Journal: Focus on COVID-19 and New Zealand

The Victoria University of Wellington policy journal Policy Quarterly has just published a theme issue examining COVID-19 and the New Zealand context.

Articles cover: government, law, globalisation, health, lockdown, the economy and much more.

Included is a paper on ‘Public Health Aspects of the Covid-19 Response and Opportunities for the Post-Pandemic Era’.

In this paper (which I co-authored), we argue that New Zealand’s health outcome appears to be the best in the OECD, but that some other countries made better use of certain control strategies. For example, Taiwan avoided an intense lockdown by focussing more strongly on immediate and intensive border control measures. Taiwan also excelled at digital contact tracing and the widespread use of face masks.

We suggest that in the post-pandemic era there are many opportunities for society to be gained by embedding better plans for controlling future emerging diseases and strengthening public health infrastructure.

Moving forward there is an opportunity to embed other important changes such as a ‘green reset’ and other pro-equity health interventions in system changes that will naturally follow the COVID-19 pandemic.

Worse than COVID-19: More can and must be done to prevent the greatest threats to human survival

Matt Boyd, Nick Wilson

Growing mushrooms: just one way to protect humanity in a period of reduced sunlight

“Governments routinely ignore seemingly far-out risks. Rocked by a global pandemic, they need to up their game” (The Economist, 27 June 2020).

It is not clear whether risks that threaten human extinction have received appropriate attention at the level of international governance. We systematically searched the documents of the UN Digital Library and concluded that they have not. Our results and commentary were recently published in the international journal Risk Analysis. In this blog we give an overview of existential risks, our findings and possible international and national solutions.

Existential risks

The COVID-19 pandemic is clearly a very serious global disaster, but there are threats much more dire than COVID-19. These include threats that have a long history of international attention, such as a US-Russia nuclear war, and also those that are new or less familiar such as technological risks including geoengineering or synthetic biology. In the extreme some large-scale global catastrophes could threaten human extinction. The following is a list of some plausible threats to human survival:

  1. Nuclear winter (the sun is obscured by soot from burning cities following a nuclear war)
  2. Artificial intelligence (AI; machines are developed in the future with goals that are not aligned to those of humanity and wreak havoc)
  3. Synthetic biology (engineering principles are used to produce dangerous biotechnology, including devastating bioweapons)
  4. Geoengineering (modification of the atmosphere or oceans to mitigate climate change goes wrong)
  5. Nanotechnology (nano-scale engineering creates a runaway process that degrades the environment)
  6. Asteroid/comet impacts (a large object(s) collides with the Earth causing mass extinction as with the dinosaurs)
  7. Supervolcanic eruption (massive volcanic eruption causes a decades long drop in Earth’s temperature)
  8. Experimental physics disaster (high-energy physics experiment creates a devastating physical process such as a black hole)

This list of existential risks is not exhaustive and others include: risks of catastrophe due to biodiversity loss, ecosystem collapse, societal collapse, solar storm, a flood basalt event, a close supernova/gamma-ray burst/magnetar explosion, or even attracting the attention of harmful extra-terrestrial intelligence. There are also as yet unknown risks.

Many of these threats are not stand-alone threats but could combine with other risks. We can imagine scenarios where AI is deployed to aid the development of dangerous biotechnology, or where a pandemic emerges in a period of low health security following a nuclear war or comet impact.

Active mitigation of extinction threats is justified by the perspective of long-termism, which is grounded in the vast expected value of future human lives and the common desire to preserve aspects of the “human project,” such as our intergenerational cultural, scientific, and technological endeavours.

Results from our analysis of the UN Digital Library

We examined the UN Digital Library for evidence of any general international discussion about risks that threaten human extinction and also for evidence of discussion of the eight specific existential threats listed above.

Our search for 22 synonyms of existential risk in the UN Digital Library returned 97 relevant mentions. Over two-thirds (69%) of these pertained to nuclear war. Climate change was mentioned 24 times; however, in these cases the context was often an existential threat to island states rather than to humanity as a whole. There were a handful of references to existential threats in the context of general disarmament or weapons of mass destruction.

Strikingly, searches of the UN Digital Library revealed few if any other categories of existential risk raised in a manner that made the threat of human extinction salient.

UN documents that explicitly discuss human extinction have a limited focus

On the basis of the keyword search it appears that the UN has a long history of addressing the threat of nuclear war and has engaged with the threat from comets and near-Earth objects through the Committee for the Peaceful Uses of Outer Space. These results seem to indicate a lack of attention paid to most existential risks.

Why is there little attention to existential risks?

There are clearly competing demands on national and international policymakers. Immediate threats such as regional conflict, trade, poverty, local health and education issues, as well as environmental concerns, weigh heavily and cannot be ignored. Yet the COVID-19 pandemic has demonstrated the economic and human devastation that arises if low probability or infrequent but catastrophic hazards are ignored.

Mitigation of existential risk is a global public good. We’ve seen with climate change how large-scale cooperation is needed to counteract the tendency for markets to undersupply such goods.

It is also the case that international policymakers are not well acquainted with considering human extinction. Theory and frameworks may be necessary to facilitate the right discussions. However, classification frameworks for severe global catastrophic risk scenarios now exist and can aid in exploring the interplay between many interacting critical systems.

We humans are also subject to psychological biases that may prevent action. Time discounting means that we tend to prefer value now over value in the future. This is unfortunate for future people, given the intergenerational nature of the benefits of existential risk mitigation. Future people perhaps stand to benefit most, yet they lack a voice in present policy decisions. This needs to change, and the rights of future generations could be enshrined in the Universal Declaration of Human Rights.

The very fact that humanity has not yet gone extinct might also lead to neglect of extinction threats. However, this would be a mistake. We may just have been lucky to date, and changes to our situation including new technological developments can shift the odds.

Some existential risks are new (such as AI and synthetic biology) and it may take time for them to filter through to political discussions. However, given the potentially long time-lag from substantial and wide-ranging discussion to effective mitigation, this does not mean that attention can be deferred.

International solutions

Early research on existential risks focused on the kinds of threats listed above as isolated exogenous events. However, these hazards cause harm because human societies are vulnerable to harm. Also, large-scale risks are inextricably linked to governance failures; they are not merely challenges for governments to overcome. It is not clear that we are developing, deploying, or governing our technology with enough wisdom. This means that as well as implementing safeguards, we should also expect safety systems to fail and have a backup plan to mitigate the impact and survive these catastrophes if prevention fails (see below).

We are right to continue to be very concerned about nuclear war and major asteroid/comet impacts and should try much harder to prevent them. However, major Earth-impacts (although able to strike at any time) are extremely low probability events. Therefore, we ought to be more concerned over perhaps a five- to ten-year period, with developments in synthetic biology and AI. The power of these technologies is advancing rapidly, and we may need important norms and international regulations to prevent dangerous use by states, institutions or individuals.

There are four obvious things that member nations could lobby the UN to do:

  1. Ensure that relevant bodies exist at the UN, similar to the UN Office for Disarmament Affairs (nuclear weapons), or the Committee for the Peaceful Uses of Outer Space (asteroid/comet impacts), to study and effect mitigation, and to coordinate the response to each specific risk.
  2. Ensure there is an overarching body on existential risk across these committees that addresses existential risk as a category and focuses on vulnerabilities and resilience, rather than any single particular risk. This is important because the probability, magnitude and tractability of each threat vary, and resource allocation must be prioritised. By taking an approach across a portfolio of risks, and working on quantitative risk assessments, which account for hazards and vulnerabilities, this body would then be able to recommend which risks justify greater or lesser immediate resources.
  3. Enshrine the rights of future generations: the UN Human Rights Council might consider options for approaching the rights of future generations. Any such rights, should they be deemed relevant and possibly enshrined in the UN Declaration of Human Rights, could have a significant effect in guiding mitigation action across UN member nations.
  4. Develop a convention against omnicide, and any technology that could facilitate omnicide (such as possession of more than 100 nuclear weapons, possession of particular types of bioweapons, development of environmentally devouring nanomaterials, human germline manipulations causing sterilization, and so on).

National solutions

Given the immense consequences of existential threats, international governance should clearly expend some resources to study how to prevent and mitigate these threats. However, organisations such as the UN arguably have a chequered record of responding to crises. Also, some existential threats may not require a global response. Therefore, national governments, communities and individuals should all do what they can to help mitigate the threat (eg, the US Government unilaterally invests in asteroid detection).

At the level of national government, dedicated departments can study and monitor a portfolio of catastrophic risk, allocating resources to those threats with the largest expected impact (on the basis of probability, magnitude, tractability and neglectedness). In February 2020 we published a paper on AI that discusses this approach, in the context of New Zealand.
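A portfolio approach of this kind can be sketched very simply. The scoring rule and every number below are hypothetical illustrations of the probability/magnitude/tractability/neglectedness idea, not estimates from our paper or from any published risk assessment.

```python
# Hypothetical illustrative figures only (all scaled 0-1 except P):
#                       P(per century)  magnitude  tractability  neglectedness
risks = {
    "nuclear war":         (0.05,        0.9,       0.4,          0.3),
    "engineered pandemic": (0.03,        0.8,       0.6,          0.8),
    "asteroid impact":     (0.0001,      1.0,       0.7,          0.5),
}

def priority(p, magnitude, tractability, neglectedness):
    """Crude expected-impact score: how much harm is at stake, scaled
    by how much a marginal effort could plausibly change the outcome
    and how under-resourced the area currently is."""
    return p * magnitude * tractability * neglectedness

ranked = sorted(risks, key=lambda r: priority(*risks[r]), reverse=True)
print(ranked)
```

Even this toy version shows why a very-low-probability hazard like a major impact event can rank below newer, more tractable and more neglected threats, as argued above.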

Prevention, resilience and recovery

Government action should address prevention, resilience and recovery. Prevention might include multilateral disarmament negotiations, revisions to the International Health Regulations to ensure the world is prepared for a catastrophic pandemic, and regulation and oversight to ensure the safety of technologies such as AI, nanotechnology, geoengineering, and synthetic biology. Prevention might require very rapid action at the time of catastrophe. With COVID-19, New Zealand had the luxury of learning from other nations and imposed border controls just in time. Future catastrophe could strike a country like New Zealand first, and so rehearsal, simulation, and walk throughs of key actions are needed ahead of time.

Resilience could include economic preparedness. In the case of New Zealand, the Earthquake Commission model could be enhanced and extended to all catastrophic threats. An investment of 0.5% of GDP per annum could provide in the order of NZ$100 billion per generation to deal with unprecedented catastrophe and could have been accessed for the COVID-19 recovery. Pandemic reinsurance products briefly existed. These married superannuation funds (which save on pay-outs when there is a lot of death) with businesses (which suffer losses during pandemics). However, these no-brainer products were not popular with businesses that clearly felt pandemic insurance was not needed – but such thinking may now be changing. Many other creative ways to fund catastrophe may exist.
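The ‘NZ$100 billion per generation’ figure is a back-of-the-envelope compounding calculation. The sketch below assumes NZ GDP of roughly NZ$320 billion, a 30-year ‘generation’, and a 5% nominal return; none of these inputs are specified in the text, so treat them as illustrative assumptions.

```python
def fund_balance(annual_contribution, years, annual_return):
    """Future value of a constant annual contribution, compounded
    once per year (contribution added at the end of each year)."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + annual_return) + annual_contribution
    return balance

# Assumed figures: NZ GDP ~ NZ$320bn, 30-year generation, 5% return.
gdp_nzd_bn = 320
contribution = 0.005 * gdp_nzd_bn          # 0.5% of GDP ≈ NZ$1.6bn/year
print(round(fund_balance(contribution, 30, 0.05)))  # ≈ NZ$106bn
```

Under these assumptions, 0.5% of GDP per year does indeed accumulate to roughly NZ$100 billion over a generation.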

Resilience can also be built at the level of individuals, communities, and local governments. Researching and implementing strategies to help individuals and towns ensure food production in a world with a period of reduced sunlight would provide resilience against nuclear winter, supervolcanic eruption and asteroid/comet impact.

Recovery might hinge on some region or population avoiding a global catastrophe and being well-positioned to re-seed the Earth with people, technology and know-how. Partitioning the population to escape a catastrophic pandemic or facilitating survival in islands geographically most likely to endure a period of reduced sunlight could help. International law might need to be addressed. For example, the International Health Regulations actively deter restrictions to travel and trade to combat pandemic disease. This may need to change to empower island nations to close their borders and provide a reservoir of human capital and technological know-how to rebuild civilisation after a catastrophe.

Summary

Existential risks appear neglected by international governance. COVID-19 shows that we must invest time and resources to understand large scale risks. We must also begin preparations to mitigate the most general effects of these threats. This includes implementing appropriate oversight and safety engineering of potentially dangerous technology, building resilience to survive a world with a period of reduced sunlight, and planning to partition humanity so that risks cannot spread to every last grouping of humans.

We risk being limited by our naïvety of many complex processes and may require new methodology and cross-disciplinary work to evaluate these threats. Governments would do well to begin by bringing the full range of domain experts to the table.

Further Reading

  1. Boyd M, Wilson N. Existential Risks to Humanity Should Concern International Policymakers and More Could Be Done in Considering Them at the International Governance Level. Risk Analysis. 2020; online first, doi: 10.1111/risa.13566.
  2. Boyd M, Wilson N. Existential Risks: New Zealand needs a method to agree on a value framework and how to quantify future lives at risk. Policy Quarterly. 2018;14(3):58–65.
  3. Ord T. The Precipice: Existential Risk and the Future of Humanity: Bloomsbury; 2020.
  4. Cotton-Barratt O, Daniel M, Sandberg A. Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter. Global Policy. 2020: doi: 10.1111/1758-5899.12786.

The Global Health Security Index (GHSI) is a good starting point on the road to minimising biological threats

A new paper explains how the GHSI can be used to guide preparations in New Zealand and the Pacific

New Zealand has done well to date in managing and eliminating (for now) COVID-19 disease. However, there have been sensible calls for an inquiry into New Zealand’s response. An inquiry could help determine what worked and what didn’t, what went well, and how we could do better next time. An inquiry should focus on ways to get the same result while suffering less harm (economically, socially, and in terms of health).

The Global Health Security Index

Fortunately, there is a lot of guidance for planning, such as The Global Health Security Index, which in 2019 independently scored countries on 140 items relevant to health security (NZ scored 54/100). This index highlights the diverse factors that go into protecting health by preventing, detecting, and responding to a threat, as well as the robust health system, commitments to international norms, and a well-managed risk environment that reduce the chance of harm.

Work should begin now to determine how the GHSI can help to prevent and manage the next threat. A paper published today (12 June 2020) in the New Zealand Medical Journal does exactly this, by explaining how the new GHSI can help guide New Zealand’s planning for biological threats, and also help enhance Pacific regional health security.

Improving preparations for biological threats

The paper on GHSI published today describes NZ’s score, as well as those of our Pacific neighbours. It outlines how New Zealand might better prepare and improve its resilience against the next threat. It also explains why aiding Pacific nations to enhance regional health security is critical. Given the potential for rapid spread of disease, if some are not prepared then no one is fully prepared.

The risk of anthropogenic threats (lab accidents, malicious use of biological weapons) is high, and the damage from these could be vast. Fortunately, there are many other tools in the GHSI toolkit that ought to be implemented now, to prevent accidental, unforeseen, unprecedented, or malicious risks before they occur.

These include screening of all DNA synthesised to order, consolidation of dangerous materials in a minimum number of laboratories, and standardised biosafety and biosecurity training. We also need a plan to protect vulnerable populations, such as residents in aged care facilities.

We need a generic pandemic plan, fit for all purposes, that is not merely based on pandemic influenza and a ‘let it in and manage it’ approach. Beyond pandemic planning however, there is much that can be done to reduce future biological risk. COVID-19 was nothing like influenza and the next threat could be nothing like COVID-19. Imagination is needed.

Border closures

Perhaps most critically, we need to better understand the criteria for border closure and border management mechanisms. Repeatedly over the last 3 years, prior to COVID-19, the Ministry of Health had been adamant that border closure would never be used. And yet, the first thing that New Zealand (and many other nations) did in the face of the COVID-19 pandemic was to start to close, and then completely close, the border.

It is entirely possible that the economically crippling level 4 lock-down might have been avoided with well-planned, well-rehearsed, early border closure according to pre-determined criteria. More thinking about when to implement border closures is needed.

Beyond GHSI

But we must also think beyond the GHSI because no single tool anticipates everything, and recent events (in NZ and around the world) demonstrate the importance of managing misinformation and disinformation campaigns, of avoiding the politicisation of public health emergencies, of quick decisive action, and of equitable and universal access to healthcare. There is much more that we can do to preserve human health in New Zealand and regionally in the face of inevitable future biological threats.

COVID-19 and Google mobility data in New Zealand and other countries

Following a recent post on the University of Otago’s Public Health Expert blog, Matt from Adapt Research discusses COVID-19 with Radio New Zealand’s ‘The Panel’

Read the Blog here: https://blogs.otago.ac.nz/pubhealthexpert/2020/04/12/changes-in-mobility-in-response-to-the-covid-19-pandemic-nz-vs-other-countries-and-the-stories-it-suggests/

Listen to the discussion here (6min): https://www.rnz.co.nz/national/programmes/thepanel/audio/2018742758/google-data-and-what-it-says-about-nz-s-lockdown

Alert Level 4 will minimise long-term pain

The only way to contain coronavirus cases in NZ is to go to Alert Level 4 for a brief period.

New Zealand currently has an unknown number of coronavirus cases wandering around our community. A number of observations support this: first, the experience of many other countries; second, the sudden growth in confirmed cases; third, the cohort of international arrivals (and returning New Zealanders) who came to NZ between March 5 and 15. This means they arrived before the mandatory 14-day quarantine, but still within the virus incubation period. Virtually all our new cases are from this cohort.

We now have a 4-level alert system. This is an excellent tool to get everyone on the same page and clearly to communicate both risk and required actions.

We should be at Level 4 right now.

This would allow us to reduce to Level 3 and then Level 2 as quickly as possible and avoid protracted disruption. I will explain.

The target we are chasing is invisible and our confirmed data is always behind the brute facts of the situation. In a situation like this the ONLY strategy that can succeed is to draw the circle of containment as wide as possible and then move inwards. This means setting Alert Level 4 right now: stay at home.

Over the next 12 or so days, ALL the coronavirus cases would then reveal themselves, with symptoms developing (or, in mild cases, the infection passing almost unnoticed) without anyone else being infected. Once we have identified ALL the cases, they go into isolation and we have stopped the spread.

After two weeks, towns, cities or regions with no cases revealed can be set back to Level 3 and even Level 2 and business can carry on. Towns or regions with cases might hold Level 4 for another week. But everyone can very quickly get back to Level 2.

We can then release the strict controls and just deal with one case at a time as they crop up, all of which (theoretically by that point) will be cases of returning New Zealanders who will reveal their symptoms while in 14-day self-quarantine and this will not be problematic.

We then implement widespread temperature checks, hand washing, stigmatization of coughing etc. But business can carry on.

What this implies

It is worth asking whether Winston Peters’ encouragement of New Zealanders overseas to return is helping or exacerbating the problem. Places like Wuhan now look relatively safe. As long as people don’t transit through airports where they could catch the virus, many places might be safer than attempting to return, with its attendant likelihood of bringing disease back into a quarantine zone.

In the future, when we get warning of a novel outbreak, we ought to immediately and dramatically close down the world for 2-3 weeks, for exactly the reasons I’ve outlined above. This should have happened around Jan 20th. Previous occasions on which we might have taken this approach were SARS, Ebola, MERS, and now coronavirus. That is about 4 times per 20 years, or about 20 times per century. It is far preferable to suffer 2 weeks of GDP losses twenty times than 18 months once. This is a no-brainer, but the mechanisms need to be coordinated ahead of time. A 20:1 false positive rate needs to be seen as acceptable.
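The arithmetic behind this claim can be sketched very roughly. The figures below are illustrative assumptions, not official estimates; the key simplification is that a week of precautionary shutdown and a week of uncontrolled pandemic forgo a comparable fraction of output, so total weeks of disruption can be compared directly.

```python
# Back-of-envelope comparison: twenty short precautionary shutdowns per
# century versus one protracted, uncontrolled pandemic. All figures are
# illustrative assumptions.

def disruption_weeks(events: int, weeks_per_event: float) -> float:
    """Total weeks of economic disruption over a century."""
    return events * weeks_per_event

# ~20 precautionary global shutdowns per century (including false alarms,
# at the 20:1 rate suggested above), 2 weeks each.
precautionary = disruption_weeks(events=20, weeks_per_event=2)

# One uncontrolled pandemic lasting 18 months (~78 weeks).
uncontrolled = disruption_weeks(events=1, weeks_per_event=18 * 52 / 12)

print(precautionary)  # 40
print(uncontrolled)   # 78.0
```

Even under these crude assumptions, a century of frequent false alarms costs roughly half the disruption of a single unchecked pandemic.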

Hopefully, New Zealand decision makers see the logic in this and raise us to Level 4 today, for a period of 2 weeks to minimise our losses.

In the face of catastrophe: The rationale for border closure and other significant steps

New Zealand has just shut its borders to foreign visitors for the first time in history in an attempt to mitigate the impact of COVID-19. In what follows, I outline the case for border closure, the conditions under which it is rational, and then note that this is merely one measure against one catastrophe. Moving forward, we should get used to analysing and preparing responses to such large-scale risks.

Closing the border in a pandemic

Pandemics impose costs on society. These include up-front costs (hospital and ICU costs, lost productivity due to illness, and deaths) and downstream costs (the human and emotional toll, and long-term productivity loss).

Closing the border also imposes costs: disruption, visitor revenue forgone, and more than likely reductions in trade and a business downturn. Importantly, closing the border can also fail. If a significant amount of disease gets into the country anyway (eg from returning kiwis) and there is a substantial outbreak despite border closure, then both the costs of closure and the costs of the pandemic are incurred.

Keep the border open in a pandemic or keep it closed: it’s a lose-lose situation. However, previous research has demonstrated that provided a pandemic is serious enough, meaning a sufficiently large proportion of the population is likely to get sick (say, 40%) and the disease has a sufficiently high case fatality rate (say, very much worse than the Spanish flu of 1918), the costs of border closure pale in comparison to the costs of the pandemic. Border closure is then a no-brainer.

Border ‘filtering’ and border ‘closure’

But there is border closure and then there is border closure. The current New Zealand policy, let’s call it ‘filtering at the border’, allows for the free passage of returning New Zealanders, provided they agree to pass through the ‘airlock’ of 14-day quarantine, and also allows New Zealanders to leave and then return again (although this is advised against). Both processes could bring disease into the country. A policy of ‘complete’ as opposed to ‘filtered’ border closure would prohibit the movement of kiwis too.

Would ‘complete’ border closure ever be justified? Possibly, yes. Imagine if the case fatality rate of COVID-19 were 100% instead of 1%. There could be no justification for risking anyone entering the country at all until the pandemic was over. If such a case ever arose, then immediately sealing the border and becoming a ‘refuge’ would be a completely rational approach.

But back to ‘filtered’ closure. The hope is that by filtering out foreign arrivals, then the impact of COVID-19 will be much less. This could still prove fruitless, either if there is established community transmission from the cases already here, or if returning kiwis bring the virus home and spread it. In our previous research we modelled a scenario where border closure is attempted but fails, resulting in 90% of the unmitigated case-load. For a pandemic of more than 10 times the severity of Spanish flu, this was still a cost-effective measure (minimizing losses) under some assumptions. However, for swine flu (2009) it was never cost-effective to close the border.

How bad is COVID-19?

COVID-19 is an intermediate case. Under the ‘base case’ model of 40% infected, half being asymptomatic and 1% case fatality for the rest, it could cause 1 million symptomatic cases, and 10,000 or more deaths in New Zealand. This is a huge human, social and economic burden with ramifications for our economy for many years. But these costs (the economic ones) only barely balance out the lost visitor revenue from say 6 months of border closure. So, at first glance the case for closure is equivocal.
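The ‘base case’ arithmetic above can be spelled out. The 5 million population figure is an approximation for New Zealand (my assumption); the other parameters are those stated in the text.

```python
# The 'base case' model: 40% infected, half asymptomatic, 1% case
# fatality among symptomatic cases. Population figure is approximate.

population = 5_000_000
infection_rate = 0.40        # 40% of the population infected
symptomatic_fraction = 0.50  # half of infections are asymptomatic
case_fatality = 0.01         # 1% of symptomatic cases die

infected = population * infection_rate
symptomatic = infected * symptomatic_fraction
deaths = symptomatic * case_fatality

print(f"{symptomatic:,.0f} symptomatic cases")  # 1,000,000 symptomatic cases
print(f"{deaths:,.0f} deaths")                  # 10,000 deaths
```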

If trade continues, though curtailed by a policy of border closure, then the numbers become important. If closing the border (successfully) means the pandemic costs you $11 billion less than it would have otherwise (which is the result of modelling 72% infected and 2.3% dying, hospital costs, productivity loss, death, etc), then it is worth closing the border even if there is going to be an $11 billion drop in trade over the period of border closure (about 20% of trade for a six month period).

What is hard to predict, but is becoming clearer, is the global impact of the pandemic on travel and trade. If tourism drops to zero despite our actions, and trade plummets too, then the economic benefits of closure compared to no closure, suddenly appear much greater. You lose nothing by closing the border if there is no tourism and no trade anyway.

These are the numbers that need to be evaluated for a range of scenarios. And on the basis of these numbers COVID-19 looks to be right in the middle, between swine flu and disaster.

This means the decision may come down to factors other than strict economic ones and three factors play a key role: equity, access to care, and the value of a ‘quality adjusted life year’ (basically one year of good health).

Equity, access and value of life

COVID-19 differentially impacts the elderly: they suffer more severe illness on average and bear the brunt of deaths. This may also be true of other minority groups in New Zealand; we don’t know this yet. Our health system places a lot of weight on equitable treatment, and rationing ventilators is not equitable. The best strategy to make sure everyone has a fair chance might be simply to close the border and keep the virus out.

COVID-19 places an immense burden on critical care facilities, and our modelling suggests that with an infinite supply of ICU beds, COVID patients could occupy enough of them to cost the health system $1.9 billion or more. However, we only have enough beds that $270 million worth could ever be continuously in use over a six-month period. So, the analysis suggests costs well in excess of those that would be realised (ie the outbreak seems to get cheaper). But this results in unmet health need, and likely in preventable deaths, which carry a long-term cost themselves. Again, these are the factors that need to be weighed on the scales.
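The ICU capacity point above can be put in simple terms: because bed supply caps deliverable care, realised costs fall while unmet need rises. The figures are those quoted in the text; the ‘unmet fraction’ calculation is my own illustrative reading of them.

```python
# Demand for ICU care vs deliverable capacity over six months,
# using the dollar figures quoted in the text.

unconstrained_icu_cost = 1.9e9  # $ of ICU demand if beds were unlimited
max_realisable_cost = 270e6     # $ of ICU care our beds can deliver

unmet_fraction = 1 - max_realisable_cost / unconstrained_icu_cost
print(f"{unmet_fraction:.0%} of ICU demand would go unmet")  # 86% ...
```

The ‘cheaper’ outbreak is therefore an artefact of rationing: roughly six of every seven dollars of ICU demand simply cannot be met.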

The modelling research we did used Treasury figures for the value of a year of good health, which are derived from Pharmac funding decisions. Basically, how much does Pharmac spend on medicines that lead to a gain in one year of good healthy life?

It may be that we New Zealanders decide to put much more value on healthy life than this (double? triple? it can’t be infinite, or there would never be such a thing as a health budget, just a blank cheque). But this new valuation, if entered into the models, can change the outcome of these economic analyses.

So, closing the border to protect vulnerable populations, to ensure fair access to care, or to save lives can be a very reasonable decision, even when it is not strictly an economically rational one.

Border closure can still fail

Bear in mind that border closure could still fail and the probability of failure should be built into economic modelling. We could still end up with just as many cases as in the no closure scenario. Many other variables in this equation (such as the exact epidemiological parameters of COVID-19 in certain contexts) also remain unknown.

Given the high degree of uncertainty, it may be rational to close the border while the situation is assessed. This is because, although borders can be re-opened at a later date (even almost immediately), we may lose the window of opportunity to close them if we don’t act now. This decision and its precise timing are very difficult, and although detrimental economically, it may be wise to move fast, as the government has in fact done.

But now we risk a very long waiting game. The virus is likely to be controlled in some regions of the world and not in others, resulting in a patchwork of ‘hot’ and ‘cold’ zones and severe travel disruption until a vaccine is available. There is almost a paradox in being too successful. If we keep it out then we must sustain our vigilance for the long haul or risk an outbreak every bit as bad as it can be, just down the track.

Global Catastrophic Risks

COVID-19 is a very harmful event, but it is not a global catastrophic threat. There are worse scenarios waiting in the shadows, and we must be prepared for them. Our experience with COVID-19 and the substantial fall-out will hopefully now prime decision makers to be receptive to the idea of building resilience against catastrophic risk. There are far more devastating biological threats that could arise, whether naturally or through biological manipulation; there are threats from other technologies, such as artificial intelligence (AI), geo-engineering and nuclear technologies; and there are threats from climate change, supervolcanoes, near-Earth objects, and others. COVID-19 has shown us how fragile our just-in-time systems are, and how the pursuit of an extra point of return can leave us undercapitalised, without cash flow, without inventory, at the mercy of global forces.

The government ought to take a risk portfolio approach to global catastrophic threats, and invest in assessing the probability of various risks, their magnitude and how to build resilience against them.

There is no reason why we can’t have an EQC-style fund to protect against risks that hit only once per generation, but hit hard. There is no reason why we couldn’t have walked through scenarios like COVID-19 with all sectors collaborating to identify the bottlenecks such as ventilator availability, testing locations, health workforce, data systems to track those in quarantine and so on.

The current event involves a biological virus, what if the next event involves a digital threat, would we close our internet borders? What is the case for doing so or for not doing so? This all needs to be scenario-ized.

In fact, in a paper published last week, we made this call with respect to AI, and urged the government to address global catastrophic risk in a systematic and pragmatic fashion.

The first significant step is to institutionalise the learnings from COVID-19, the phrase ‘global catastrophic risk’, and a commitment to undertake research, analysis, planning and systems testing much more often and more robustly.

Are Humans ‘Human Compatible’?

Human Compatible

I’ve just spent the last three days reading Stuart Russell’s new book on AI safety, ‘Human Compatible’. To be fair, I didn’t read continuously for three days; the book rewards thoughtful pauses to walk or drink coffee, because it nurtures reflection about what really matters.

You see, Russell has written a book about AI for social scientists that is also a book about social science for AI engineers, while at the same time providing the conceptual framework to bring us all ‘provably beneficial AI’.

‘Human Compatible’ is necessarily a whistle-stop tour of very diverse but interdependent thinking across computer science, philosophy and the social sciences and I am recommending that all AI practitioners, technology policymakers, and social scientists read it.

The problem

The key elements of the book are as follows:

  • No matter how defensive some AI practitioners get, we need to all agree there are risks inherent in the development of systems that will outperform us
  • Chief among these risks is the concern that AI systems will achieve exactly the goals that we set them, even if in some cases we’d prefer if they hadn’t
  • Human preferences are complex, contextual, and change over time
  • Given the foregoing, we must avoid putting goals ‘in the machine’, but rather build systems that consult us appropriately about our preferences.

Russell argues the case for all these points. The argument is informed by an impressive and important array of findings from philosophy, psychology, behavioural economics, and game theory, among other disciplines.

A key problem, as Russell sees it, is that most present-day technology optimizes a ‘fixed externally supplied objective’. This raises issues of safety if the objective is not fully specified (which it can never be) and if the system is not easily reset (which is plausible for a range of AI systems).

The solution

Russell’s solution is that ‘provably beneficial AI’ will be engineered according to three guidelines:

  1. The machine’s only objective is to maximize the realization of human preferences
  2. The machine is initially uncertain about what those preferences are
  3. The ultimate source of information about human preferences is human behaviour

There are some mechanics that can be deployed to achieve such design. These include game theory, utilitarian ethics, and an understanding of human psychology. Machines must defer to humans regularly, ask permission, and their programming will explicitly allow for the machines to be wrong and therefore be open to being switched off.

Agree with Russell or disagree, he has provided a framework to which disparate parties can now refer, a common language and usable concepts accessible to those from all disciplines to progress the AI safety dialogue.

If you think that goals should be hard-coded, then you must point out why Russell’s warnings about fixed goals are mistaken. If you think that human preferences can always be predicted, then you must explain why centuries of social science research is flawed. And be aware that Russell preempts many of the inadequate slogan-like responses to these concerns.

I found an interesting passage late in the book where the argument is briefly extended from machines to political systems. We vote every few years on a government (expressing our preferences). Yet the government then acts unilaterally (according to its goals) until the next election. Russell is disparaging of this process whereby ‘one byte of information’ is contributed by each person every few years. One can infer that he may also disapprove of the algorithms of large corporate entities with perhaps 2 billion users acting autonomously on the basis of ‘one byte’ of agreement with blanket terms and conditions.

Truly ‘human compatible’ AI will ask us regularly what we want, and then provide that to us, checking to make sure it has it right. It will not dish up solutions to satisfy a ‘goal in the machine’ which may not align with current human interests.

What do we want to want?

The book makes me think that we need to be aware that machines will be capable of changing our preferences (we already experience this with advertising) and indeed machines may do so in order to more easily satisfy the ‘goals in the machine’ (think of online engagement and recommendation engines). It seems that we (thanks to machines) are now capable of shaping our environment (digital or otherwise) in such a way that we can shape the preferences of people. Ought this be allowed?

We must be aware of this risk. If you prefer A to B, and are made to prefer B, then on what grounds is that permissible? As Russell notes, would it ever make sense for someone to choose to switch from preferring A to preferring B, given that they currently prefer A?

This point actually runs very deep and a lot more philosophical thought needs to be deployed here. If we can build machines that can get us what we want, but we can also build machines that can change what we want, then we need to figure out an answer to the following deeply thought-provoking question, posed by Yuval Noah Harari at the end of his book ‘Sapiens’: ‘What do we want to want?’ There is no dismissive slogan answer to this problem.

What ought intelligence be for?

In the present context we are using ‘intelligence’ to refer to the operation of machines, but in a mid-2018 blog I posed the question: what ought intelligence be used for? The point is that we are now debating how we ought to deploy AI, but what uses of other kinds of intelligence are permissible?

The process of developing and confronting an intelligence other than our own is cause for some self-reflexive thought. If there are certain features and uses of an artificial intelligence that we wouldn’t permit, then how are we justified in permitting similar goals and methods of humans? If Russell’s claims that we should want altruistic AI have any force, then why do we permit non-altruistic human behaviour?

Are humans ‘human compatible’?

I put down this book agreeing that we need to control AI (and indeed we can, according to Russell, with good engineering). But if intelligence is intelligence is intelligence then must we necessarily turn to humans, and constrain them in the same way so that humans don’t pursue ‘goals inside the human’ that are significantly at odds with ‘our’ preferences?

The key here is defining ‘our’. Whose preferences matter? There is a deep and complex history of moral and political philosophy addressing this question, and AI developers would do well to familiarise themselves with key aspects of it. As would corporations, as would policymakers. Intelligence has for too long been used poorly.

Russell notes that many AI practitioners strongly resist regulation and may feel threatened when non-technical influences encroach on ‘their’ domain. But the deep questions above, coupled with the risks inherent due to ‘goals in the machine’, require an informed and collaborative approach to beneficial AI development. Russell is an accomplished AI practitioner speaking on behalf of philosophers to AI scientists, but hopefully this book will speak to everyone.

Much work ahead to complete New Zealand’s pandemic preparedness

The Global Health Security Index which considers pandemic threats has just been published. Unfortunately NZ scores approximately half marks (54/100), coming in 35th in the world rankings – far behind Australia. This poor result suggests that the NZ Government needs to act promptly to upgrade the country’s defences against pandemic threats…

Blog hosted elsewhere: click here to read more

The promise of AI in healthcare

AI has likely applications across every domain of healthcare

The AI Forum of New Zealand has just published a report on AI and Health in the New Zealand context. The report trumpets some of the potential cost savings and efficiencies that AI will no doubt bring to the sector over the next few years. However, there are other interesting findings in the research report worth highlighting.

Enhanced teamwork and patient safety

AI that employs a combination of voice recognition, and natural language processing could help monitor and interpret healthcare multidisciplinary team interactions and help to ensure that all information relevant to a discussion has been raised or acknowledged.

This is important because we know that there are many failures in healthcare teamwork and communication, which often have their root cause in failures of information sharing and barriers to speaking up in a hierarchical team setting. This can and does impact patient safety. Including an AI assistant in future health teams could help overcome barriers to speaking up and sharing information.

Overcoming fallible human psychology

We also know that a range of psychological biases are at work when healthcare staff make decisions. These biases include a tendency to be influenced by recent experience (rather than statistical likelihood) and the tendency to attend to information that confirms the beliefs already held. Furthermore, doctors and other clinicians do not often get feedback on their diagnoses. This can lead to a tendency to increase confidence with experience without a parallel increase in accuracy.

One key promise of medical AI is that it can fill in the gaps in clinical thinking by providing a list of potential diagnoses or management plans with a statistical likelihood that each is correct. This kind of clinical decision support system could overcome one key failure in diagnosis, which is that the correct diagnosis is often not even considered.

However, in order to embrace these tools clinicians will also need to understand their own fallibility, the psychology of decision making and the very human cognitive processes that underpin these shortcomings. Intelligent digital systems undoubtedly have their own shortcomings, and their intelligence is best suited to particular kinds of problem. Human psychology suffers from complementary blind spots and it is the combination of artificial and biological intelligence that will advance healthcare.

Ensuring safe data

Another issue discussed in the AI Forum’s report is the need to make health data available in a form that AI can consume without risking breaches of privacy. There is a significant challenge facing developers: finding ways to de-identify free-text (and other) data and present it in a form that is both machine-readable and unable to be re-identified, whether intentionally or accidentally.

There is a risk that identifiable data could be used, for example, to prejudice insurance premiums on the basis of factors that people have no control over. The process of de-identification is proving very difficult. For example, even with names, addresses and other identifying features removed from clinical records (itself a challenging task, given the many ways such information can be recorded), there remains the possibility that merging datasets, such as mobile phone location data held by, say, Google, with clinical records that record the day and time of appointments, could intentionally or inadvertently identify individuals. Issues such as this need to be solved as we move forward with AI for healthcare.

We now need cost-utility analyses

The next step is to catalogue the AI tools that we presently have available and begin assessing the potential impact of these systems. Funders and provider institutions need to conduct cost-effectiveness analyses on these new tools and prioritise those that both increase effectiveness and clinical safety while also reducing time and saving costs. These investments might well take priority over investments in expensive new pharmaceuticals that marginally improve outcomes at great additional expense.

There is likely to be a lot of low-hanging fruit in routine, repetitive teamwork and diagnostic tasks that AI is suited to assist with, and where the public health dollar will go a long way, benefitting many patients, not just a few.

AI offers many promising applications in the health setting and all those involved in the sector would be advised to read reports such as the AI Forum of NZ’s report on AI and Health in NZ, and think creatively about how AI might help solve the grand challenges in healthcare in the 21st Century.