The promise of AI in healthcare

AI has likely applications across every domain of healthcare

The AI Forum of New Zealand has just published a report on AI and Health in the New Zealand context. The report trumpets some of the potential cost savings and efficiencies that AI will no doubt bring to the sector over the next few years. However, there are other interesting findings in the research report worth highlighting.

Enhanced teamwork and patient safety

AI that employs a combination of voice recognition and natural language processing could help monitor and interpret interactions within multidisciplinary healthcare teams, helping to ensure that all information relevant to a discussion has been raised or acknowledged.

This is important because we know that there are many failures in healthcare teamwork and communication, which often have their root cause in failures of information sharing and barriers to speaking up in a hierarchical team setting. This can and does impact patient safety. Including an AI assistant in future health teams could help overcome barriers to speaking up and sharing information.

Overcoming fallible human psychology

We also know that a range of psychological biases are at work when healthcare staff make decisions. These include a tendency to be influenced by recent experience (rather than by statistical likelihood) and a tendency to attend to information that confirms beliefs already held. Furthermore, doctors and other clinicians seldom get feedback on their diagnoses, so confidence can grow with experience without a parallel increase in accuracy.

One key promise of medical AI is that it can fill in the gaps in clinical thinking by providing a list of potential diagnoses or management plans with a statistical likelihood that each is correct. This kind of clinical decision support system could overcome one key failure in diagnosis, which is that the correct diagnosis is often not even considered.
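To make the idea concrete, here is a minimal sketch of how such a decision-support system might rank candidate diagnoses using Bayes’ rule. Everything here (the conditions, findings, priors and likelihoods) is invented for illustration; a real system would estimate these from clinical data.

```python
# Minimal sketch of a Bayesian differential-diagnosis ranker.
# All priors and likelihoods below are invented for illustration;
# a real system would estimate them from clinical data.

def rank_diagnoses(priors, likelihoods, findings):
    """Return diagnoses sorted by posterior probability given observed findings.

    priors: dict mapping diagnosis -> prior probability
    likelihoods: dict mapping (diagnosis, finding) -> P(finding | diagnosis)
    findings: list of observed findings
    """
    posteriors = {}
    for dx, prior in priors.items():
        p = prior
        for f in findings:
            p *= likelihoods.get((dx, f), 0.01)  # small default for unmodelled findings
        posteriors[dx] = p
    total = sum(posteriors.values()) or 1.0  # normalise so posteriors sum to 1
    return sorted(((dx, p / total) for dx, p in posteriors.items()),
                  key=lambda item: item[1], reverse=True)

priors = {"influenza": 0.05, "pneumonia": 0.01, "common cold": 0.20}
likelihoods = {
    ("influenza", "fever"): 0.9, ("influenza", "cough"): 0.8,
    ("pneumonia", "fever"): 0.8, ("pneumonia", "cough"): 0.9,
    ("common cold", "fever"): 0.2, ("common cold", "cough"): 0.6,
}

for dx, posterior in rank_diagnoses(priors, likelihoods, ["fever", "cough"]):
    print(f"{dx}: {posterior:.2f}")
```

The point is not the particular numbers but the behaviour: every plausible diagnosis appears in the ranked list with a probability attached, so the correct one is at least put in front of the clinician rather than never being considered.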

However, in order to embrace these tools, clinicians will also need to understand their own fallibility, the psychology of decision making, and the very human cognitive processes that underpin these shortcomings. Intelligent digital systems undoubtedly have their own shortcomings, and their intelligence is best suited to particular kinds of problems. Human psychology suffers from complementary blind spots, and it is the combination of artificial and biological intelligence that will advance healthcare.

Ensuring safe data

Another issue discussed in the AI Forum’s report is the need to make health data available in a form that AI can consume without risking breaches of privacy. Developers face a significant challenge in finding ways to de-identify free-text (and other) data and present it in a form that is both machine readable and unable to be re-identified, whether intentionally or accidentally.

There is a risk that identifiable data could be used, for example, to prejudice insurance premiums on the basis of factors that people have no control over. De-identification is proving very difficult in practice. Even with names, addresses and other identifying features removed from clinical records (itself a challenging task, given the many ways such information can be recorded), merged datasets could still identify individuals: mobile phone location data held by, say, Google, combined with clinical records that note the day and time of appointments, could intentionally or inadvertently re-identify a patient. Issues such as this need to be solved as we move forward with AI for healthcare.
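To illustrate why this is hard, here is a minimal sketch (with an invented clinical note and deliberately simplistic patterns) of naive identifier redaction, and of the quasi-identifier it leaves behind:

```python
import re

# Naive redaction of obvious identifiers in free-text clinical notes.
# The patterns below are illustrative only; real notes record names,
# addresses and dates in far more ways than these regexes cover.

NAME = re.compile(r"\b(Mr|Mrs|Ms|Dr)\.? [A-Z][a-z]+\b")
PHONE = re.compile(r"\b\d{2,3}[- ]\d{3}[- ]?\d{4}\b")

def redact(note: str) -> str:
    note = NAME.sub("[NAME]", note)
    note = PHONE.sub("[PHONE]", note)
    return note

note = "Mr Smith (ph 021-555-0199) seen 2018-03-14 10:30 at the renal clinic."
print(redact(note))
# -> "[NAME] (ph [PHONE]) seen 2018-03-14 10:30 at the renal clinic."

# The appointment timestamp survives redaction. Anyone holding location
# data that places one phone at the clinic at 10:30 on 2018-03-14 can
# re-identify this "anonymous" record with a simple join on time and place.
```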

We now need cost-utility analyses

The next step is to catalogue the AI tools presently available and begin assessing their potential impact. Funders and provider institutions need to conduct cost-effectiveness analyses of these new tools and prioritise those that increase effectiveness and clinical safety while also saving time and money. Such investments might well take priority over expensive new pharmaceuticals that improve outcomes only marginally.
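As a worked illustration of the comparison funders would run, here is a minimal sketch of an incremental cost-effectiveness calculation (cost per quality-adjusted life year, or QALY, gained). All figures are invented:

```python
# Incremental cost-effectiveness ratio: extra cost per extra QALY gained.
# All figures below are invented for illustration only.

def icer(delta_cost: float, delta_qalys: float) -> float:
    """Cost per quality-adjusted life year gained by the new intervention."""
    return delta_cost / delta_qalys

# A hypothetical AI triage tool: modest cost, broad benefit.
ai_tool = icer(delta_cost=2_000_000, delta_qalys=4_000)

# A hypothetical new pharmaceutical: large cost, marginal benefit.
new_drug = icer(delta_cost=50_000_000, delta_qalys=1_000)

print(f"AI tool:  ${ai_tool:,.0f} per QALY")   # $500 per QALY
print(f"New drug: ${new_drug:,.0f} per QALY")  # $50,000 per QALY
```

On numbers anything like these, the AI tool buys a hundred times more health per dollar, which is exactly the kind of result that should drive prioritisation.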

There is likely to be a lot of low-hanging fruit in the routine, repetitive teamwork and diagnostic tasks that AI is suited to assist with, where the public health dollar will go a long way, benefitting many patients rather than just a few.

AI offers many promising applications in the health setting, and all those involved in the sector would be well advised to read reports such as the AI Forum of NZ’s report on AI and Health in NZ, and to think creatively about how AI might help solve the grand challenges of healthcare in the 21st century.

Pandemic Catastrophe: ‘Lifeboat’ is the wrong metaphor


I recently published an academic paper about island refuges in extreme pandemics with my co-author Nick Wilson. The paper has become the focus of media attention, including articles by Newsweek, Fox News and IFLScience.

Unfortunately, the key argument of the paper has been misconstrued in several reports. The headlines included clickbait such as ‘If A Pandemic Hits, This Is Where Humanity Should Go To Survive’ and ‘Scientists rank safest spots to flee to if apocalyptic outbreak begins to wipe out humanity’.

Our Conclusion

The conclusion of our argument was almost the opposite. We argued that preparations could be made in advance, so that the designated island refuge(s) could be closed off almost immediately when signals indicate that a catastrophic pandemic is emerging. No fleeing to the ‘lifeboat’ would be allowed.

In fact, the metaphor of a ‘lifeboat’ is misleading, because people scramble for a lifeboat when disaster strikes. Our argument (which I explain below for those who have not read our paper) is that the islands most likely to be able to reboot a flourishing technological human culture after the event should be identified ahead of time, and plans enacted to prevent people from arriving when the catastrophe strikes (i.e. through border closure measures).

Information Hazard

In the literature on catastrophic and existential risk mitigation, an ‘information hazard’ is information whose spread actually increases the risk of a catastrophic outcome. The ‘run to the island refuge’ framing is an information hazard: if people actually behaved like this, it would undermine the effectiveness of the refuge and increase the probability of (in this case) human extinction.

Our argument is about preserving technological culture, ensuring the continuation of the ‘human project’ and the flourishing of future generations. It is not about saving people around the world at the time of the catastrophe.

Which particular people survive the event is incidental to the bigger picture: that some people could survive and ensure a future full of human beings and of technological culture.

Global Representation

We identified three promising options for such an island refuge to preserve humanity. One might argue that it is not fair on the diverse cultures of the world if just Iceland or just Australia, for example, survives the catastrophe.

However, there is nothing preventing plans for a designated refuge from including representation from all the world’s peoples, perhaps through a system of rotating work visas, so that the refuge hosts, at any given time, a sample of the members of each of the world’s jurisdictions. Whoever happened to be there at the time of the catastrophe would then effectively be ‘locked in’ and would represent their culture as the world is rebooted.

Providing such visas and hosting these diverse people could be an undertaking that the designated refuge nation(s) take on, perhaps in lieu of providing development assistance to the rest of the world, so that the refuge nation is not unfairly burdened should the refuge never be needed (which is also a likely outcome).

Our paper is only the first step in a conversation about how the world might mitigate the risk from extreme pandemics, and we encourage other ideas so that, little by little, we can implement strategies that reduce the probability that such unlikely but devastating events destroy humanity.

The TLDR version of our paper (570 words):

The risk of human extinction is probably rising. Although the background risk from natural events such as supervolcanic eruptions or asteroid strikes has remained constant, the risk from technological catastrophes rises with our technological prowess (think nuclear, nano, bio, and AI technologies), which is by definition unprecedented in human history.

There are a number of reasons to work to preserve the future existence of humanity. These include the value of the vast number of future human lives, the worth of continuing key cultural, scientific and artistic projects that span long periods of time, and the cost-effectiveness of mitigating disaster.

We assume that the continuation of large and advanced civilizations is more valuable than the persistence of only a handful of subsistence societies.

A pandemic, or perhaps more probably multiple pandemics occurring together, poses a risk to the existence of humanity. The WHO hypothesizes about ‘Disease X’, a disease about which we as yet know nothing, and which could emerge from increasingly accessible biotechnological manipulation. Such an event would fall under Nick Bostrom’s concept of ‘moderately easy bio doom’, where a small team of researchers in a small laboratory, working for a year or so, might engineer a catastrophic biological threat. This would only need to occur once to threaten the entire human species.

The probability of such an event is unknown but is likely to be non-zero. A 2008 survey of experts concluded that a pandemic at the level of an existential threat has a 2% chance of occurring by the year 2100. Two percent of a population of 10 billion implies an expected loss of 200 million human lives. Reducing the probability of such an event by even half a percentage point would therefore be hugely worthwhile, particularly if it can be done with existing resources and within existing governance structures.
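The expected-value arithmetic behind those figures, as a worked example (the 10 billion population is the round figure assumed above):

```python
p_pandemic = 0.02            # expert-estimated chance of an existential-level pandemic by 2100
population = 10_000_000_000  # assumed world population of 10 billion

# Expected lives lost = probability of the event x lives at stake.
print(f"{p_pandemic * population:,.0f} expected lives lost")   # 200,000,000

# Shaving half a percentage point off that probability is worth:
reduction = 0.005
print(f"{reduction * population:,.0f} expected lives saved")   # 50,000,000
```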

One way to mitigate the risk of a pandemic killing all humans is to designate some location as a refuge (much as with nuclear fallout shelters in the Cold War), to be quarantined (i.e. no one goes in or out) as soon as the triggering event is observed. Land borders are easily crossed by those infected with a disease, but islands have escaped pandemics in the past.

Key features required by such an island refuge are: a sufficient population, skills, and resources to preserve technological culture and rebuild society, and self-sufficiency and resilience to ride out the crisis.

We set about identifying the locations (at the level of island nations) most likely to satisfy these criteria. To do this we formalized a nine-point scoring tool covering: population size, number of visitors, distance from the nearest landmass, risk of natural hazards, infrastructure and resources (using GDP per capita as a proxy), energy self-sufficiency, food self-sufficiency, political stability, and social capital.

When scored on our 0–1 metric, Australia, then New Zealand, then Iceland appear to be the most promising island refuges.
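For readers who want the mechanics, here is a minimal sketch of how a 0–1 scoring scheme of this kind can be computed: each criterion is normalised to the interval 0–1 and the overall score is the mean. The per-criterion values below are placeholders, not the scores published in our paper.

```python
# Sketch of a 0-1 island-refuge scoring scheme: each criterion is
# normalised to [0, 1] and the overall score is the unweighted mean.
# The per-criterion scores below are invented placeholders, not the
# values published in the paper.

CRITERIA = [
    "population size", "number of visitors", "distance from landmass",
    "natural hazard risk", "GDP per capita", "energy self-sufficiency",
    "food self-sufficiency", "political stability", "social capital",
]

def overall_score(scores: dict) -> float:
    """Mean of the nine criterion scores, each already normalised to 0-1."""
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)

candidates = {
    "Australia":   dict.fromkeys(CRITERIA, 0.9),  # placeholder values
    "New Zealand": dict.fromkeys(CRITERIA, 0.8),
    "Iceland":     dict.fromkeys(CRITERIA, 0.7),
}

for name, scores in sorted(candidates.items(),
                           key=lambda kv: overall_score(kv[1]), reverse=True):
    print(f"{name}: {overall_score(scores):.2f}")
```

An unweighted mean is the simplest choice; whether some criteria (say, food self-sufficiency) should weigh more heavily than others is exactly the kind of question we hope others will debate.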

A lot of work yet remains to be done. Perhaps there are better interventions to protect the world from an existential level pandemic? Perhaps some other factor(s) is/are critically important to the success of an island refuge that we have not yet included (strong military, the presence of universities or server farms for the world’s knowledge, or some other factor)? How long could the optimal islands hold out? Is there some threshold of population that is essential to ensure that key skills such as nuclear physics or neurosurgery are preserved? We now encourage discussion and debate around these (and many other) issues.
