Ideas Blog

A response to the AI Forum NZ’s ‘Shaping a Future New Zealand’ report


Last week the AI Forum New Zealand released its report ‘Artificial Intelligence: Shaping a Future New Zealand’. I wish to commend the authors for an excellent piece of horizon scanning, which lays the foundation for a much-needed ongoing discussion about AI and New Zealand, because, as in the Wild West, much about AI remains unknown. Microsoft was at pains to point this out in its ‘The Future Computed’ report published earlier this year. In my reply I comment on some of the content of the AI Forum NZ’s report and also try to progress the discussion by highlighting areas that warrant further analysis. Like all futurism, we can find the good, the bad and the ugly within the report. Click here for a PDF of my full comments below.

The Good

The report has done a thorough job of highlighting many of the opportunities and challenges that face us all in the coming years. It is a necessary and very readable roadmap for how we might approach the issue of AI and New Zealand society. The fact that the report is so accessible will no doubt be a catalyst for meaningful debate.

It was good to see insightful comments from Barrie Sheers (Managing Director, Microsoft NZ) at the outset, which set the tone for what at times was (necessarily) a whistle-stop tour of the web of issues AI poses. Barrie’s comments were nuanced and important, noting that those who design these technologies are not necessarily those who ought to decide how we use them. This is a key concept, which I will expand on below.

The report is generally upbeat about the potential of AI and gives us many interesting case studies. However, the ‘likely’ and ‘many’ benefits of AI certainly do not give us carte blanche to pursue (or approve) any and all applications. We need a measured (though somewhat urgent) approach. Similarly, some of the key threats that AI poses are omitted. For example, AI is suggested as a solution to problem gambling (p. 76), yet AI can also be used to track and persuade problem gamblers online, luring them back to gambling sites. For every potential benefit there is a flip side. AI is a tool for augmenting human ingenuity, and we must constantly be aware of the ways it could augment nefarious intentions.

It was good to see the report highlight the threat of autonomous weapons and the fact that New Zealand still has no clear position on this. We need to campaign forcefully against such weapons, as we did on the issue of nuclear weapons. Consider that in 2010 financial algorithms caused a $1 trillion flash crash of the US stock market, and subsequent analysis has never satisfactorily revealed the reason for the anomaly. Autonomous systems interacting at speed can fail suddenly and inexplicably, and a ‘flash crash’ event involving autonomous weapons is not something we could simply trade out of a few minutes later.

The issue of risk and response lies at the heart of any thinking about the future of AI. One of the six key recommendation themes in the report centers on ‘Law, Ethics and Society’. There is a recommendation to institute an AI Ethics and Society Working Group. This is absolutely critical, and its terms of reference need to provide for a body that persists for the foreseeable future. This working group needs to be tasked with establishing our values as a society, and these values need to shape the emergence of AI. Society as a whole needs to decide how we use AI tools and what constraints we place on development.

Ultimately, there probably ought to be a Committee for AI Monitoring, which distills evidence and research emerging locally and from around the world to quickly identify key milestones in AI development, and applications that pose a potential threat to the values of New Zealanders. This Committee probably ought to be independent of the tech industry, given Barrie Sheers’ comments above. Such a Committee would act as an ongoing AI fire alarm, a critical piece of infrastructure in the safe development of AI, as I discuss further below.

The Bad

Before I begin with the bad, I am at pains to emphasise that ‘Shaping a Future New Zealand’ is an excellent report, which covers a vast array of concepts and ideas, posing many important questions for debate. It is the quality of the report that draws me to respond and engage to further this important debate.

A key question this report poses is whether we will shape or be shaped by the emergence of AI. A key phrase that appears repeatedly in the document is ‘an ethical approach’. These two ideas together make me think that the order of material in the report is backwards in an important way. Re-reading Microsoft’s ‘The Future Computed’ report yesterday made me certain of this.

It may seem trivial, but in the AI Forum’s report, the section on ‘AI and the Economy’ precedes the section on ‘AI and Society’. This is to put the cart before the horse. Society gets to decide what we value economically, and also gets to decide what economic benefits we are willing to forgo in order to protect core values. We (society) get to shape the future, if we are willing and engaged. It is the societal and moral dimension of this issue that can determine what happens with AI and the economy. If we want to ‘shape’ rather than ‘be shaped’ then this is the message we need to be pushing. For this reason I think it is a mistake to give AI and the Economy precedence in the text.

A feature of the writing in this report is the abundance of definite constructions, of the form ‘in the future X will be the case’. This is perhaps dangerous territory when we are predicting a dynamic, exponential system. Looking to Microsoft’s approach, the phrase ‘no crystal ball’ stands out instead.

I’ll digress briefly to explain why this point is so critical. Rapidly developing systems change dramatically, in ways that are not easy for our psychology to grasp. Say you have a jar containing a bacterium (let the bacterium represent technical advances in AI, or the degree to which AI permeates every aspect of our world, or the number of malicious uses of AI, or some such thing). If the bacteria double in number every minute, and fill the jar after an hour, then by the time the jar is a quarter full (you’re really starting to notice it now, and perhaps are predicting what might happen in the future) you only have two minutes left to find another jar, and two minutes after that you’ll need three more jars. In the classic Hanson-Yudkowsky debate about the pace of AI advance, what I’ve just illustrated represents the ‘AI-FOOM’ (rapid intelligence explosion) position. This is a live possibility. The future could still look very different from any or all of our models and predictions.
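
To see how abruptly the numbers move, here is a quick back-of-the-envelope sketch (Python, using only the doubling-per-minute assumption from the jar analogy; the figures illustrate the arithmetic, they are not a forecast of AI progress):

```python
# The jar analogy in numbers: the population doubles every minute and the
# jar is full at minute 60, so a full jar holds 2**60 'units'.
JAR = 2 ** 60

for minute in (50, 55, 58, 59, 60, 61, 62):
    population = 2 ** minute
    print(f"minute {minute}: {population / JAR:.4f} jars' worth")

# At minute 50 the jar is about 0.1% full and looks empty; at minute 58 it
# is a quarter full; two minutes after it first overflows you need four jars.
```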

Furthermore, a disproportionate portion of the AI and the Economy section focuses on the issue of mass unemployment. This is the ‘robots will take our jobs’ debate. The argument here is protracted, far more detailed than any other argument in the document, and the conclusion is very strong. I think this is a mistake. Straining through models and analyses of spurious accuracy to reach the unambiguous conclusion that ‘AI will not lead to mass unemployment’ suggests the conclusion was predetermined. The length of the reasoning (certainly compared to all other sections) conveys an illusion of certainty.

But we’re talking here about a tremendous number of unknowns, including very many of Donald Rumsfeld’s infamous ‘unknown unknowns’, the things we don’t even know we don’t know yet. The modeling projects 20 years through this indeterminate melee, and it is hard to accept such a definite conclusion (I know as much from looking at what past labour market models have predicted and what actually transpired). Prediction is hard, especially about the future. This is why trader, risk analyst and statistician Nassim Taleb encourages us to anticipate anything. The history of the world is shaped by Black Swans: unpredictable events that we rationalize after the fact, but which change everything. The only response to such uncertainty is to build resilience.

I’m not saying that there will be mass unemployment; I’m saying that trying to prove the question one way or the other is a risky approach, and that such a definite conclusion is misplaced. As risk analysts we ought not burn bridges like this. Let’s call a spade a spade. To me the argument in ‘Shaping a Future New Zealand’ appears to be a rhetorical device put forward by those who don’t want us to contemplate massive labour force disruption. If people are afraid for their jobs, they are less likely to authorize AI (and given the moral precedence of society over economy, authorize is the correct term).

But to take this argument even further, why do we fear mass unemployment? It is not because of mass unemployment per se; it is because unemployment can deny people meaningful activity in their lives, and it can also cause economic pain. However, mass unemployment is only one way to cause these things. We should also be considering, for example, other ways that AI might deny us meaningful activity (through mass automation of decisions) or cause economic harm (through financial market collapse following an algorithmic mishap – financial error or financial terror), and so on. Mass unemployment is a side-show to the real discussion around value, meaning and risk that we need to be having.

By concluding that there is no risk, nothing to worry about, we risk being caught off-guard. A safer conclusion, and one that in fact provides much more security for everyone, is simply to acknowledge that the question cannot be settled in advance. Maybe AI leads to mass unemployment, maybe it doesn’t. The problem is that if we don’t plan for what to do in the event, then we have built a fragile system (to use Taleb’s term).

By accepting at least the possibility of mass unemployment, we can invest in resilience measures, pre-empt any crisis, and plan to cope. We put that plan into action if and when the triggering events transpire. What we need is an insurance policy, not to hide our head in the sand. What we need is a fire alarm. That would be the way to allay fears. That would be how to ensure the system is antifragile.

Given the pace of AI innovation and surprising advances, we don’t know how New Zealand will be affected by AI, but we can control what we are comfortable permitting. This is why Society must precede Economy.

In fact this has been a weakness of much contemporary political reasoning. Problems are tackled on an ad hoc basis and assessed in terms of how they might economically benefit us. What is lacking is a set of overarching values that we hold as a society and that we apply to each problem to determine how we must respond (whether or not that response accords with our best economic interests). Max Harris tackles this issue in his recent book ‘The New Zealand Project’.

So I return to the phrase, ‘an ethical approach’ which is the main theme of this report that needs unpacking. We need to decide as a society what our ethical approach is. We need a framework, which will determine whether each instance of AI is good, bad or ugly.

I’ll turn to a concrete example. If I’m being critical (which I am, in the interests of really pushing this debate deeper), there are some important omissions from the report.

Notably, very little mention is made of the advertising and communications industry. This is surprising given recent developments with fake news, the Cambridge Analytica saga and the associated Facebook debacle, all of which are merely the tip of the iceberg of an industry that has already taken advantage of the fact that the public is generally ill-informed about the uses and possibilities of AI. Marketing is turning into manipulation. Attempts are being made to manipulate citizens to behave in ways that exploit them.

It’s debatable to what degree these techniques have succeeded to date, but remember that the bacteria have only been growing in the jar for 58 minutes so far, so the tools are rudimentary. To stick with our analogy, the tools employed by Cambridge Analytica amount to a jar that is only a quarter full; four minutes later we face tools with sixteen times that effect! Look at AlphaGo Zero and think about how the relatively rule-based game of human social networks might be learned, and what the intentions might be of those who control that technology.

The point is that we humans possess a psychology riddled with unconscious heuristics and biases; we are simply not rational, no matter how much we rationalize our behavior. We now face AI systems that on the one hand are dreadfully incompetent compared to ourselves, yet on the other hand have immense knowledge of us and our tendencies. This latter feature creates the potential for a power imbalance in these interactions, with us as the victims. This is the fundamental premise of the industry of nudging, which, when deployed with less than altruistic goals, we can plainly call manipulation.

The AI Forum report contains very little on manipulation and disinformation by AI, or the potential horror scenarios of AI impersonating (convincingly) powerful human individuals. We are going to need to solve the problem of trust and authenticity very quickly, and more importantly, to start to condemn attempts to impersonate and mislead.

We need more discussion about the degree to which we ought to require AI systems with which we interact to disclose their goals to us. Is this system’s goal to make me buy a product? To stop me changing banks? To make me vote for Trump? To maximize the amount I spend online gambling? Perhaps we need regulation requiring that AI systems declare themselves to be AI.

The reason for this is that humans have evolved a very effective suite of defenses against being swindled by other humans, but we are unfamiliar with the emerging techniques of AI. Unlike when I deal with a human, I am unfamiliar with the knowledge and techniques of my potential manipulator. Private interests are going to flock to manipulation tools that allow them to further their interests.

There is one line in the report addressing this issue of manipulation by AI, but it is an important line. The Institute of Electrical and Electronics Engineers is in the process of drafting an engineering standard about ethical nudging. This certainly gets to the heart of this issue, but it remains to be seen what that standard is, what kinds of systems it covers, and who will adopt it. We could have done with such a standard before Cambridge Analytica, but we still need ways to make businesses adhere to it. New Zealand needs to be having values-based discussions about this kind of thing, and we need to be monitoring overseas developments so that we have a say, and do not get dragged along by someone else’s standards.

The Ugly

The report does a good job of laying out the strategies other nations are employing to maximize the probability of good AI outcomes. These case studies certainly make New Zealand look late to the party. However, there is no discussion of what is ultimately needed, which is a global body. We need an internationally coordinated strategy of risk management. This will be essential if nations do not want to be at the receiving end of AI use that they do not condone themselves. This is a coordination problem. We need to approach this from a values and rights perspective, and New Zealand has some successful history of lobbying the globe on issues like this.

The report highlights some potential threats to society, such as bias, transparency, and accountability issues. However, there are many further risks such as those that exploit surveillance capitalism, or threaten autonomy. Given that there are potential looming threats from AI, to individuals open to exploitation, to democratic elections from attempts at societal manipulation, to personal safety from autonomous agents, and so on, what we need is more than just a working group. It is very apparent that we need an AI fire alarm.

Even if we manage to approach AI development ‘in an ethical way’ (there’s that phrase again) and ensure that no one designs AI that seeks to exploit, manipulate, harm or create chaos, we will need to be able to spot such malicious, and quite probably unexpected, acts before they cause damage. Furthermore, many private entities are more concerned with whether their behavior is legal than with whether it is ethical. The difference is substantial. This is why we need a Committee for Monitoring AI. I’ll explain.

Fire is a useful technology with many societal and economic benefits, but it can go wrong. Humans have been worrying about these side-effects of technology since the first cooking fire got out of control and burned down the forest.

Eliezer Yudkowsky has written a powerful piece about warning systems and their relevance to AI. Basically he notes that fire alarms don’t tell you when there is a fire (most times they ring there is no fire). But conversely, the sight of smoke doesn’t make you leap into action, especially if you are a bystander in a crowd (perhaps it’s just someone burning the toast? Surely someone else will act, and so on). What fire alarms do is give you permission to act. If the alarm sounds, it’s OK to leave the building. It’s OK to get the extinguisher. You’re not going to look silly. The proposed AI Ethics and Society Working Group, and my suggested Committee for Monitoring AI, ought to act as fire alarms.

Perhaps a system of risk levels is needed that accounts for the scale of the particular AI risk, its imminence, and the potential impact; a colour-coded system to issue warnings. Importantly, this needs to work at a global, not just local, level due to the threat from outside and the lack of national boundaries for many AI applications. Our global interactions around AI need to extend beyond learning from foreign organisations and sharing gizmos.

Overall, we need to shift the focus around AI innovation from one of rapid development to market, to one concerned with risk and reliability. AI as a technology has more in common with anaesthesia or aviation than with sports shoes or efficient light bulbs. As with aviation, we need to ensure high-reliability AI infrastructure when AI is at the helm of logistics and food supply, power grids, self-driving cars and so on. We need redundancy, and I’m not confident this will be implemented, especially given the single-point-of-failure systems we still have commanding our telecommunications network in New Zealand. A human factors, safety-systems engineering approach is needed, and this will require large changes to computer science and innovation training.

Conclusions

The AI Forum New Zealand is to be commended for a detailed yet accessible report on the state of play of AI in New Zealand. These are exciting times. Overall the urgency with which this report insists we must act is absolutely correct.

The Recommendations section begins, ‘Overall, the AI Forum’s aim is for New Zealand to foster an environment where AI delivers inclusive benefits for the entire country’. This must be the case. We just need to work hard to make it happen. The best way to ensure inclusive benefits is to settle on a value framework, which will enable us to unpack the elusive ‘ethical approach’. By running each possibility through our values we can decide quite simply whether to encourage or condemn the application.

Like any tool, AI can be used for good or for bad, and no doubt most applications will simply be ugly. The report claims that some of the important potential harms, for example criminal manipulation of society, are as yet ‘unquantified’. Well, it is not only criminals that seek to manipulate society, and to be honest, I’m not one for waiting around until harmful activity is quantified.

We need to decide what is OK and what is not, anticipating what might be coming. As the report indicates, this will require ethical and legal thinking, but also sociological, philosophical, and psychological thinking. I would argue that a substantial portion of the Government’s Strategic Science Investment Fund be dedicated to facilitating these critical allied discussions and outputs.

Most of all we need to design for democracy and build an antifragile New Zealand. As a society we must indeed work to shape the future. What values are we willing to fight for, and what are we willing to sell out?

Can Siri help you quit smoking?


So you want to quit smoking. But you want to do it right, with expert advice and evidence-based information. Should you ask Siri?

This week my co-author Nick Wilson and I published results of a pilot study reporting how effective personal digital assistants are at providing information or advice to help you quit smoking.

As far as we are aware our study is the first study looking at whether Siri or Google Assistant can help you quit.

The internet is widely used for obtaining health-related information and advice. For example, in the United Kingdom, 41% of internet users report going online to find information for health-related issues, with about half of these (22% of all users) having done so in the previous week.

We compared voice-activated internet searches by smartphone (two digital assistants) with laptop ones for information and advice related to smoking cessation.

We asked Siri and Google Assistant three sets of questions. We entered the same questions into Google as an internet search on laptops.

The first set of questions was adapted from the ‘frequently asked questions’ on the UK National Health Service (NHS) smokefree website.

The next set of questions related to short videos on smoking-related disease produced by the Centers for Disease Control and Prevention (CDC) in the USA.

We devised the final set of questions to test responses to a range of features, such as finding smoking-related pictures, diagrams and instructional videos, and navigating to the nearest service or retailer for quitting-related products.

We graded the quality of the information and advice using a three-tier system (A, B, C), where A represented health agencies with significant medical expertise, B sites with some expertise (e.g. Wikipedia), and C news items or magazine-style content.

Google laptop internet searches gave the best quality smoking cessation advice 83% of the time, with Google Assistant on 76% and Siri 28% (equal firsts were possible).

The best search result on any device came from an expert (grade ‘A’) source 59% of the time. All three methods together failed to find relevant information 8% of the time, with Siri alone failing 53% of the time.

We found that Siri was poor when videos were requested according to the content they might contain, that all three tools sometimes returned magazine or blog content instead of professional health advice, and that all tools had trouble when gay and lesbian-specific information was requested.

A weakness of our small pilot study was that we only considered the first result returned in each search.

Overall, while expert content was returned over half the time, there is clearly room for improvement in how these software systems deliver smoking cessation advice. We would encourage software firms to work with professional health organisations on ways to enhance the quality of smoking cessation advice returned.

See Adapt Research Ltd’s related blog: ‘To vape or not to vape… is not the right question’

Health inequalities in NZ


Everyone knows that socio-economic inequalities in health exist – at least in recent times. But one thing we do not know is whether they have always been there. Adapt Research Ltd contributed to a just-published study that looks at two historical datasets, one of which suggests life span differences by occupational class as measured 100 years ago.

The study found strong differences in life expectancy by occupational class among men who enlisted to fight in the First World War (but who did not actually reach the frontline). Whilst not definitive evidence (it is hard to get perfect evidence from 100 years ago!), it does suggest that socio-economic inequalities in mortality have existed for at least 100 years in NZ.

In this blog we also take the opportunity to discuss what might be done to address the current inequality problem in this country, this is especially relevant given the Tax Review currently underway… Click here to read the full blog (hosted externally).

Reducing Harm from Falls: What have we learned in the last 12 months?


Falls are a major cause of injury and reduced quality of life for older people.

Estimates suggest that more than a quarter of people over 65 years of age fall in any given year. Broken hips and head injuries are among the most serious complications of a fall.

This is why the New Zealand Health Quality and Safety Commission has spent several years encouraging the health sector in New Zealand to implement programmes that reduce harm from falls.

A set of evidence-based guidance, the ‘Ten Topics’, is available on the Commission’s website.

However, every day more than 4 new research papers about falls are published. The Safety Lit database contains over 1500 items for ‘Falls’ in 2017 alone.

Every week there are new systematic reviews, meta-analyses, guidelines or health technology assessments.

So what does all this new evidence tell us?

We know that in New Zealand aged residential care facilities, 13% of patients have had a fall in the previous 30 days.

We also already know that bisphosphonates are an important medication to fight osteoporosis and prevent fragility fractures in older adults. But did you know that they are cost-effective at a fracture risk of just 1%? And yet in New Zealand there is scope to increase the rate of bisphosphonate prescribing for patients suffering fragility fractures.

Strength and balance exercise programmes can help prevent falls and new evidence suggests that Tai Chi is also effective.

As for medications, data previously appeared to show that antihypertensives increase the risk of falls. But some large new reviews indicate this may mostly be due to falls in the first 24 hours after a dose adjustment, or if the patient is taking diuretics.

Selective beta blockers may not increase the risk of falls, and treating hypertension to guideline levels is likely to be safe.

We are now pretty sure that prescribing vitamin D to otherwise healthy older people does not prevent falls or fractures.

However, sleep disturbances can increase the risk of falls.

Finally, home safety assessment and modification programmes and in-home strength and balance exercise programmes appear cost-effective in the New Zealand context.

The above is just a taste of the new evidence available to help in reducing harm from falls, and the Commission’s website indicates that their ‘Recommended Evidence-based resources’ webpage will be updated annually.

So don’t just take my word for it, examine the new evidence for yourself, and we can all look forward to the next comprehensive update.

For now, the Ten Topics are an excellent resource for anyone who is in the business of reducing harm from falls, and reducing harm from falls is everyone’s business!


AI Update: Have you seen my Putin Impersonation? It’s a blast.


Two recent articles in the media reminded me of a concern about AI.

Firstly, the BBC asks us whether we would care if a feature article was ‘written by a robot’. The implication is clear: it will soon be the norm for digital content to be created by intelligent systems.

Secondly, and more menacingly, another BBC report suggests that fake video and voice synthesis tools will soon be producing lifelike results. The actual report cited by the BBC story can be found here: The Malicious Use of AI Report.

Impersonating humans

It may not be long before digital tools can convincingly (read indistinguishably) produce fake video and speech, thereby impersonating specific human beings. This content could be augmented with mannerisms and linguistic style harvested from previous online posts, speeches, comments or video produced by the target.

What emerges is a simulacrum. Reality recedes and we can no longer tell what is real and what is not.

The problem with all this is that very soon someone, or some system, with advanced voice, video, and linguistic style tools at their disposal will be able to convincingly impersonate human beings. To all intents and purposes they will be able to hack reality.

Impersonation does not mean that the system doing the impersonating will need to interact with people or pass a Turing test. All that is required is that the content produced is convincing.

It is not completely in the realm of fantasy to picture ‘Vladimir Putin’ giving the order to use nuclear weapons, and that order being either acted upon or retaliated against. Many other undesirable situations are possible, both mundane and terrifying.

A host of companies are already working very hard, and with much success, to create impersonations of their own staff for the purpose of interacting with customers.

Currently many states have laws forbidding the impersonation of a person by another person, but do our laws adequately forbid the impersonation of a human being by a digital system? And even if it is forbidden, how will we enforce this?

Trust: the next big human problem

Humans over time have struggled with some key societal problems, and found solutions. These include:

  • the problem of coordination and cooperation, solved by language and written symbols
  • the problem of exchange, solved by money
  • the problem of information dissemination, solved by the printing press
  • the problem of scale, solved by the industrial revolution

We now must solve The Problem of Trust and Authenticity.

This has been demonstrated by the debacle around Fake News, where Facebook and other digital media companies are scrambling to implement change.

We need to solve the issue of trust, at the levels of moral norms, digital systems, societal systems, and laws.

Technologies like blockchain are small steps in the right direction, but the problem persists and other solutions and constraints are needed.

We need to have a conversation as a society about what kind of future we want to live in, and what limits, laws and norms we want to impose on emerging technologies, particularly artificial intelligence, which by its very nature mimics us.

This conversation needs to begin now and it needs to involve the technology sector, a fully informed general public, and the government.

Click here to listen to a talk by Adapt Research Ltd’s Matt Boyd on AI and the media given at the NZ Philosophy Conference in 2017. 

From Russia with Love: Time for serious work on the benefits and risks of artificial intelligence

The following introduction is an excerpt from a just published blog at Sciblogs that examines the role of AI in society and democracy. Read the full article here

Transformative advances in artificial intelligence (AI) have generated much hype and a burst of dialogue in New Zealand. Past technological change has led to adaptation, but adaptation takes time and the pace with which AI is arriving appears to be accelerating. For example, recent news about the unfolding ‘Russia Investigation’ may be just a prelude to what is possible if AI tools hijack our social systems.

Technology offers us opportunities to do things we previously could not, but in doing so the use of technology also changes us, and it changes the systems and norms of society.

Read more…

Artificial Intelligence, Freedom and Democracy: Talk at NZ Association of Philosophers Conference Dec 4, 2017

“Artificial Intelligence and Free Will: A 2017 Christmas Carol”

(Talk presented by Matt Boyd at the NZAP Conference)


Technological innovation builds our minds; does it build society too?



As we build our world we build our minds: reboot

Six years ago, while I was writing my PhD on technology and human nature, I wrote a blog where I argued that:

  1. Context builds us – Our social and technological environments can hinder, but they can also drive psychological development.
  2. Technology drives human development – Physical and digital tools shape us and build our intelligence; small doses of technology cause transient changes and long-term exposure has lasting effects.
  3. Our minds depend on technology – Much of what is unique and modern about human minds depends on technology for its development.
  4. The technology we build, in turn builds us – Basically, as we invent stuff, we change the context of development for the next generation, and they grow up with different thinking and affordances.
  5. Hence, technological innovation causes and sustains psychological evolution.

This reasoning was mostly based on the interplay of relatively static technological tools and human minds. However, we now see a range of dynamic and powerful technological tools emerging. This has important implications for human nature and the nature of society.

Building our minds

The symbols, media, tools, and methods that we invent shape and extend our minds. This is our brain’s ancient trick and amplifies what we are capable of achieving. We frequently make use of external supports to extend our brain’s capabilities (think words, lists, numbers, the abacus, abstract diagrams, calculators, number lines, and a range of other tools).

On the other hand, without technology we are mentally crippled. Without our abacuses and iPhones we are like Alzheimer’s patients without their post-it notes and pill schedules. As well as appearing rapidly, a lot of recent human evolutionary psychological advances could disappear overnight should the technological context sustaining them change.

These changes can be slow or fast. Geological processes are usually slow and ancient, but earthquakes show us that they can be sudden and dramatic. In analogous ways, small changes to technology can change the cognition of a population slowly over time, but significant innovations, such as mathematical symbols or the internet, can have sudden unexpected effects. With new technologies come new implications. Cue artificial intelligence…

The reason we see these effects is because the brain is a highly malleable organ. Stimulated at the right stage of development it can be made to do, or not do, almost anything.

But the more we offload responsibility and dynamic cognitive processes to intelligences other than our own, the more we risk becoming automated masters of our own creations. Instead of technology augmenting our intelligence, we risk merely obeying algorithms. If thought is offloaded to digital supports and never re-internalized, the cognitive loop is broken and instead we divest cognition, and therefore power and control.

A mother’s diet has a critical effect on the future health of the unborn. In similar fashion, a child’s technological diet shapes their thought processes. Given our ability to build a range of different technological environments for our children, it is likely that our innovation wittingly or unwittingly causes an array of emotional and psychological traits. Ever since technology was invented, as we build our world we have been building our minds.

Building society’s future

That was my conclusion in 2011, but there is an important extension of the argument to society and democracy, as noted in a Scientific American article this year.

Society and social structures are just as malleable as human minds. The technological environment of a society produces a set of affordances, and with affordances come possible actions, institutions and norms. The technological environment, coupled with our tendency to offload dynamic processes (once the domain of thought) to digital systems, means that technological uptake is wittingly or unwittingly building the nature of our society.

The old dogma was that technology does not determine people, because it is how we use technology that is important. However, in the new world of dynamic and autonomous technologies that dogma must be called into question.

The moral of the story is that we must reflect carefully on the possible consequences of introducing even benign seeming technologies, and uphold a principle of precaution and willingness to respond with rules and norms should things not turn out as we expected.

At Adapt Research Ltd we are very interested in the social, philosophical, psychological and ethical aspects of technology and innovation. Contact us here to continue the discussion.

PHARMAC’s ‘Factors for Consideration’, Justice, and Health Need


When deciding what medications to publicly fund PHARMAC uses multiple decision criteria, one of which is ‘health need’. So how can we establish who needs what in healthcare?

Distributive Justice

One approach is to take the perspective of justice. What factors do we need to consider to ensure a just distribution of resources? John Rawls provides an answer to this question by inviting us to consider what kind of society we would want, but we must consider it from an original position, behind a ‘veil of ignorance’ where we do not know who we will be in this society, or what our circumstances will be like.

Rawls thinks we would come to two conclusions.

Firstly, we would want there to be rights. In the case of healthcare everyone would have a right to healthcare because no one knows from the original position whether they will be sick or healthy.

Secondly, the only inequality in healthcare that ought to be pursued is inequality that also raises the health of those who are worst off. An example of this might be colorectal cancer screening programs, which are shown to widen health inequalities, while making the worst off better off. Overall, the aim of resource distribution should be to maximize the health of those worst off. This is deduced logically, because from the original position we ask ourselves, ‘what if we were the worst off?’

Impact of Justice on Population Health

This means that, logically, a minimum level of health will emerge. This occurs because all health resources will be distributed in the first instance to those least well off, to raise their quality of life to the degree currently possible with existing treatments.

Resources will also be justly given to those better off, if the process raises the level of health of those least well off. For example, the colorectal screening program identified above, or perhaps other health resources that improve the health of those already well off so that they can better care for those less well off.

Once those least well off have been allocated benefits to raise them to the level of the next least well off, or once they have been allocated all existing reasonable treatments, then we move allocation to the next least well off, and so on.
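
To make this leveling-up logic concrete, here is a minimal sketch (in Python, with invented health scores, a notional budget, and the simplifying assumption that one unit of budget buys one unit of health gain per person). It illustrates the iterative maximin idea only; it is not PHARMAC’s actual decision process.

```python
# A minimal sketch of Rawlsian 'leveling-up' allocation.
# Health scores, budget and costs are invented purely for illustration.

def allocate(health_scores, budget, cost_per_unit_gain=1.0):
    """Spend the budget raising the worst off to the level of the
    next worst off, then repeat, until budget or gaps run out."""
    scores = sorted(health_scores)  # ascending: worst off first
    while budget > 0:
        worst = scores[0]
        higher = [s for s in scores if s > worst]
        if not higher:
            break  # everyone is already at the same level
        target = min(higher)                    # next level to reach
        group_size = scores.count(worst)        # how many are worst off
        cost = (target - worst) * group_size * cost_per_unit_gain
        if cost <= budget:
            budget -= cost
            scores = [target if s == worst else s for s in scores]
        else:
            # Partial raise for the worst-off group with what remains.
            gain = budget / (group_size * cost_per_unit_gain)
            scores = [s + gain if s == worst else s for s in scores]
            budget = 0
    return scores

# Example: five people with invented health scores out of 100.
print(allocate([40, 55, 70, 85, 90], budget=30))
```

Under these toy assumptions the budget first lifts the person on 40 up to 55, then splits what remains between the two people now on 55, leaving those already better off untouched.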

What might PHARMAC do?

So, how ought PHARMAC to interpret ‘health need’ from this viewpoint on distributive justice? I raise five issues:

  1. PHARMAC currently considers ‘government health priorities’ – this is fair enough, provided these priorities are: (a) looming big expense items (e.g. due to demographics or epidemics), (b) aimed at addressing unjust health inequality, or (c) targeting those individuals who are living below some minimum standard of health (this is the maximizing the minimum approach favored by a Rawlsian concept of justice).
  2. PHARMAC currently considers the ‘availability and suitability of existing treatments’ – this is also fair enough. The concept of a minimum standard of health ought to be important here. From the original position, we would all want to ensure that those who are very unhealthy are supported towards health if possible, whereas we would be less concerned about increasing health of those already in reasonable, though not perfect health (their health need is lower). There are usually diminishing returns by continuing to spend on those already nearer to full health but more importantly this does not help those worst off.
  3. PHARMAC considers the ‘health need of the person’. This should be important but only in the context of the population. This is a critical qualifier. The person only has a health need if they are below the mean or minimum standard of health for the population. If they are not then they don’t have as much need, but others who are below the standard do have need.
  4. A further point when considering need is that quality adjusted life years (QALYs), which are the unit of accounting used by PHARMAC to designate utility, are not sufficient measures of a worthwhile life. An example illustrates the point (a small worked example also follows this list). It might be very meaningful for a grandparent to stay alive until her great grandchild is born. This could be true even if it means living a year at low quality of life rather than six months at higher quality of life. The person may prefer the first situation even if it amounts to fewer QALYs. So again, context is critical.
  5. In pursuing the logically derived minimum standard of health (deduced from an impartial original position and the health budget) then there are two important needs: (1) cure for people suffering ill health, up to the level of the next worst off, iteratively. And (2) prevention, to stop people from dropping below the minimum standard. The concept of prevention is important, and it allows for allocating resources to those who are more well off currently, because it maximizes health resources available downstream to help those least well off. Preventive need is determined by the probability and time course of dropping below the level of the least well off. Curative need is determined by the probability of success of the treatment (reasonable chances) and the magnitude of the gain (up to the point of the next least well off).
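
To illustrate the QALY point in item 4, here is a tiny worked example (the durations and quality weights are invented purely for illustration) showing how the option a person prefers can nonetheless score fewer QALYs:

```python
# Hypothetical quality weights (0 = dead, 1 = full health), chosen only to
# illustrate the point; they are not drawn from any real valuation study.
option_a_years, option_a_quality = 1.0, 0.35   # a year at low quality of life
option_b_years, option_b_quality = 0.5, 0.80   # six months at higher quality

qalys_a = option_a_years * option_a_quality    # 0.35 QALYs
qalys_b = option_b_years * option_b_quality    # 0.40 QALYs

print(qalys_a, qalys_b)
# The grandparent who wants to see the great-grandchild born may prefer
# option A even though it yields fewer QALYs than option B.
```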

There’s a lot more to be said on distributive justice in health care as informed by a Rawlsian viewpoint. But these points are a good place to start discussion.
