Surveillance Capitalism: Ought we permit behavioural data to exist?

I want to draw attention to an interview in the Guardian I just read about surveillance capitalism and the allegedly illegitimate conquest of personal data by big digital firms.

The subject of the interview is Harvard Professor Shoshana Zuboff, author of a book due for release at the end of January 2019: The Age of Surveillance Capitalism.

I’ve previously cited one of Prof Zuboff’s earlier papers in an article about ‘Rapid developments in artificial intelligence: how might the New Zealand government respond’.

I found the Guardian interview so compelling I have already ordered her new book.

The book promises to provide a robust intellectual framework for deconstructing the power of the giant tech firms on the basis of illegitimate conquest of ‘digital natives’ (a very clever metaphor).

This argument could be the forceful rejection of rampant data harvesting that many who oppose the unbridled power of big tech have been seeking.

Here are some key points from the article and interview:

Surveillance capitalism:

  • Works by providing free services and enables the providers of those services to gather a phenomenal amount of information about the behaviour of users, frequently without their explicit consent.
  • Claims, with no regard for opposing views, that human experience is free raw material which the surveillance capitalist may translate into behavioural data.
  • Feeds such data into machine-intelligence-driven manufacturing processes and fabricates it into prediction products.
  • Trades these prediction products in a behavioural futures market.

Why is this a problem?

  • The initial appropriation of users’ behavioural data is arrogant. Data are viewed as a free resource, there for the taking.
  • The key digital technologies are opaque by design and sustain user ignorance.
  • The emergence of these tech giants occurred in a largely law-free context.
  • The combination of state surveillance and capitalist surveillance is separating citizens into two groups: the watchers and the watched.
  • This is important, because as Jamie Susskind notes in his excellent 2018 book Future Politics, the imbalance in political power has historically been mitigated by the strong being scrutinised publicly and the weak enjoying personal privacy. Upset that dynamic and power shifts.
  • Asymmetries of knowledge translate into asymmetries of power.
  • We may have some oversight of state surveillance, but we currently have almost no regulatory oversight of surveillance capitalism.

Finally, the key concept that leaped from this interview for me is the following:

  • “The idea of ‘data ownership’ is often championed as a solution. But what is the point of owning data that should not exist in the first place?”

Zuboff argues that what we have witnessed over the last two decades is a conquest by declaration. A unilateral decree that some entity may harvest and use a resource freely and without limit.

This is colonial imperialism at its most ruthless and it needs limits.

 

‘Healthy News’ labelling – a discussion at Ben Reid’s request

In an AI Forum NZ newsletter yesterday Forum Executive Director Ben Reid said the following:

“Last week, I was part of the delegation at the annual Partnership on AI (PAI) annual all-partners meeting in San Francisco…

…One of the highlights for me was an open and frank speaker panel featuring Kiwi Facebook executive Vaughan Smith which discussed the emerging effects of AI on media and democracy, including the ongoing fallout from Cambridge Analytica and US election manipulation scandals.  It was instructive to understand the challenges that social media giants face attempting to automate moderation of literally billions of posts per day.  Building algorithmic systems which can keep up with the huge diversity of cultures and languages across the world and effectively automate ethical value based decisions on a global scale is a data science and AI challenge in itself! (Discuss…)”

Interesting point, Ben, and since you invited us to ‘discuss’, I don’t mind if I do.

I’ve been doing some thinking on this lately, and for me the issue of information pollution is the #1 priority in the world today. Clean information underpins every decision we make, about every issue, and information pollution is destabilizing, harbours risk, and threatens the institutions of democracy.

I can’t speak to the technical issues around monitoring billions of posts (and I prefer the word ‘monitoring’ to ‘moderating’). But I can speak to the theory behind the problems.

One idea I’d like to see pursued is a ‘Healthy News’ content labelling system. It would target the items with the largest number of views/likes/shares, so it would be top-down: a post would not need to be labelled until it had risen to prominence (thereby reducing the scale of the task).

The labelling system needs to be:

  1. Extremely simple, yet articulate enough to convey the relevant information at a glance and further detail on hover
  2. Grounded in deep technical, proven theory about the dynamics of human information transmission

These dynamics (to greatly simplify the theory that I wrote my Masters and PhD on) boil down to:

  1. Features of the source of the information
  2. Features of the content of the information
  3. Features of the frequency of the information

Human psychology has evolved and then developed to attend to these features. This suite of theory (and all its nuances) explains why we believe what we do and why certain information is passed on and other information is not.

Different pieces of information (posts, news items, conspiracy theories, fashions, formulae, etc) ‘succeed’ because they have the right combination of these three factors.

All three factors can be traced and quantified in various ways. For example:

* There are programs that allow you to find the original source of a Twitter post, or the first mention of an exact phrase. The source can be categorized as a major thought hub, or a leading expert, or an isolated individual with links to groups that incite violence.

* There are applications that can deduce the emotional tone of the content of a piece of information (and many other content features like the truth of facts through automated real-time fact checking).

* There are algorithms for tallying shares/likes/retweets/views/downloads to establish frequency distributions.
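
To make this concrete, here is a rough Python sketch of what such feature extraction might look like. The Post structure, the toy word list, and the ‘known expert sources’ set are all invented stand-ins for the real tracing, sentiment-analysis and fact-checking tools mentioned above.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    source: str   # the account or domain the item originated from
    text: str
    shares: int   # a simple proxy for frequency of exposure

# Invented stand-ins for the real tools described above (source tracing,
# sentiment analysis / fact checking, and share tallying).
KNOWN_EXPERT_SOURCES = {"who.int", "stats.govt.nz"}
AGGRESSIVE_WORDS = {"destroy", "traitor", "enemy", "attack"}

def source_feature(post):
    return "expert" if post.source in KNOWN_EXPERT_SOURCES else "unverified"

def content_feature(post):
    words = [w.strip(".,!?") for w in post.text.lower().split()]
    return "aggressive" if sum(w in AGGRESSIVE_WORDS for w in words) >= 2 else "neutral"

def frequency_feature(posts):
    # Tally shares per item to build a simple frequency distribution.
    return Counter({p.post_id: p.shares for p in posts})
```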

Using these basic features and a host of other metadata we can categorize information according to its source, content and frequency.

We can then use a system of simple colour coded icons to convey that information.

A red robot icon, for example, combined with a red angry face icon might mean the source of the item is likely a Twitter bot, and the emotional tone of the content is aggressive or inciting.

A green tick icon combined with an orange antenna icon might mean this item passes fact-checking software, but comes from an infrequently liked source.
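
In the same spirit, here is a minimal sketch of how computed features might map onto colour-coded icon sets. The feature values, rules, and icon descriptions are all hypothetical.

```python
# Hypothetical mapping from (source, content) features to icon sets.
ICON_RULES = {
    ("bot", "aggressive"): ["red robot: likely bot source",
                            "red angry face: aggressive or inciting tone"],
    ("expert", "neutral"): ["green tick: passes fact checks",
                            "orange antenna: infrequently shared source"],
}

def label_item(source, content):
    # Fall back to a neutral icon when there is no rule for this combination.
    return ICON_RULES.get((source, content), ["grey question mark: insufficient data"])

print(label_item("bot", "aggressive"))
```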

These are just examples I’m dreaming up. What we need is a research programme that takes the vast literature on cultural evolution, human information transmission dynamics, and cognitive biases, and works out which of the variables that drive the theory can be extracted from network data, and what these can tell us about information and its dynamics.

We then need to implement this labelling, and provide education around information transmission dynamics in schools, on websites, everywhere, until people understand how and why this stuff flows the way it does. It will take time, but just as we learned to interpret the vast array of icons we come across daily, we can grasp this too, along with the implications particular icon sets carry.

Hovering over the icons would then provide further detail about the transmission dynamics and why a particular item has spread as it has.

This is a quick first pass at this ‘Healthy News’ labelling idea. But joint work between technical digital media experts and those who understand the science of cultural evolution and cognitive biases could start to deduce how and why this all happens and how we can warn users about dangerous, false, hysterical, manipulative, and malicious content.

I found the following article particularly interesting: https://medium.com/hci-design-at-uw/information-wars-a-window-into-the-alternative-media-ecosystem-a1347f32fd8f

In it the authors generate this figure from the harvested network data:

[Figure 1: network graph of Internet domains linked in tweets, from the article above]

The caption reads: “we generated a graph where nodes were Internet domains (extracted from URL links in the tweets). In this graph, nodes are sized by the overall number of tweets that linked to that domain and an edge exists between two nodes if the same Twitter account posted one tweet citing one domain and another tweet citing the other. After some trimming (removing domains such as social media sites and URL shorteners that are connected to everything), we ended up with the graph you see in Figure 1. We then used the graph to explore the media ecosystem through which the production of alternative narratives takes place.”
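
A minimal sketch of how such a domain co-citation graph might be constructed, assuming the networkx package and a list of (account, domain) pairs already extracted from URLs in tweets (the pairs below are invented):

```python
from itertools import combinations
from collections import Counter, defaultdict
import networkx as nx

# (twitter_account, internet_domain) pairs extracted from URLs in tweets.
citations = [
    ("user_a", "exampledomain1.com"),
    ("user_a", "exampledomain2.com"),
    ("user_b", "exampledomain2.com"),
    ("user_b", "exampledomain3.com"),
]

domain_tweet_counts = Counter(domain for _, domain in citations)

domains_by_account = defaultdict(set)
for account, domain in citations:
    domains_by_account[account].add(domain)

G = nx.Graph()
for domain, count in domain_tweet_counts.items():
    G.add_node(domain, size=count)  # node size = number of tweets citing the domain

# Edge between two domains if the same account tweeted links to both.
for domains in domains_by_account.values():
    for d1, d2 in combinations(sorted(domains), 2):
        G.add_edge(d1, d2)
```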

It is this kind of work, and the software that drives it, that needs to be combined into the tool for ‘Healthy News’ labelling, ideally together with a watermarking system.

The next step will be to fight off those who attempt to manipulate the new system, but that’s another story…

End of discussion.

Keeping our eye on the laser phish: Information pollution, risk, and global priorities

  • This post is on what I consider to be the most pressing problem in the world today.
  • I lay out the theory underpinning information pollution, the significance and trajectory of the problem, and propose solutions (see bullets at the end). 
  • I encourage you to persist in reading this post, so that we can all continue this important conversation. 

Introduction

Technological innovation and growth of a certain kind are good, but I want to explain why risk mitigation should be more of a priority for the world in 2018. Without appropriate risk mitigation, we could upend the cultural, economic and moral progress we have enjoyed over the last half-century and miss out on future benefits.

One particular risk looms large: we must urgently address the threat of information pollution and an ‘infopocalypse’. Cleaning up the information environment will require substantial resources, just as mitigating climate change does. The threat of an information catastrophe is more pressing than climate change and has the potential to subvert our response to climate change (and other catastrophic threats).

In a previous post, I responded to the AI Forum NZ’s 2018 research report (see ‘The Good the Bad and the Ugly’). I mentioned Eliezer Yudkowsky’s notion of an AI fire alarm. Yudkowsky was writing about artificial general intelligence; however, it is now apparent that even with our present, rudimentary digital technologies the risks are upon us. ‘A reality-distorting information apocalypse is not only plausible, but close at hand’ (Warzel 2018). The fire alarm is already ringing…

Technology is generally good

Technology has been, on average, very good for humanity. There is almost no doubt that people alive today have lives better than they otherwise would have because of technology. With few exceptions, perhaps including mustard gas, spam and phishing scams, arguably nuclear weapons, and other similar examples, technology has improved our lives.

We live longer healthier lives, are able to communicate with distant friends more easily, and travel to places or consume art we otherwise could not have, all because of technology.

Technological advance has a very good track record, and ought to be encouraged. Economic growth has in part driven this technological progress, and economic growth facilitates improvements in wellbeing by proxy, through technology.

Again, there are important exceptions, for example where there is growth of harmful industries that cause damage through externalities such as pollution, or through products that make lives worse, such as tobacco or certain uses of thalidomide.

The Twentieth Century, however, with its rapid growth, technological advance, relative peace, and moral progress, was probably the greatest period of advance in human wellbeing the world has experienced.

Responsible, sustainable growth is good

The key is to develop technology, whilst avoiding technologies that make lives worse, and to grow while avoiding threats to sustainability and harm to individuals.

Ideally the system should be stable, because the impacts of technology and growth compound and accumulate. If instability causes interruption to the processes, then future value is forgone, and the area under the future wellbeing curve is less than it otherwise would have been.

Economist Tyler Cowen explains this at length in his book Stubborn Attachments. Just as opening a superannuation account too late in life can forgo a substantial proportion of potential future wealth, delayed technological development and growth can forgo substantial wellbeing improvements for future people.

Imagine if the Dark Ages had lasted an extra 50 years: we would presently be without the internet, mobile phones, coronary angiography and affordable air travel.

To reiterate, stability of the system underpins the magnitude of future benefit. There are however a number of threats to the stability of the system. These include existential threats (which would eliminate the system) and catastrophic risks (which would set the system back, and so irrevocably forgo future value creation).

Risk mitigation is essential

The existential threats include (but are not limited to): nuclear war (if more than a few hundred warheads are detonated), asteroid strikes, runaway climate change (the hothouse earth scenario), systematic extermination by autonomous weapons run amok, an engineered multistrain bioweapon pandemic, a geoengineering experiment gone wrong, and assorted other significant threats.

The merely catastrophic risks include: climate change short of hothouse earth, war or terrorism short of a few hundred nuclear warhead detonations, massive civil unrest, pandemic influenza, system collapse due to digital terror or malfunction, and so on.

There is general consensus that the threat of catastrophic risk is growing, largely because of technological advance (greenhouse gases, CRISPR, warhead power-to-size improvements, dependence on just-in-time logistics…). Even a 0.1% risk per year, across ten catastrophic threats, makes it more likely than not that at least one of them occurs this century. We need to make sure the system we are growing is not only robust against risk, but antifragile, strengthening in response to less severe perturbations.

Currently it does not.
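
For a rough sense of how such small annual risks compound (taking the 0.1% figure above at face value and assuming ten independent threats), a quick back-of-the-envelope calculation:

```python
# Chance that at least one of ten independent threats, each with a 0.1%
# probability per year, occurs at some point over the next century.
p_annual, n_threats, years = 0.001, 10, 100
p_at_least_one = 1 - (1 - p_annual) ** (n_threats * years)
print(round(p_at_least_one, 2))  # ~0.63, roughly a two-in-three chance
```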

Although we want to direct many of our resources toward technological growth and development, we also need to invest a substantial portion in ensuring that we do not suffer major setbacks as a result of these foreseeable risks.

We need equal measures of excited innovation and cautious pragmatic patience. We need to get things right, because future value (and even the existence of a future) depends on it.

We must rationally focus and prioritize

There are a range of threats and risks to individuals and society. Large threats and risks can emerge at different times and grow at different rates. Our response to threats and risks needs to be prioritized by the imminence of the threat, and the magnitude of its effects.

It is a constant battle to ensure adequate housing, welfare, healthcare, and education. But these problems, though somewhat important (pressing and a little impactful), and deserving of a decent amount of focus, are relatively trivial compared with the large risks to society and wellbeing.

Climate change is moderately imminent (significant temperature rises over the next decades) and moderately impactful (it will cause severe disruption and loss of life, but it is unlikely to wipe us out). A major asteroid strike is not imminent (assuming we are tracking most of the massive near earth objects), but could be hugely impactful (causing human extinction).

The Infopocalypse

I argue here that the risks associated with emerging information technologies are seriously imminent, and moderately impactful. This means that we ought to deal with them as a higher priority and with at least as much effort as our (woefully inadequate) efforts to mitigate climate change.

To be clear, climate change absolutely must be dealt with in order to maximize future value, and the latest IPCC report is terrifying. If we do not address it with sufficiently radical measures then the ensuing drought, extreme weather, sea level rises, famine, migration, civil unrest, disease, and so on, will undermine the rate of technological development and growth, and we will forgo future value as a result. But the same argument applies to the issue of information pollution. First I will explain some background.

Human information sharing and cognitive bias

Humanity has shown great progress and innovation in storing and packaging information. Since the first cave artist scratched an image on the wall of a cave, we have continued to develop communication and information technologies with greater and greater power. Written symbols solved the problem of accounting in complex agricultural communities, the printing press enabled the dissemination of information; radio, television, the internet, and mobile phones have all provided useful and life enhancing tools.

Humans are a cultural species. This means that we share information and learn things from each other. We also evolve beneficial institutions. Our beliefs, habits and formal routines are selected and honed because they are successful. But the quirks of evolution mean that it is not only ideas and institutions that are good for humanity that arise. We have a tendency for SNAFUs.

We employ a range of different strategies for obtaining relevant and useful information. We can learn information ourselves through a trial and error process, or we can learn it from other people.

Generally, information passed from one generation to the next, parent to child (vertical transmission), is likely to be adaptive information that is useful for navigating the problems the world poses. This is because natural selection has instilled in parents a psychological interest in preparing their children to survive, and because these same parents, the holders of the information, have themselves survived.

Information that we glean from other sources such as our contemporaries (horizontal transmission) does not necessarily track relevant or real problems in the environment, nor necessarily provide us with useful ways to solve these problems. Think of the used car salesperson explaining to you that you really do need that all-leather interior. Think of Trump telling welfare beneficiaries that they’ll be better off without Medicare.

Furthermore, we cannot attend to all the information all the time, and we cannot verify all the information all the time. So we use evolved shortcuts: heuristics that have, throughout history and over evolutionary time, obtained for us the most useful information available. Simple psychological rules such as ‘copy talented and prestigious people’, or ‘do what everyone else is doing’, have generally served us well.

Until now…

There are many other nuances to this system of information transmission, such as the role of ‘oblique transmission’ (e.g. from teachers), the role of group selection for fitness rather than individual selection, and the role of the many other cognitive biases besides the prestige-biased copying and frequency-dependent copying just mentioned. There is also the appeal of the content of the information itself: does it seem plausible, does it fit with what is already believed, does it have a highly emotive aspect, is it simple to remember?

The key point is that the large-scale dynamics of information transmission depend on these micro processes of content, source, and frequency assessment (among other processes) at the level of the individual.
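
To make the link between these individual-level biases and the population-level dynamics concrete, here is a toy simulation of frequency-dependent (‘do what everyone else is doing’) copying. Every parameter is invented, and real models of cultural transmission are considerably richer, but it shows how conformist copying can lock in or squeeze out a belief regardless of whether it tracks reality.

```python
import random

def simulate_conformist_copying(pop_size=1000, initial_share=0.4,
                                conformity=0.7, generations=50, seed=1):
    """Each generation, every individual either copies the current majority
    belief (with probability `conformity`) or picks a belief at random."""
    random.seed(seed)
    share_believing = initial_share  # fraction holding belief X (true or not)
    for _ in range(generations):
        majority = share_believing > 0.5
        new_believers = 0
        for _ in range(pop_size):
            if random.random() < conformity:
                new_believers += majority               # copy the majority
            else:
                new_believers += random.random() < 0.5  # individual guess
        share_believing = new_believers / pop_size
    return share_believing

# Starting below 50%, conformist copying pushes the belief toward the margins;
# starting above 50%, it pushes the belief toward dominance, true or not.
print(simulate_conformist_copying(initial_share=0.4))
print(simulate_conformist_copying(initial_share=0.6))
```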

All three of these key features can easily be manipulated, at scale, and with personalization, by existing and emerging information technologies.

Our usually well-functioning cognitive shortcuts can be hacked. The advertising and propaganda industries have long understood this, but until now their methods were crude and blunt.

The necessity of true (environmentally tracking) information

An important feature of information transmission is that obtaining information imposes a cost on the individual. This cost can be significant due to the attention required, time and effort spent on trial and error, research, and so forth.

It is much cheaper to harvest information from others rather than obtain it yourself (think content producers vs content consumers). Individuals who merely harvest free information without aiding the production and verification of information are referred to in cultural and information evolution models as ‘freeriders’.

Freeriders do very well when environments are stable, and the information in the population tracks that environment, meaning that the information floating around is useful for succeeding in the environment.

However, when environments change, then strategies need to change. Existing information biases and learning strategies, favoured by evolution because, on average, they obtain good quality information, may no longer track the relevant features of the environment. These existing cognitive tools may no longer get us, on average, good information.

Continuing to use existing methods to obtain or verify information when the game has changed can lead individuals and groups to poor outcomes.

We are seeing this in the world today.

The environment for humanity has been changing rapidly and we now inhabit a world of social media platform communication, connectivity, and techniques for content production, which we are not used to as a species. Our cognitive biases, which guide us to trust particular kinds of information, are not always well suited to this new environment, and our education systems are not imbuing our children with the right tools for understanding this novel system.

As such, the information we share is no longer tracking the problem space that it is meant to help us solve.

This is particularly problematic where ‘consume only (and do not verify)’ freeriders are rife, because then those that create content have disproportionate power. Those who create content with malice (defectors) have the upper hand.

The greater the gap between the content in the messages and the survival and wellbeing needs of the content consumers, the greater the risk of large scale harm and suffering across time.

If we don’t have true information, we die.

Maybe not today, maybe not tomorrow, but probabilistically and eventually.

Because, fundamentally, that is what brains are for: tracking true features of the environment and responding to them in adaptive ways. The whole setup collapses if the input information is systematically false or deceptive and our evolved cognitive biases persist in believing it. The potential problem is immense. (For a fuller discussion see Chapter 2 of my Masters thesis here.)

How information technology is a threat: laser phishing and reality apathy

Information has appeal due to its source, frequency or content. So how can current and emerging technological advances concoct a recipe for disaster?

We’ve already seen simple hacks and unsophisticated weaponizing of social media almost certainly influence global events for the worse, such as the US presidential election (Cambridge Analytica), the Brexit vote, the Rohingya genocide, suppression of women’s opinions in the Middle East, and many others. The Oxford Computational Propaganda Project catalogues these.

These simple hacks involve the use of human trolls, and some intelligent systems for information creation, testing and distribution. Common techniques involve bot armies to convey the illusion of frequency, thousands of versions of advertisements with reaction tracking to fine tune content, and the spread of fake news by ‘prestigious’ individuals and organizations. All these methods can be used to manipulate the way information flows through a population.

But this is just the tip of the iceberg.

In the above cases a careful user can remain skeptical of much of the information presented, by comparing the messages received with reality at large (though this takes effort). However, we are rapidly entering an era in which reality at large will be manipulated.

Technology presently exists that can produce authentic sounding human audio, manipulate video seamlessly, remove or add individuals to images and video, create authentic looking video of individuals apparently saying things they did not say, and a wide range of other malicious manipulations.

A mainstay of these techniques is the generative adversarial network (GAN), a machine learning setup in which one network generates candidate content while a second network tries to distinguish it from real examples; trained against each other, the generator ends up producing content that is effectively indistinguishable from the training data yet never existed in it. Insofar as we believe video, audio and images document reality, GANs are starting to create reality.
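
A schematic sketch of the adversarial training loop just described, assuming PyTorch, with trivial one-dimensional ‘data’ standing in for images or audio:

```python
import torch
import torch.nn as nn

# Toy "real" data: samples from a distribution the generator must learn to mimic.
def real_batch(n=64):
    return torch.randn(n, 1) * 0.5 + 2.0

# Generator maps random noise to candidate samples; discriminator scores realness.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1. Train the discriminator to tell real samples from generated ones.
    real, fake = real_batch(), G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator to produce samples the discriminator accepts as real.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```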

Targeting this new ‘reality’ in the right ways can exploit the psychological biases we depend on to attend to what ought to be the most relevant and important information amid a sea of content.

We are all used to being phished these days: an attempt to deceive us into acting in the interests of a hostile entity, typically via an email or social media message. Phishing is often an attempt to obtain personal information, but it can also be an effort to convince us to purchase products that are not in our interests.

The ‘Nigerian scams’ were some of the first such phishes, but techniques have advanced well beyond ‘Dear esteemed Sir or Madam…’

Convincing us of an alternate reality is the ultimate phish.

Laser phishing is the situation where the target is phished but the phish appears to be an authentic communication from a trusted source. Perhaps your ‘best friend’ messages you on social media, or your ‘boss’ instructs you to do something.

The message reads exactly like the genuine article, with the tone, colloquialisms and typical misspellings you are used to the individual making. This is because machine learning techniques have profiled the ‘source’ of the phish and present an authentic-seeming message. If this technique becomes advanced, scaled endlessly through automation, and frequently deployed, it will be necessary, but simply impossible, to verify the authenticity of every message and communication.

The mere existence of the technique (and others like it) will cast doubt on every piece of information you encounter every day.

But it gets worse.

Since GANs are starting to create authentic seeming video, we can imagine horror scenarios involving video of Putin or Trump or Kim Jong Un declaring war. I won’t dwell too much on these issues here, as I’ve previously posted on trust and authenticity, technology and society, freedom and AI, AI and human rights. Needless to say, things are getting much worse.

Part of the problem lies in the incentives that platform companies have for sustaining user engagement. We know that fake news spreads more widely than truth online. This tends to lead to promotion of sensationalist content and leaves the door wide open for malicious agents to leverage an attentive and psychologically profiled audience. The really big threat is when intelligent targeting (the automated laser phishing above) is combined with dangerous fake content.

These techniques (and many others) have not been widely deployed yet, and by ‘widely’ I mean that most of the digital content we encounter is not yet manipulated. But the productive potential of digital methods, and the depth of insight about targets gleaned from shared user data, are not bound by human limits.

We depend on the internet and digital content for almost everything we do or believe. Very soon more content will be fake than real. That is not an exaggeration. I’ll say it again, very soon more content will be fake than real. What will we make of that? Without true information we cannot track the world, we cannot progress.

A real possibility is that we come to question everything, even the true and useful things, and become apathetic toward reality by default.

That is the infopocalypse.

Tracking problems in our environment

It is critical for our long-term survival, success and wellbeing, that the information we obtain tracks the true challenges in our environment. If there is a mismatch between what we think is true and what really is true, then we will suffer as individuals and a species (think climate denial vs actual rising temperatures).

If we believe that certain kinds of beliefs and institutions are in our best long-term interests when they are not then we are screwed.

Bad information could lead to maladaptation and potentially to extinction. This is especially true if the processes that are leading us to believe the maladaptive information are impervious to change. There are a number of reasons why this might be so. The processes might be leveraging our cognitive biases, or they may be sustained by powerful automated entities or they may quell our desire for change through apparent reward.

There are many imaginable scenarios in which almost all of the information we consume is no good for us; it is ‘non-fitness tracking’, yet we lap it up anyway.

We’re seeing this endemically in the US at the moment. The very individuals who stand to lose the most from Trump’s health reform are the most vocal supporters of his policies. The very individuals who stand to gain most from a progressive agenda are sending pipe bombs in the mail.

Civil disorder and outright conflict are only (excuse the pun) a stone’s throw away.

This is the result of all the dynamics I’ve outlined above. Hard won rights, social progress, and stability are being eroded, and that will mean we forgo future value because if the world taught us anything in the 20th Century it’s that…

… peace is profitable.

If we can’t shake ourselves out of the trajectory we are on, then the trajectory is ‘evolutionarily stable’ to use Dawkins’ term from 1976. And to quote, ‘an evolutionarily stable strategy that leads to extinction… leads to extinction’.

This is not hyperbole, because as noted above, hothouse earth is an extinction possibility, and nuclear war is an extinction possibility. If the rhetoric and emerging information manipulation techniques take us down one of these paths then that is our fate.

To reiterate, the threat of an infopocalypse is more pressing and more imminent than the threat of climate change, and we must address it immediately, with substantial resource investment, otherwise malicious content creating defectors will win.

As we advance technologically, we need to show restraint and patience and mitigate risks. This means a little research, policy and action, taken thoughtfully, rather than rushing to the precipice.

The battle against system perturbation and risk is an ongoing one, and many of the existing risks have not yet been satisfactorily mitigated. Nuclear war is a standout contender for the greatest as-yet-unmitigated threat (see my previous post on how we can keep nuclear weapons but eliminate the existential threat).

Ultimately, a measured approach will result in the greatest area under the future value curve.

So what should we do?

I feel like all the existing theory that I have outlined and linked to above is even more relevant today than when it was first published. I also feel like there are not enough people with broad generalist knowledge in these domains to see the big picture here. The threats are imminent, they are significant, and yet, with few exceptions, they remain unseen.

We have the evolution theory, information dynamic theory, cognitive bias theory, and machine learning theory to understand and address these issues right now. But that fight needs resourcing and it needs to be communicated so a wider population understands the risks.

Solutions to this crisis, just like solutions to climate change will be multifaceted.

  • In the first instance we need more awareness that there is a problem. This will involve informing the public, technical debate, writing up horizon scans, and teaching in schools.
  • Children need to grow up with information literacy. I don’t mean just how to interpret a media text, or how to create digital content. I mean they need to learn how to distinguish real from fake, and how information spreads due to the system of psychological heuristics, network structure, frequency and source biases, and the content appeal of certain kinds of information. These are critical skills in a complex information environment and we have not yet evolved defenses against the current threats.
  • We need to harness metadata and network patterns to automatically watermark content and develop a ‘healthy content’ labelling system akin to healthy food labels, to inform consumers of how and why pieces of information have spread. We need to teach this labelling system widely. We need to fix fake news. (I’ve recently submitted a proposal to a philanthropically funded competition for research funding to contribute to exactly that project. And I have more ideas if others out there can help fund the research)
  • We need mechanisms, such as blockchain identity verification to subvert laser phishing.
  • We need to outlaw the impersonation of humans in text, image, audio or video.
  • We need to be vigilant to the technology of deception.
  • We need to consider the sources of our information and fight back with facts.
  • We need to reject the information polluting politics of populism.
  • We need to invest in cryptographic verification of images and audio (a minimal sketch of the signing-and-verification pattern follows this list).
  • We need to respect human rights whenever we deploy digital content.
  • We also need a local NZ summit on information pollution and reality apathy.
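
As a minimal sketch of the signing-and-verification pattern behind the cryptographic proposals above (assuming the third-party cryptography package; a real provenance scheme would also need key distribution, timestamping and revocation):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A publisher signs each piece of content with a private key...
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"raw bytes of an image, audio clip, or article"
signature = private_key.sign(content)

# ...and any consumer holding the publisher's public key can check that the
# content has not been altered since it was signed.
try:
    public_key.verify(signature, content)
    print("content verified: matches what the publisher signed")
except InvalidSignature:
    print("content has been tampered with or was not signed by this publisher")
```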

Summary

More needs to be done to ensure that activity at a local and global level is targeted rationally towards the most important issues and the most destabilizing risks. This means a rational calculus of the likely impact of various threats to society and the resources required for mitigation.

Looking at the kinds of issues that today ‘make the front page’ shows that this is clearly not happening at present (‘the Koru Club was full’ – I mean seriously!). And ironically the reasons for this are the very dynamics and processes of information appeal, dissemination and uptake that I’ve outlined above.

A significant amount is known about cultural informational ‘microevolutionary’ processes (both psychological and network-mediated) and it’s time we put this theory to work to solve our looming infopocalypse.

I am more than happy to speak pro bono or as a guest lecturer on these issues of catastrophic risk, the threat of digital content, or information evolution and cognitive biases.

If any organizations, think tanks, policy centers, or businesses wish to know more then please get in touch.

I am working on academic papers about digital content threat, catastrophic risk mitigation, and so on. However, the information threat is emerging faster than it can be published on.

Please fill out my contact form to get in touch.

 

Selected Further Reading:

Author’s related shorter blogs:

The problem of Trust and Authenticity

Technology and society

AI and human rights

AI Freedom and Democracy

Accessible journalism:

Helbing (2017) Will democracy survive big data and artificial intelligence

Warzel (2018) Fake news and an information apocalypse

Academic research:

Mesoudi (2017) Prospects for a science of cultural evolution

Creanza (2017) How culture evolves and why it matters

Acerbi (2016) A cultural evolution approach to digital media

Author’s article on AI Policy (2017) Rapid developments in AI

Author’s article on memetics (2008) The case for memes

Author’s MA thesis (2008) on Human Culture and Cognition

AI content targeting may violate human rights

Does AI driven micro targeting of digital content violate human rights? The UN says ‘yes!’

Last month the United Nations published a document on AI and human rights with a particular focus on automated content distribution. The report focuses on the rights to freedom of opinion and expression, which are often excluded from public and political debates on artificial intelligence.

The overall argument is that an ethical approach to AI development, particularly in the area of content distribution, is not a replacement for respecting human rights.

Automation can be a positive thing, especially in cases where it can remove human operator bias. However, automation can be negative if it impedes the transparency and scrutability of a process.

AI dissemination of digital content

The report outlines the ways in which content platforms moderate and target content and how opaque AI systems could interfere with individual autonomy and agency.

Artificial intelligence is proving problematic in the way it is deployed to assess content and prioritize which content is shown to which users.

“Artificial intelligence evaluation of data may identify correlations but not necessarily causation, which may lead to biased and faulty outcomes that are difficult to scrutinize.”

Without ongoing supervision, AI systems may “identify patterns and develop conclusions unforeseen by the humans who programmed or tasked them.”

Browsing histories, user demographics, semantic and sentiment analyses and numerous other factors, are used to determine which content is presented to whom. Paid content often supplants unpaid content. The rationale behind these decisions is often opaque to users and often to platforms too.

Additionally, AI applications supporting digital searches massively influence the dissemination of knowledge and this personalization can minimize exposure to diverse views. Biases are reinforced and inflammatory content or disinformation is promoted as the system measures success by online engagement. The area of AI and human values alignment is begging for critical research and is discussed in depth by AI safety researcher Paul Christiano elsewhere.

From the point of view of human autonomy these systems can interfere with individual agency to seek and share ideas and opinions across ideological, political or societal divisions, undermining individual choice to find certain kinds of information.

This is especially so because algorithms typically will deprioritize content with lower levels of engagement (e.g. minority content). Also, the systems are often hijacked via bots, metadata hacks, and possibly by adversarial content.

Not only is much content obscured from many users, but otherwise well-functioning AI systems can be tripped up by small manipulations to the input. Without a ‘second look’ at the context (as our hierarchically structured human brain does when something seems amiss) AI can be fooled by ‘adversarial content’.

For example, in the images below the AI identifies the left picture as ‘fox’ and the slightly altered right picture as ‘puffer fish’. An equally striking example is the elephant-vs-sofa error, which is clearly due to a shift in context.

[Figure: an image classified as ‘fox’ and a slightly perturbed copy misclassified as ‘puffer fish’]
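
A minimal sketch of the kind of input manipulation involved, using the fast gradient sign method and assuming PyTorch; the classifier itself is a placeholder for whatever pretrained model is being attacked:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Nudge `image` (shape: 1 x channels x height x width) in the direction
    that increases the classifier's loss; often enough to flip the predicted
    label even though the change is imperceptible to a human viewer."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).detach()
```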

Cultural context is particularly good at tripping up AI systems, and can result in content being removed on the basis of biased or discriminatory judgements. For example, the DeepText AI identified “Mexican” as a slur because of the contexts in which the word is often used. Such content removal is another way that AI can interfere with user autonomy.

There is an argument that individuals should be exposed to parity and diversity in political messaging, but micro targeting of content is creating a “curated worldview inhospitable to pluralistic political discourse.”

Overall, AI targeting of content incentivizes broad collection of personal data and increases the risk of manipulation through disinformation. Targeting can exclude whole classes of users from information or opportunities.

So what should we be doing about all this? The UN report offers a vision for a human rights based approach to AI and content distribution.

A Human Rights Legal Framework for AI

The UN report outlines the scope of human rights obligations in the context of artificial intelligence and concludes that:

“AI must be deployed so as to be consistent with the obligations of States and the responsibilities of private actors under international human rights law. Human rights law imposes on States both negative obligations to refrain from implementing measures that interfere with the exercise of freedom of opinion and expression and positive obligations to promote rights to freedom of opinion and expression and to protect their exercise.”

What does this mean?

All people have the right to freedom of opinion without interference.

(this is guaranteed by article 19 (1) of the International Covenant on Civil and Political Rights and article 19 of the Universal Declaration of Human Rights)

Basically, undue coercion cannot be employed to manipulate an individual’s beliefs, ideologies, reactions and positions. We need to have a public discussion about the limits of coercion or inducement, and what might be considered interference with the right to form an opinion.

The reason that this is a novel issue is because AI curation of online content is now micro targeting information “at a scale beyond the reach of traditional media.” Our present norms (based on historical technologies) may not be up to the task of adjudicating on novel techniques.

The UN report argues that companies should:

“at the very least, provide meaningful information about how they develop and implement criteria for curating and personalizing content on their platforms, including policies and processes for detecting social, cultural or political biases in the design and development of relevant artificial intelligence systems.”

The right to freedom of expression may also be infringed by AI curation. We’ve seen how automated content takedown may run afoul of context idiosyncrasies. This can result in the systematic silencing of individuals or groups.

The UN Human Rights Committee has also found that States should “take appropriate action … to prevent undue media dominance or concentration by privately controlled media groups in monopolistic situations that may be harmful to a diversity of sources and views.”

Given these problems, more needs to be done to help users understand what they are presented with. There are some token gestures toward selectively identifying some sponsored content, but users need to be presented with relevant metadata, sources, and the alternatives to the content that they are algorithmically fed. Transparency, for example, about confidence measures, known failure scenarios and appropriate limitations on use would be of great use.

We all have a right to privacy, yet AI systems are presently used to infer private facts about us, which we may otherwise decline to disclose. Information such as sexual orientation, family relationships, religious views, health conditions or political affiliation can be inferred from network activity and even if not explicitly stated these inferences can be represented implicitly in neural nets and drive content algorithms.

These features of AI systems could violate the obligation of non-discrimination.

Finally, human rights law guarantees individuals whose rights are infringed, a remedy determined by competent judicial, administrative or legislative authorities. Remedies “must be known by and accessible to anyone who has had their rights violated,” but the logic behind an algorithmic decision may not be evident even to an expert trained in the underlying mechanics of the system.

Solutions & Standards

We need a set of substantive standards for AI systems. This must apply to companies and to States.

Companies need professional standards for AI engineers, which translate human rights responsibilities into guidance for technical design. Codes of ethics (such as those now adopted by most of the major AI companies) may be important but are not a substitute for recognition of human rights.

Human rights law is the correct framework within which we must judge the performance of AI content delivery systems.

Companies and governments need to embrace transparency; simple explanations of how these systems function will go a long way toward informing public discourse, education and debate on this issue.

The UN report also recommends processes for artificial intelligence systems that include: human rights impact assessments, audits, a respect for personal autonomy, notice and consent processes, and remedy for adverse impacts.

The report concludes with recommendations for States and for Companies, which includes the recommendation that, “companies should make all artificial intelligence code fully auditable.”

All this sounds very sensible and is a conservative approach to what could rapidly become an out of control problem of information pollution.

If anyone is interested in my further thoughts on “AI, Freedom and Democracy”, you can listen to my talk at the NZ Philosophy Conference 2017 here.

Nuclear insanity has never been worse

Donald Trump has just announced a likely build-up of US nuclear capability.

The threat of nuclear war has probably never been higher, and continues to grow. Given emotional human nature, cognitive irrationality and distributed authority to strike, we have merely been lucky to avoid nuclear war to date.

These new moves undoubtedly raise the threat of a human extinction event in the near future. The reasons why are explained in a compelling podcast by Daniel Ellsberg.

Ellsberg (the leaker of the Pentagon Papers, which set in train the events that ended the Nixon presidency) explains the key facts. Contemporary modelling shows the likelihood of a nuclear winter is high if more than a couple of hundred weapons are detonated. Previous Cold War modelling ignored the smoke from the resulting firestorms, and so vastly underestimated the risk.

On the other hand, detonation of a hundred or so warheads poses low or no risk of nuclear winter (merely catastrophic destruction). As such, and as nuclear strategist Ellsberg forcefully argues, the only strategically relevant nuclear weapons are those on submarines. This is because they cannot be targeted by pre-emptive strikes, and yet still (with n = 300 or so) provide the necessary deterrence.

Therefore, land-based ICBMs are of no strategic value whatsoever, and merely provide additional targets for additional weapons, thereby pushing the nuclear threat from the deterrence/massive destruction game into the human extinction game. This is totally unacceptable.

Importantly, Ellsberg further argues that the reason the US is so determined to continue to maintain and build nuclear weapons is the billions of dollars of business it generates for Lockheed Martin, Boeing, and others. We are escalating the risk of human extinction in exchange for economic growth.

John Bolton, Trump’s National Security Advisor, is corrupted by the nuclear lobbyists and stands to gain should capabilities be expanded.

There is no military justification for more than a hundred or so nuclear weapons (China’s nuclear policy reflects this – they are capable of building many thousands, but maintain only a fraction of this number). An arsenal of a hundred warheads is an arsenal that cannot destroy life on planet Earth. If these are on submarines they are difficult to target. Yet perversely we sustain thousands of weapons, at great risk to our own future.

The lobbying for large nuclear arsenals must stop. The political rhetoric that this is for our own safety and defence must stop. The drive for profit above all else must stop. Our children’s future depends on it.

Growing the AI talent pool: We need deep learning

The AI Forum NZ recently kicked-off six working groups to investigate a range of emerging issues around artificial intelligence and society.

Working group #5 has its focus on Growing the AI Talent Pool.

New Zealand is facing a shortage of technical workers capable of developing AI applications. In what follows I argue that ‘growing’ is the right metaphor to apply to responsibly solving this problem in the long term.

We will clearly need to increase the size of the available talent pool. That will be a multifactorial task that includes increasing the numbers of people choosing AI, data science and machine learning as a career; increasing the throughput of formal learning institutions; increasing the availability and uptake of on-the-job and mid career training; and increasing the supply of talent from outside of New Zealand.

However, having an ideal talent pool is not merely about the numbers; it is also about ensuring that the talent we grow is the right kind of talent, with the right traits and characteristics to best enable a prosperous, inclusive and thriving future New Zealand. This means developing skills that go beyond technical capability. It also means ensuring that non-technical specialists understand machine learning and the capabilities of AI in order to make optimal and ethical use of it.

Impacting society at scale

With any technology that affects society at scale (as AI can clearly do) we have an obligation to develop it responsibly. In the past, the industrial revolution was poorly managed, resulting in the exploitation of factory labour. Technological innovation in the Twentieth Century began the catastrophe of atmospheric pollution. More recently, we can note that:

“In the past, our society has allowed new technologies to diffuse widely without adequate ethical guidance or public regulation. As revelations about the misuse of social media proliferate, it has become apparent that the consequences of this neglect are anything but benign. If the private and public sectors can work together, each making its own contribution to an ethically aware system of regulation for AI, we have an opportunity to avoid past mistakes and build a better future.” – William A. Galston (Ezra K. Zilkha Chair in Governance Studies)

When the things we do and create impact society at scale there is a responsibility to get it right. That’s why we have professional certifications and a structured programme of non-technical knowledge and skill learning embedded in courses such as engineering, medicine, and law. Take for example the University of Auckland’s Medical Humanities programme and compare that to the course list for University of Otago’s computer science department, where only one paper mentions ethics as a designated component.

AI talent origin stories

Furthermore, machine learning practitioners and AI developers do not come from any one particular development pipeline. You do not need to specifically have a PhD in AI to fill these roles. AI practitioners can come from any mathematically rigorous training programme. Graduates in computer science, math, physics, finance, logic, engineering, and so on, often transition to AI and machine learning.

One glaring issue is that some of these generalist disciplines do not have a programme of social responsibility and professional ethics embedded in them (engineering may be an exception). Nor are there professional certification requirements for a lot of these skilled workers. This is in stark contrast to other professional disciplines such as accounting, law, nursing, teaching, medicine, and many others.

Social responsibility and professional ethics

To ensure responsible development of the developers, we either need to embed responsibility development in all the programmes that can lead to AI practice, or take it a step further back to high school, or, stepping vertically, ensure institutional codes and professional regulation. Probably all of these are required.

Society expects the developers of intelligence to respect public institutions, privacy, and people as autonomous agents among many other things. We do not want to be phished for phools for profit or to further an agenda. Just because something affecting society is possible does not mean it is automatically acceptable.

Just as medical writers sign up to a code of medical writing ethics to push back against and rein in the whims of Big Pharma (which employs most of them), we need to be able to have faith in the talent pool that will be developing the AI that affects us all.

The problem may not be so great when workers are employed by businesses that are ethical, socially responsible, and whose aims are aligned with those of societal flourishing. It can be argued that several of the big tech firms are moving in this direction. IBM, Google and Microsoft, for example, have published ethical and/or social codes for development of AI in 2018. But not all developers will migrate from their technical training into socially responsible firms.

IBM’s Everyday Ethics for AI report notes the following:  “Nearly 50% of the surveyed developers believe that the humans creating AI should be responsible for considering the ramifications of the technology. Not the bosses. Not the middle managers. The coders.” Mark Wilson, Fast Company on Stack Overflow’s Developer Survey Results 2018

Growing true AI talent through deep learning

Growing the talent pool is an appropriate metaphor. We do not just want a wider harvest of inadequate talent, nor do we merely want the planting of many more average seeds. We also need to choose the right educational soil and to add the right fertilizer of ideas, concepts and socially responsible skills.

Intervention is needed at three levels and across three time horizons. We need broad social, ethical, civics, and society education prior to the choice of a career specialization.

We need to cross-fertilize tertiary training in all disciplines that lead into AI practice with courses and dialogue on social responsibility, human flourishing, ethics, law, democracy and rights. And we need to ensure that professional engineering, AI and machine learning institutions mandate adherence to appropriate codes of conduct.

We need deep learning around all these issues from early on.

We need to begin now with current practitioners, we need to foster these ideals in those who have selected AI as a career, and we need to prepare the future generation of AI talent.

If the tech specialists don’t see the force and necessity of these points, then that in itself proves the truth of the argument.

Who is responsible?

Here I am with no background in AI or machine learning telling those who would make a career in these fields that they must study soft skills too. So why do all our voices count in this space?

We are talking about the applications of intelligence, and as intelligent beings we are all qualified to talk about how intelligence is distributed in society, how it is deployed and what functions it has.

When you go to a conference on nuclear physics everyone at the conference may be a nuclear physicist. But those that develop a technology are not automatically those who get to decide how we use it.

We see this when policy makers, ethicists and the public determine whether those nuclear physicists are permitted to detonate atomic weapons. We see this when the committees at the FDA determine whether medical technologies and pharmaceuticals will be licensed for use in society. AI and machine learning applications bear much in common with these other domains.

With great intelligence (and the development of great intelligence) comes great responsibility.

Importance and urgency

The reason all this is important is because digital technology now infuses every domain in society, and AI is rapidly becoming an integral part of Law, Medicine, Engineering, and every other professional discipline. We are going to need professionals who understand AI, but we are also going to need technical developers who understand the professional aspects.

There are tasks in society that are urgent and those that are important. There are interventions that will have a narrow impact and those that will have a wide ranging impact. In addressing those issues that are urgent and narrow (and therefore manageable) we cannot forget the issues that are ongoing and less well-defined, but highly impactful.

The most important thing moving forward is to ensure a just and cohesive society: one that supports democratic institutions, upholds social norms and rights, does not use exploitation or manipulation as key processes for generating profit, and in which technological innovation respects the evolution of institutions.

We must ensure that as a society we develop a pool of talented, socially aware, and responsible AI practitioners.

Efficient Cancer Care in New Zealand: Lessons from five years of Australian research literature

stethoscope

The cost of cancer care is rising, and a review of the Australian research literature on cancer care offers many lessons for New Zealand.

In Australia, real (inflation-adjusted) per-person costs for cancer have more than doubled in the last 25 years. The drivers are multifactorial, but include upward trends in diagnosis (often the result of new diagnostic methods and screening programmes), the rising cost of cancer pharmaceuticals, and increasing expectations.

The largest costs are treatment costs. Taking Australia as an example, hospital services, including day admitted patients (usually for chemotherapy), account for 79% of cancer costs. The number of approved cancer medicines has doubled since 2013.

Rising costs in health care are not sustainable. We need better efficiency.

Efficiency in health is about making choices that maximise the health outcomes gained from the resources allocated. There are a number of ways we could target the cancer care pathway to improve efficiency, but this can only work if the entire care path is looked at as a whole, and the notion of funding silos is dispensed with.
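To make the efficiency idea concrete, here is a minimal, hypothetical sketch of the kind of comparison health economists use: an incremental cost-effectiveness ratio (ICER) set against a willingness-to-pay threshold. All figures below are invented for illustration and are not drawn from the Australian or New Zealand literature.

```python
# A minimal sketch of the efficiency logic described above: comparing an
# intervention against usual care using an incremental cost-effectiveness
# ratio (ICER). All numbers are placeholders, not real data.

def icer(cost_new, cost_old, qalys_new, qalys_old):
    """Incremental cost per quality-adjusted life year (QALY) gained."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

# Hypothetical example: a new programme versus the status quo.
ratio = icer(cost_new=12_000_000, cost_old=9_000_000,
             qalys_new=1_450, qalys_old=1_300)

WILLINGNESS_TO_PAY = 45_000  # $ per QALY; threshold chosen for illustration only

print(f"ICER: ${ratio:,.0f} per QALY gained")
print("Fund it" if ratio <= WILLINGNESS_TO_PAY else "Look for better value elsewhere")
```

In practice the thresholds and estimates are contested, but this framing makes the trade-off explicit across the whole care path rather than within funding silos.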

Prevention

For example, a healthy lifestyle and regular screening could prevent an estimated one third to one half of all cancers, yet presently only single-figure percentages of cancer funding target prevention.

This is despite the modelled return on investment for cancer prevention programmes, which is often $3–$4 per $1 spent. As an added bonus, cancer prevention can also reduce the burden of other diseases (e.g. reducing inactivity can also benefit diabetes and heart disease).

Screening

Participation rates in screening programmes are generally poor; for many programmes, 40–60% is considered a good uptake. This is inadequate. Increasing participation is likely to increase the effectiveness of screening programmes, and modelling suggests that in some cases sufficient uptake can lead to future cost savings.

We should also do more to ensure that patients who are up-to-date with screening are not re-screened (e.g. those who have had recent colonoscopy) and ensure that follow-up after screening is based on guidelines. It often isn’t.

Diagnosis

Over-diagnosis is becoming a problem in the cancer care path. Breast screening often reveals anomalies that are not cancer. Artificial intelligence systems used to augment physician diagnosis could curb this.

Not only is there evidence from a 2015 systematic review that prostate cancer screening is not cost-effective, but PSA screening can lead to cancer diagnoses (and treatment) in men whose tumours will never cause them problems.

There has also been a rapid rise in thyroid cancer diagnoses, leading to an increase in thyroid surgeries (for example in Australia), but no corresponding change in deaths from thyroid cancer.

Reducing unnecessary detection and a conservative approach could lead to millions of dollars in savings and reduced harms to patients from over-diagnosis.

Treatment

The cost of treatment is also a problem. In Australia, cancer accounts for 6.6% of hospital costs, but the cost of cancer medication is one sixth of the total pharmaceutical budget. The 10-fold increase in cost of these medicines over 10 years is a serious threat to patients and health systems.

We could decrease the costs of cancer medications by modifying prescription habits, considering treatment costs in professional guidelines, disinvesting in medicines that have not proven cost-effective in the real world, improving patient selection, and increasing use of generics.

There is evidence of over-treatment. A watch-and-wait approach is appropriate for many early-phase prostate cancers, and active surveillance of low-risk patients could reduce costs while often being clinically reasonable.

We could consider pharmacist review of prescriptions to avoid the risk of adverse drug reactions (and the associated treatment costs). We could do more to ensure there are no unjustified variations in clinical practice.

Follow-up

We should ensure that patients have a written care plan and are not receiving follow-up from multiple overlapping providers. Also, follow-up should be guideline based. Some studies indicate that less than half of bowel cancer patients received guideline-based follow-up colonoscopy.

We could make more use of primary care for follow-up, given that studies have not shown hospital follow-up to be any more effective at detecting recurrent disease.

Traditional follow-up focuses on detecting cancer recurrence, but this can fail to adequately address many survivors’ concerns. Getting back to work (and being supported to do so) is important for reducing the societal costs of cancer. Occupational therapy may be important in facilitating this.

Palliation

Palliative care costs less than hospital care and is under-utilised. But to optimise the use of out-of-hospital palliative care, patients need to have accurate prognostic awareness, allowing them to make informed choices. This requires important conversations with treatment providers. Lack of a palliative care plan leads to unnecessary emergency room visits and hospital admissions that are primarily palliative.

Research

Research costs could also be streamlined. We should ensure that the cancer research being undertaken reflects the burden of cancer. Lung cancer has the greatest burden of all cancers (especially in terms of years of life lost), and yet there is far less lung cancer research than this burden demands.

Cancers including leukaemia, breast, ovarian, liver and skin cancer often receive proportionately more funding than their disease burden warrants. Prevention, cancer control, and survivorship research could be funded more, because effectiveness in these components of the care path leads to downstream cost savings and potentially increased social productivity.

Summary

Overall, it looks like prevention and early detection are generally underfunded. There is also scope to increase participation in screening programmes.

The rapidly rising costs of treatment, including medications, need to be curtailed through wise practice and new models of care that prioritise prevention, screening, surveillance, guideline- and evidence-based follow-up, return to work, and palliative care where appropriate.

Reducing the cost of medications is a high priority, with large potential cost savings. The focus should be on treatments that are proven to work well in the real world rather than on increasing use of expensive drugs with marginal benefit.

We need a long-term focus including a culture of change and workforce planning. Further efficiencies might be gained through initiatives such as: Choosing Wisely, addressing variations in process and treatment, minimising non-adherence to treatment, avoiding communication failures, ceasing ineffective interventions, coordinating care, reducing admissions, using generics, negotiating price, reducing adverse events, taking a societal perspective of costs, and considering upfront cost versus long-term impact.

Further Reading

Ananda, S., Kosmider, S., Tran, B., Field, K., Jones, I., Skinner, I., . . . Gibbs, P. (2016). The rapidly escalating cost of treating colorectal cancer in Australia. Asia-Pacific Journal of Clinical Oncology, 12(1), 33-40.

Chen, C. H., Kuo, S. C., & Tang, S. T. (2017). Current status of accurate prognostic awareness in advanced/terminally ill cancer patients: Systematic review and meta-regression analysis. Palliative Medicine, 31(5), 406-418.

Colombo, L. R. P., Aguiar, P. M., Lima, T. M., & Storpirtis, S. (2017). The effects of pharmacist interventions on adult outpatients with cancer: A systematic review. Journal of Clinical Pharmacy and Therapeutics, 42(4), 414-424.

Cronin, P., Kirkbride, B., Bang, A., Parkinson, B., Smith, D., & Haywood, P. (2017). Long-term health care costs for patients with prostate cancer: a population-wide longitudinal study in New South Wales, Australia. Asia-Pacific Journal of Clinical Oncology, 13(3), 160-171.

Doran, C. M., Ling, R., Byrnes, J., Crane, M., Shakeshaft, A. P., Searles, A., & Perez, D. (2016). Benefit cost analysis of three skin cancer public education mass-media campaigns implemented in New South Wales, Australia. PLoS ONE, 11(1).

Furuya-Kanamori, L., Sedrakyan, A., Onitilo, A. A., Bagheri, N., Glasziou, P., & Doi, S. A. R. (2018). Differentiated thyroid cancer: Millions spent with no tangible gain? Endocrine-Related Cancer, 25(1), 51-57.

Gordon, L. G., Tuffaha, H. W., James, R., Keller, A. T., Lowe, A., Scuffham, P. A., & Gardiner, R. A. (2018). Estimating the healthcare costs of treating prostate cancer in Australia: A Markov modelling analysis. Urologic Oncology: Seminars and Original Investigations, 36(3), 91.e97-91.e15.

Jefford, M., Rowland, J., Grunfeld, E., Richards, M., Maher, J., & Glaser, A. (2013). Implementing improved post-treatment care for cancer survivors in England, with reflections from Australia, Canada and the USA. British Journal of Cancer, 108(1), 14-20.

Langton, J. M., Blanch, B., Drew, A. K., Haas, M., Ingham, J. M., & Pearson, S.-A. (2014). Retrospective studies of end-of-life resource utilization and costs in cancer care using health administrative data: A systematic review. Palliative Medicine, 28(10), 1167-1196. doi:10.1177/0269216314533813

Lao, C., Brown, C., Rouse, P., Edlin, R., & Lawrenson, R. (2015). Economic evaluation of prostate cancer screening: A systematic review. Future Oncology, 11(3), 467-477.

Leggett, B. A., & Hewett, D. G. (2015). Colorectal cancer screening. Internal Medicine Journal, 45(1), 6-15.

Ma, C. K. K., Danta, M., Day, R., & Ma, D. D. F. (2018). Dealing with the spiralling price of medicines: issues and solutions. Internal Medicine Journal, 48(1), 16-24.

MacLeod, T. E., Harris, A. H., & Mahal, A. (2016). Stated and Revealed Preferences for Funding New High-Cost Cancer Drugs: A Critical Review of the Evidence from Patients, the Public and Payers. The Patient, 9(3), 201-222. doi:10.1007/s40271-015-0139-7.

Olver, I. (2018). Bowel cancer screening for women at midlife. Climacteric, 21(3), 243-248.

Shih, S. T., Carter, R., Heward, S., & Sinclair, C. (2017). Economic evaluation of future skin cancer prevention in Australia. Preventive Medicine, 99, 7-12.

Wait, S., Han, D., Muthu, V., Oliver, K., Chrostowski, S., Florindi, F., & de Lorenzo, F. (2017). Towards sustainable cancer care: Reducing inefficiencies, improving outcomes—A policy report from the All.Can initiative. Journal of Cancer Policy, 13, 47-64.

Youl, P., Baade, P., & Meng, X. (2012). Impact of prevention on future cancer incidence in Australia. Cancer Forum, 36(1).

Economic evidence for closing border in a severe pandemic: now we need the values discussion

Image result for bioweapons

From the University of Otago Press Release about our latest study:

Closing the border may make sense for New Zealand in some extreme pandemic situations, according to a newly published study of the costs and benefits of taking this step.

“In a severe pandemic, timely/effective border closure could save tens of thousands of New Zealand lives far outweighing the disruptions to the economy and the temporary end to tourism from international travellers,” says one of the authors Professor Nick Wilson from the University of Otago, Wellington.

“This finding is consistent with work that we published last year – except our new study used a more sophisticated model developed by the New Zealand Treasury for performing cost-benefit analyses,” says Professor Wilson.
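To illustrate the shape of such a cost-benefit comparison (and only the shape: this is not the Treasury model or the published analysis, and every figure below is a placeholder), a toy calculation might weigh the valued health gains of a closure against discounted economic losses.

```python
# A toy cost-benefit comparison in the spirit of the study described above.
# This is NOT the Treasury model or the published analysis; all numbers are
# invented purely to illustrate how benefits and costs might be weighed.

def net_present_value(cashflows, discount_rate):
    """Discount a list of annual costs back to today (year 0 undiscounted)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cashflows))

lives_saved = 30_000                  # hypothetical scenario
value_per_statistical_life = 4.5e6    # $ placeholder valuation
benefit = lives_saved * value_per_statistical_life

annual_economic_loss = [15e9, 8e9]    # $ placeholders: closure year, recovery year
cost = net_present_value(annual_economic_loss, discount_rate=0.03)

print(f"Benefit (lives saved, valued):  ${benefit / 1e9:,.1f}b")
print(f"Cost (discounted economic loss): ${cost / 1e9:,.1f}b")
print(f"Net benefit:                     ${(benefit - cost) / 1e9:,.1f}b")
```

A real analysis would, of course, use modelled epidemiological scenarios, uncertainty ranges and official discounting conventions rather than point estimates like these.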

The research has just been published in the Australian and New Zealand Journal of Public Health.

Another of the authors, Dr Matt Boyd, comments that with increasing risks of new pandemics due to the growing density of human populations and various socio-economic, environmental and ecological factors, there is a need to look at different scenarios for better pandemic planning.

Read the full press release here

The study is published here

What does taking an ethical approach to digital media and artificial intelligence mean?

Image result for digital ethics

Several recent reports have argued that we need to take ‘an ethical approach’ when designing and using digital technologies, including artificial intelligence.

Recent global events such as various Facebook, fake news, and Cambridge Analytica scandals appear to emphasize this need.

But what does taking an ethical approach entail? Here are a few ideas I’ve spent Sunday morning thinking about.

Most importantly, there is no one thing we can do to ensure that we design and use digital technology ethically. We need a multifaceted approach that ensures we act as we ought to moving forward.

There is an onus on governments, developers, content creators, users, educators and society generally. We need to ensure that we act ethically, and also that we can spot unethical actors. This requires a degree of ethical literacy, information and media literacy, and structuring our world in such a way that the right thing is easier to do.

This will involve a mix of education, regulation, praise and condemnation, and civility.

Truth and deception

Generally we accept that true information is good and false information is bad. We live our lives under the assumption that most of what other people say to us is true. If most of what people say to each other were not true then communication would be useless. The underlying issue is one of trust. A society that lacks trust is dysfunctional. In many cases the intent behind falsehoods is to deceive or manipulate.

A little reflection shows that manipulation does not accord with our values of democracy, autonomy, freedom, and self-determination. And it is these values (or others like them, which we can generally agree upon) that need to underpin our decisions about digital technology. If a technology, or a specific implementation of it, undermines our values then it ought to be condemned. Condemnation is often enough to cause a back-track, but where condemnation does not work, we need to regulate.

Misinformation and fake news are the current adaptive problem in our society, and we need to start developing adaptations to this challenge and driving these adaptations into our cultural suite of curriculum and norms.

Human beings have spent hundreds of thousands of years evolving psychological defenses against lies and manipulation when in close contact with other humans in small societies. We are very good at telling when we’re being deceived or manipulated in this context. However, many of the psychological and digital techniques used to spread fake news, sustain echo chambers, and coerce users are new and we do not yet have an innate suite of defenses. We need to decide whether it is unfair for governments, platforms, advertisers, and propagandists to use techniques we are not psychologically prepared for. We need to decide collectively if this kind of content is OK or needs to be condemned.

A values-based approach

Agreeing upon our values can be tricky, as political debates highlight. However, there is common ground, for example all sides of the political spectrum can usually agree that democracy is something worth valuing. We ought to have ongoing discussions that continually define and refine our values as a collective society.

It is not merely a case of ‘well, that’s just the way it is’; we have the power to decide what is OK and what is not, but that depends on our values. Collective values can only be determined through in-depth discussion. We need community meetings, hui, focus groups, citizen juries, and surveys. We need to build up a picture of the foundations for an ethical approach.

Many of the concerns around digital media and AI algorithms are not new problems; they are existing threats re-packaged, such as coercion and manipulation, hate and prejudice. Confronted with the task of designing intelligence, we are starting to say, ‘this application looks vulnerable to bias’, or ‘that application seems to undermine democracy…’

With great intelligence comes great responsibility

It’s not really AI or digital tech per se that we need to be ethical about. AI is just the implementation of intelligence; what we need to be ethical about is what we use intelligence for, and the intentions of those behind the deployment of intelligent methods.

We need to reflect on just what intelligence ought to be used for. And if we remember that we are intelligent systems ourselves, we ought to also reflect upon the moral nature of our own actions, individually and as a society. If we wouldn’t want an algorithm or AI to act in a certain way because it is biased or exploitative or enhances selfish interests, spreads falsehoods, or unduly concentrates power, should we be acting in that way ourselves in our day to day lives? Digital ethics starts with ethical people.

Developing digitally ethical developers

Many behaviours that were once acceptable have become unacceptable over time, instances of this are sometimes seen as ethical progress. It is possible that future generations will approach digital technologies in more inclusive and constructive ways than we’re seeing at present. In order to ensure that future digital technologies are developed ethically, we need to take a ground up approach.

School curricula need to include lessons to empower the next generation with an understanding of math and logic (so that the systems can be appreciated), language (to articulate concern constructively), ethical reasoning (not by decreeing morality, but by giving students the tools to reason in ethical terms), information literacy (this means understanding the psychological, network, and population forces that drive which information thrives and why), and epistemology (how to determine what is fact and what is not and why this matters). With this suite of cognitive tools, future developers will be more likely to make ethical decisions.

Ethical standards

As well as ensuring we bring up ethical developers, we need to make sure that the rules for development are ethical. This means producing a set of engineering standards surrounding the social impact of digital technologies. Much work has been published on the ethics of techniques like nudging and we need to distil this body of literature into guidance for acceptable designs. It may be the case that we need to certify developers who have signed up to such guidance or mandate such agreement as with other professional bodies.

As we build our world we build our ethics

The way we structure our world affects how we behave and what information we access. Leaving your keys by the door reduces your chance of forgetting them, and building algorithms that reward engagement increases the chance of echo chambers and aggression.

We need to structure our world for civility by modeling and rewarding civil digital behavior. Mechanisms for community condemnation, down-voting, and algorithms that limit the way that digital conflict can spread may all be part of the solution. Sure, you can say what you want, but it shouldn’t be actively promoted unless it’s true (and evidenced) and decent.

We know from research that uncivil digital interactions decrease trust, and this could undermine the values we hold as a society. Similarly, we know that diverse groups make the best decisions, so platforms shouldn’t isolate users into echo chambers. An ethical approach would ensure diverse opinions are heard by all.
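As a purely illustrative sketch of what ‘structuring for civility’ could look like in code, consider ranking posts on engagement alone versus a score that also down-weights community down-votes and uncivil content. The post data, weights and scoring functions below are all invented for illustration; no real platform’s algorithm is implied.

```python
# Invented example: contrast an engagement-only ranking with a
# civility-aware ranking that penalises down-votes and uncivil posts.

posts = [
    {"id": "a", "engagement": 980, "downvotes": 420, "uncivil": True},
    {"id": "b", "engagement": 450, "downvotes": 15,  "uncivil": False},
    {"id": "c", "engagement": 610, "downvotes": 40,  "uncivil": False},
]

def engagement_score(post):
    # Reward raw engagement only, regardless of how the community reacted.
    return post["engagement"]

def civility_aware_score(post, downvote_penalty=1.5, incivility_penalty=400):
    # Same engagement signal, but discounted by condemnation and incivility.
    score = post["engagement"] - downvote_penalty * post["downvotes"]
    if post["uncivil"]:
        score -= incivility_penalty
    return score

print("Engagement-only ranking:",
      [p["id"] for p in sorted(posts, key=engagement_score, reverse=True)])
print("Civility-aware ranking: ",
      [p["id"] for p in sorted(posts, key=civility_aware_score, reverse=True)])
```

The point is not these particular weights; it is that the choice of objective function is itself a value judgement, which is exactly why it should not be left to platforms alone.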

Finally, from the top down

The relationship between legislation and norms is one of chicken and egg. Just as upholding norms can drive legislation, legislating can also drive norms.

It might be useful to have regulatory bodies such as ethical oversight committees, just as we have for medical research (another technological domain with the potential to impact the wellbeing of many people). Ethics committees could evaluate proposals or implemented technology and adjudicate on the changes or modifications required for the technology to be ethically acceptable. Perhaps society could refer questionable technologies to these committees for evaluation, and problematic designs could be sent for investigation. Our engineering standards, our collectively agreed values, and tests of any intent to exploit could then be applied.

Often an ethical approach means applying the ethics that we have built up over decades to a new domain, rather than dreaming up new ethics on the spot. It probably ought to be the case that we apply existing frameworks such as broadcasting and advertising standards, content restriction, and so on, to novel means of information distribution. Digital technologies should not undermine any rights that we have agreed in other domains.

False or unsupported claims are not permitted in advertising because we protect the right of people to not be misled and exploited. As a society we ought to condemn unsupported and false claims in other domains too, because of the risk of exploitation and manipulation for the gain of special interest groups.

The take home

Digital ethics (especially for digital media) should be about ensuring that technology does not facilitate exploitation, deception, division, or undue concentration of power. Our digital ethics should protect existing rights and ensure that wellbeing is not impinged upon. To ensure this we need to take a multi-faceted approach with a long-term view. Through top-down, bottom-up and world-structuring approaches, little by little we can move into an ethical digital world.

Do we care about future people?

dead planet

We have just published an article (free online) on existential risks, with a NZ perspective [1]. A blog version is hosted by SciBlogs. What follows is the introduction to that blog:

Do we value future people?

Do we care about the wellbeing of people who don’t yet exist? Do we care whether the life-years of our great-grandchildren are filled with happiness rather than misery? Do we care about the future life-years of people alive now?

We are assuming you may answer “yes” in general terms, but in what way do we care?

You might merely think, ‘It’d be nice if they are happy and flourish’, or you may have stronger feelings such as, ‘they have as much right as me to at least the same kind of wellbeing that I’ve had’. The point is that the question can be answered in different ways.

All this is important because future people, and the future life-years of people living now, face serious existential threats. Existential threats are those that quite literally threaten our existence. These include runaway climate change, ecosystem destruction and starvation, nuclear war, deadly bioweapons [2], asteroid impacts, or artificial intelligence (AI) run amok [3], to name just a few…

Click here to read the full blog.
