AI content targeting may violate human rights


Does AI-driven micro-targeting of digital content violate human rights? The UN says ‘yes!’

Last month the United Nations published a document on AI and human rights with a particular focus on automated content distribution. The report focuses on the rights to freedom of opinion and expression, which are often excluded from public and political debates on artificial intelligence.

The overall argument is that an ethical approach to AI development, particularly in the area of content distribution, is not a replacement for respecting human rights.

Automation can be a positive thing, especially in cases where it can remove human operator bias. However, automation can be negative if it impedes the transparency and scrutability of a process.

AI dissemination of digital content

The report outlines the ways in which content platforms moderate and target content and how opaque AI systems could interfere with individual autonomy and agency.

Artificial intelligence is proving problematic in the way it is deployed to assess content and prioritize which content is shown to which users.

“Artificial intelligence evaluation of data may identify correlations but not necessarily causation, which may lead to biased and faulty outcomes that are difficult to scrutinize.”

Without ongoing supervision, AI systems may “identify patterns and develop conclusions unforeseen by the humans who programmed or tasked them.”

Browsing histories, user demographics, semantic and sentiment analyses, and numerous other factors are used to determine which content is presented to whom. Paid content often supplants unpaid content. The rationale behind these decisions is often opaque to users, and often to platforms too.

Additionally, AI applications supporting digital searches massively influence the dissemination of knowledge and this personalization can minimize exposure to diverse views. Biases are reinforced and inflammatory content or disinformation is promoted as the system measures success by online engagement. The area of AI and human values alignment is begging for critical research and is discussed in depth by AI safety researcher Paul Christiano elsewhere.
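
To make the pattern concrete, here is a minimal, purely hypothetical sketch of engagement-driven ranking. The item names, scores and paid boost are invented for illustration and are not drawn from any real platform; the point is only that when the scoring function rewards predicted engagement alone, low-engagement (often minority or nuanced) content sinks and paid or inflammatory content rises.

```python
# Hypothetical sketch of engagement-driven ranking (illustrative values only).
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float  # e.g. predicted clicks/shares from an upstream model
    paid: bool = False

def rank_feed(items, paid_boost=1.5):
    # Paid content gets a multiplicative boost, then everything is sorted by
    # expected engagement alone -- no signal for accuracy, diversity or context.
    def score(item):
        return item.predicted_engagement * (paid_boost if item.paid else 1.0)
    return sorted(items, key=score, reverse=True)

feed = rank_feed([
    Item("Inflammatory hot take", 0.9),
    Item("Sponsored post", 0.5, paid=True),
    Item("Careful minority viewpoint", 0.3),
])
print([item.title for item in feed])
# ['Inflammatory hot take', 'Sponsored post', 'Careful minority viewpoint']
```

Nothing in that scoring function rewards truth, context or plurality, which is precisely the concern.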

From the point of view of human autonomy, these systems can interfere with individual agency to seek and share ideas and opinions across ideological, political or societal divisions, undermining individual choice to find certain kinds of information.

This is especially so because algorithms will typically deprioritize content with lower levels of engagement (e.g. minority content). The systems are also often hijacked via bots, metadata hacks, and possibly adversarial content.

Not only is much content obscured from many users, but otherwise well-functioning AI systems can be tripped up by small manipulations to the input. Without a ‘second look’ at the context (as our hierarchically structured human brain does when something seems amiss) AI can be fooled by ‘adversarial content’.

For example, in the images below, the AI identifies the left picture as ‘fox’ and the slightly altered right picture as ‘puffer fish’. An equally striking example is the elephant vs sofa error, which is clearly due to a shift in context.

[Image: two near-identical photographs, classified as ‘fox’ on the left and ‘puffer fish’ on the right]
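
For readers who want to see the mechanics, below is a minimal sketch of one standard way of generating such ‘adversarial content’, the fast gradient sign method (FGSM). The pretrained model, the random stand-in image and the perturbation size are all assumptions made for illustration; this is not the system or method behind the images above.

```python
# Minimal FGSM sketch (PyTorch): nudge an image slightly in the direction that
# most increases the model's loss, which often flips the prediction even though
# the image looks unchanged to a human.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()          # any pretrained classifier
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for a real photo

original_class = model(image).argmax(dim=1)              # the model's initial label

# One gradient step *against* the original label
loss = F.cross_entropy(model(image), original_class)
loss.backward()
epsilon = 0.01                                           # small enough to be imperceptible
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

new_class = model(adversarial).argmax(dim=1)             # frequently a different, absurd label
print(original_class.item(), new_class.item())
```

With a real, correctly normalised photograph the effect is starker still: the perturbed image looks identical to a human observer but can be confidently misclassified, with no ‘second look’ to catch the error.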

Cultural context is particularly good at tripping up AI systems, and can result in content being removed on the basis of biased or discriminatory associations. For example, the DeepText AI identified “Mexican” as a slur because of the contexts in which the word is often used. Such content removal is another way that AI can interfere with user autonomy.
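
As a toy illustration of how context-blindness produces this kind of over-removal, here is a hypothetical keyword-style filter. DeepText itself is a far more sophisticated neural system; the point is only that any moderation model that has absorbed a term's biased usage contexts, without modelling the current context, will flag benign uses too.

```python
# Hypothetical context-blind moderation: a term learned from hostile contexts
# gets flagged everywhere, including entirely benign uses.
FLAGGED_TERMS = {"mexican"}  # absorbed as a 'slur' from biased training contexts

def naive_moderate(post: str) -> str:
    words = {w.strip(".,!?").lower() for w in post.split()}
    return "REMOVED" if words & FLAGGED_TERMS else "OK"

print(naive_moderate("Proud of my Mexican heritage!"))    # REMOVED -- a false positive
print(naive_moderate("Great tacos at the night market"))  # OK
```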

There is an argument that individuals should be exposed to parity and diversity in political messaging, but micro-targeting of content is creating a “curated worldview inhospitable to pluralistic political discourse.”

Overall, AI targeting of content incentivizes broad collection of personal data and increases the risk of manipulation through disinformation. Targeting can exclude whole classes of users from information or opportunities.

So what should we be doing about all this? The UN report offers a vision for a human rights based approach to AI and content distribution.

A Human Rights Legal Framework for AI

The UN report outlines the scope of human rights obligations in the context of artificial intelligence and concludes that:

“AI must be deployed so as to be consistent with the obligations of States and the responsibilities of private actors under international human rights law. Human rights law imposes on States both negative obligations to refrain from implementing measures that interfere with the exercise of freedom of opinion and expression and positive obligations to promote rights to freedom of opinion and expression and to protect their exercise.”

What does this mean?

All people have the right to freedom of opinion without interference.

(This is guaranteed by Article 19(1) of the International Covenant on Civil and Political Rights and Article 19 of the Universal Declaration of Human Rights.)

Basically, undue coercion cannot be employed to manipulate an individual’s beliefs, ideologies, reactions and positions. We need to have a public discussion about the limits of coercion or inducement, and what might be considered interference with the right to form an opinion.

This is a novel issue because AI curation of online content now micro-targets information “at a scale beyond the reach of traditional media.” Our present norms (based on historical technologies) may not be up to the task of adjudicating on novel techniques.

The UN report argues that companies should:

“at the very least, provide meaningful information about how they develop and implement criteria for curating and personalizing content on their platforms, including policies and processes for detecting social, cultural or political biases in the design and development of relevant artificial intelligence systems.”

The right to freedom of expression may also be impinged upon by AI curation. We've seen how automated content takedown may run afoul of context idiosyncrasies. This can result in the systematic silencing of individuals or groups.

The UN Human Rights Committee has also found that States should “take appropriate action … to prevent undue media dominance or concentration by privately controlled media groups in monopolistic situations that may be harmful to a diversity of sources and views.”

Given these problems, more needs to be done to help users understand what they are presented with. There are some token gestures toward selectively identifying sponsored content, but users need to be presented with relevant metadata, sources, and the alternatives to the content that they are algorithmically fed. Transparency about confidence measures, known failure scenarios and appropriate limitations on use, for example, would go a long way.

We all have a right to privacy, yet AI systems are presently used to infer private facts about us that we may otherwise decline to disclose. Information such as sexual orientation, family relationships, religious views, health conditions or political affiliation can be inferred from network activity, and even if never explicitly stated, these inferences can be represented implicitly in neural nets and drive content algorithms.
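
To see how little it takes, here is a fully synthetic sketch of attribute inference. The data are fabricated and the five 'activity' features are stand-ins for signals such as page likes or watch time, but the pattern (an undisclosed trait recovered from correlated behaviour) is the substance of the privacy concern.

```python
# Synthetic sketch of attribute inference: a trait the user never disclosed is
# recovered from innocuous activity signals that merely correlate with it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
undisclosed_trait = rng.integers(0, 2, n)        # never stated by the user
# Activity signals (stand-ins for likes, watch time, ...) correlated with the trait
activity = rng.normal(loc=undisclosed_trait[:, None] * 0.8, scale=1.0, size=(n, 5))

model = LogisticRegression().fit(activity[:800], undisclosed_trait[:800])
accuracy = model.score(activity[800:], undisclosed_trait[800:])
print(f"Inferred undisclosed trait with accuracy {accuracy:.2f} (chance would be ~0.50)")
```

Once an inferred label like this exists inside a targeting system, it can shape what a person is shown whether or not they ever chose to reveal it.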

Inferences of this kind could also put AI systems in violation of the obligation of non-discrimination.

Finally, human rights law guarantees individuals whose rights are infringed a remedy determined by competent judicial, administrative or legislative authorities. Remedies “must be known by and accessible to anyone who has had their rights violated,” but the logic behind an algorithmic decision may not be evident even to an expert trained in the underlying mechanics of the system.

Solutions & Standards

We need a set of substantive standards for AI systems. This must apply to companies and to States.

Companies need professional standards for AI engineers, which translate human rights responsibilities into guidance for technical design. Codes of ethics (such as those now adopted by most of the major AI companies) may be important but are not a substitute for recognition of human rights.

Human rights law is the correct framework within which we must judge the performance of AI content delivery systems.

Companies and governments need to embrace transparency; simple explanations about how these systems function will go a long way toward informing public discourse, education and debate on this issue.

The UN report also recommends processes for artificial intelligence systems that include: human rights impact assessments, audits, a respect for personal autonomy, notice and consent processes, and remedy for adverse impacts.

The report concludes with recommendations for States and for Companies, which includes the recommendation that, “companies should make all artificial intelligence code fully auditable.”

All this sounds very sensible and is a conservative approach to what could rapidly become an out-of-control problem of information pollution.

If anyone is interested in my further thoughts on “AI, Freedom and Democracy”, you can listen to my talk at the NZ Philosophy Conference 2017 here.

Nuclear insanity has never been worse


Donald Trump has just announced a likely build-up of US nuclear capability

The threat of nuclear war has probably never been higher, and continues to grow. Given emotional human nature, cognitive irrationality and distributed authority to strike, we have merely been lucky to avoid nuclear war to date.

These new moves, without a doubt, raise the threat of a human extinction event in the near future. The reasons why are explained in a compelling podcast by Daniel Ellsberg.

Ellsberg (the leaker of the Pentagon Papers, whose fallout helped bring down the Nixon presidency) explains the key facts. Contemporary modelling shows the likelihood of a nuclear winter is high if more than a couple of hundred weapons are detonated. Previous Cold War modelling ignored the smoke from the resulting firestorms, and so vastly underestimated the risk.

On the other hand, detonation of a hundred or so warheads poses low or no risk of nuclear winter (merely catastrophic destruction). As such, and as nuclear strategist Ellsberg forcefully argues, the only strategically relevant nuclear weapons are those on submarines. This is because they cannot be targeted by pre-emptive strikes, and yet still (with n = 300 or so) provide the necessary deterrence.

Therefore, land-based ICBMs are of no strategic value whatsoever, and merely provide additional targets for additional weapons, thereby pushing the nuclear threat from the deterrence/massive destruction game into the human extinction game. This is totally unacceptable.

Importantly, Ellsberg further argues that the reason the US is so determined to keep maintaining and building nuclear weapons is the billions of dollars in business this generates for Lockheed Martin, Boeing, and others. We are escalating the risk of human extinction in exchange for economic growth.

John Bolton, Trump’s National Security Advisor, is corrupted by the nuclear lobbyists and stands to gain should capabilities be expanded.

There is no military justification for more than a hundred or so nuclear weapons (China’s nuclear policy reflects this – they are capable of building many thousands, but maintain only a fraction of this number). An arsenal of a hundred warheads is an arsenal that cannot destroy life on planet Earth. If these are on submarines they are difficult to target. Yet perversely we sustain thousands of weapons, at great risk to our own future.

The lobbying for large nuclear arsenals must stop. The political rhetoric that this is for our own safety and defence must stop. The drive for profit above all else must stop. Our children’s future depends on it.

Growing the AI talent pool: We need deep learning

The AI Forum NZ recently kicked-off six working groups to investigate a range of emerging issues around artificial intelligence and society.

Working group #5 has its focus on Growing the AI Talent Pool.

New Zealand is facing a shortage of technical workers capable of developing AI applications. In what follows I argue that ‘growing’ is the right metaphor to apply to responsibly solving this problem in the long term.

We will clearly need to increase the size of the available talent pool. That will be a multifactorial task that includes increasing the numbers of people choosing AI, data science and machine learning as a career; increasing the throughput of formal learning institutions; increasing the availability and uptake of on-the-job and mid career training; and increasing the supply of talent from outside of New Zealand.

However, having an ideal talent pool is not merely about the numbers; it is also about ensuring that the talent we grow is the right kind of talent, with the right traits and characteristics to best enable a prosperous, inclusive and thriving future New Zealand. This means developing skills that go beyond technical capability. It also means ensuring that non-technical specialists understand machine learning and the capabilities of AI in order to make optimal and ethical use of it.

Impacting society at scale

With any technology that affects society at scale (as AI can clearly do) we have an obligation to develop it responsibly. In the past, the industrial revolution was poorly managed, resulting in the exploitation of factory labour. Technological innovation in the Twentieth Century began the catastrophe of atmospheric pollution. More recently, we can note that:

“In the past, our society has allowed new technologies to diffuse widely without adequate ethical guidance or public regulation. As revelations about the misuse of social media proliferate, it has become apparent that the consequences of this neglect are anything but benign. If the private and public sectors can work together, each making its own contribution to an ethically aware system of regulation for AI, we have an opportunity to avoid past mistakes and build a better future.” – William A. Galston (Ezra K. Zilkha Chair in Governance Studies)

When the things we do and create impact society at scale there is a responsibility to get it right. That’s why we have professional certifications and a structured programme of non-technical knowledge and skill learning embedded in courses such as engineering, medicine, and law. Take for example the University of Auckland’s Medical Humanities programme and compare that to the course list for University of Otago’s computer science department, where only one paper mentions ethics as a designated component.

AI talent origin stories

Furthermore, machine learning practitioners and AI developers do not come from any one particular development pipeline. You do not need to specifically have a PhD in AI to fill these roles. AI practitioners can come from any mathematically rigorous training programme. Graduates in computer science, math, physics, finance, logic, engineering, and so on, often transition to AI and machine learning.

One glaring issue is that some of these generalist disciplines do not have a programme of social responsibility and professional ethics embedded in them (engineering may be an exception). Nor are there professional certification requirements for a lot of these skilled workers. This is in stark contrast to other professional disciplines such as accounting, law, nursing, teaching, medicine, and many others.

Social responsibility and professional ethics

To ensure the responsible development of developers, we either need to embed social responsibility and ethics in all the programmes that can lead to AI practice, or take the whole thing a step further back to high school, or, stepping up a level, ensure institutional codes and professional regulation. Probably all three are required.

Society expects the developers of intelligence to respect public institutions, privacy, and people as autonomous agents among many other things. We do not want to be phished for phools for profit or to further an agenda. Just because something affecting society is possible does not mean it is automatically acceptable.

Just as medical writers sign up to a code of medical writing ethics to push back against and rein in the whims of Big Pharma (who employ most of them), we need to be able to trust the talent pool that will be developing the AI that affects us all.

The problem may not be so great when workers are employed by businesses that are ethical, socially responsible, and whose aims are aligned with those of societal flourishing. It can be argued that several of the big tech firms are moving in this direction; IBM, Google and Microsoft, for example, all published ethical and/or social codes for the development of AI in 2018. But not all developers will migrate from their technical training into socially responsible firms.

IBM's Everyday Ethics for AI report notes the following: “Nearly 50% of the surveyed developers believe that the humans creating AI should be responsible for considering the ramifications of the technology. Not the bosses. Not the middle managers. The coders.” – Mark Wilson, Fast Company, on Stack Overflow's 2018 Developer Survey Results

Growing true AI talent through deep learning

Growing the talent pool is an appropriate metaphor. We do not just want a wider harvest of inadequate talent, nor do we merely want the planting of many more average seeds. We also need to choose the right educational soil and to add the right fertilizer of ideas, concepts and socially responsible skills.

Intervention is needed at three levels and across three time horizons. We need broad social, ethical, civics, and society education prior to the choice of a career specialization.

We need to cross-fertilize tertiary training in all disciplines that lead into AI practice with courses and dialogue on social responsibility, human flourishing, ethics, law, democracy and rights. And we need to ensure that professional engineering, AI and machine learning institutions mandate adherence to appropriate codes of conduct.

We need deep learning around all these issues from early on.

We need to begin now with current practitioners, we need to foster these ideals in those who have selected AI as a career, and we need to prepare the future generation of AI talent.

If the tech specialists don’t see the force and necessity of these points, then that in itself proves the truth of the argument.

Who is responsible?

Here I am with no background in AI or machine learning telling those who would make a career in these fields that they must study soft skills too. So why do all our voices count in this space?

We are talking about the applications of intelligence, and as intelligent beings we are all qualified to talk about how intelligence is distributed in society, how it is deployed and what functions it has.

When you go to a conference on nuclear physics everyone at the conference may be a nuclear physicist. But those that develop a technology are not automatically those who get to decide how we use it.

We see this when policy makers, ethicists and the public determine whether those nuclear physicists are permitted to detonate atomic weapons. We see this when the committees at the FDA determine whether medical technologies and pharmaceuticals will be licensed for use in society. AI and machine learning applications bear much in common with these other domains.

With great intelligence (and the development of great intelligence) comes great responsibility.

Importance and urgency

The reason all this is important is because digital technology now infuses every domain in society, and AI is rapidly becoming an integral part of Law, Medicine, Engineering, and every other professional discipline. We are going to need professionals who understand AI, but we are also going to need technical developers who understand the professional aspects.

There are tasks in society that are urgent and those that are important. There are interventions that will have a narrow impact and those that will have a wide ranging impact. In addressing those issues that are urgent and narrow (and therefore manageable) we cannot forget the issues that are ongoing and less well-defined, but highly impactful.

The most important thing moving forward is to ensure a just and cohesive society that supports democratic institutions and upholds social norms and rights; a society that does not use exploitation or manipulation as key processes for generating profit; and a society in which technological innovation respects the evolution of institutions.

We must ensure that as a society we develop a pool of talented, socially aware, and responsible AI practitioners.

Efficient Cancer Care in New Zealand: Lessons from five years of Australian research literature

stethoscope

The cost of cancer care is rising and a review of the research literature on cancer care in Australia can teach many lessons to us in New Zealand.

In Australia real costs for cancer per person (adjusting for inflation) have more than doubled in the last 25 years. The drivers are multifactorial but due in part to upward trends in diagnosis (often the result of new diagnostic methods and screening programmes), the rising cost of cancer pharmaceuticals, and increasing expectations.

The largest costs are treatment costs. Taking Australia as an example, hospital services, including day admitted patients (usually for chemotherapy), account for 79% of cancer costs. The number of approved cancer medicines has doubled since 2013.

Rising costs in health care are not sustainable. We need better efficiency.

Efficiency in health is about making choices that maximise the health outcomes gained from the resources allocated. And it seems like there are a number of different ways that we could target the cancer care pathway to improve efficiency. However, this can only work if the entire care path is looked at as a whole, and the notions of funding silos are dispensed with.

Prevention

For example, healthy lifestyle and regular screening could prevent an estimated one third to one half of all cancers, but presently, only single-figure percentages of cancer funding target prevention.

This is despite the modelled return on investment for cancer prevention programmes, which is often $3–$4 per $1 spent. As an added bonus, cancer prevention can also reduce the burden of other diseases (e.g. reducing inactivity can also benefit diabetes and heart disease).

Screening

Participation rates in screening programmes are generally poor. For many programmes, 40–60% is considered a good uptake. This is inadequate. Increasing screening rates is likely to increase the effectiveness of screening programmes, and modelling suggests that in some cases sufficient uptake can lead to future cost savings.

We should also do more to ensure that patients who are up-to-date with screening are not re-screened (e.g. those who have had recent colonoscopy) and ensure that follow-up after screening is based on guidelines. It often isn’t.

Diagnosis

Over-diagnosis is becoming a problem in the cancer care path. Breast screening often reveals anomalies that are not cancer. Artificial intelligence systems used to augment physician diagnosis could curb this.

Not only is there evidence from a 2015 systematic review that prostate cancer screening is not cost effective, but prostate screening with PSA can lead to cancer diagnoses (and treatment) in men whose tumors will never cause them problems.

In Australia, for example, there has also been a rapid spike in thyroid cancer diagnoses, leading to an increase in thyroid surgeries, but no corresponding change in deaths from thyroid cancer.

Reducing unnecessary detection and a conservative approach could lead to millions of dollars in savings and reduced harms to patients from over-diagnosis.

Treatment

The cost of treatment is also a problem. In Australia, cancer accounts for 6.6% of hospital costs, but the cost of cancer medication is one sixth of the total pharmaceutical budget. The 10-fold increase in cost of these medicines over 10 years is a serious threat to patients and health systems.

We could decrease the costs of cancer medications by modifying prescription habits, considering treatment costs in professional guidelines, disinvesting in medicines that have not proven cost-effective in the real world, improving patient selection, and increasing use of generics.

There is evidence of over-treatment. A watch-and-wait approach is appropriate for many early-phase prostate cancers, and active surveillance of low-risk patients could reduce costs while remaining clinically reasonable.

We could consider pharmacist review of prescriptions to avoid the risk of adverse drug reactions (and the associated treatment costs). We could do more to ensure there are no unjustified variations in clinical practice.

Follow-up

We should ensure that patients have a written care plan and are not receiving follow-up from multiple overlapping providers. Also, follow-up should be guideline based. Some studies indicate that less than half of bowel cancer patients received guideline-based follow-up colonoscopy.

We could make more use of primary care, given that studies have not shown hospital follow-up to be any more effective at detecting recurrent disease.

Traditional follow-up focuses on detecting cancer recurrence, but this can fail to adequately address many survivors' concerns. Getting back to work (and being supported to do so) is important to reduce the societal costs of cancer. Occupational therapy may be important in facilitating this.

Palliation

Palliative care costs less than hospital care and is under-utilised. But to optimise the use of out-of-hospital palliative care, patients need to have accurate prognostic awareness, allowing them to make informed choices. This requires important conversations with treatment providers. Lack of a palliative care plan leads to unnecessary emergency room visits and hospital admissions that are primarily palliative.

Research

Research costs could also be streamlined. We should ensure that the cancer research being undertaken reflects the burden of cancer. Lung cancer has the greatest burden of all cancer (especially in terms of years of life lost) and yet there is far less lung cancer research than this burden demands.

Cancers including leukaemia, breast, ovary, liver and skin often receive proportionately more funding than their disease burden warrants. Prevention, cancer control, and survivorship research could be funded more, because effectiveness in these components of the care path leads to downstream cost savings and potentially increased social productivity.

Summary

Overall, it looks like prevention and early detection are generally underfunded. There is also scope to increase participation in screening programs.

The rapidly rising costs of treatment, including medications, need to be curtailed through wise practice and new models of care that prioritise prevention, screening, surveillance, guideline- and evidence-based follow-up, return to work, and palliative care where appropriate.

Reducing the cost of medications is a high priority, with large potential cost savings. The focus should be on treatments that are proven to work well in the real world rather than on increasing use of expensive drugs with marginal benefit.

We need a long-term focus including a culture of change and workforce planning. Further efficiencies might be gained through initiatives such as: Choosing Wisely, addressing variations in process and treatment, minimising non-adherence to treatment, avoiding communication failures, ceasing ineffective interventions, coordinating care, reducing admissions, using generics, negotiating price, reducing adverse events, taking a societal perspective of costs, and considering upfront cost versus long-term impact.

Further Reading

Ananda, S., Kosmider, S., Tran, B., Field, K., Jones, I., Skinner, I., . . . Gibbs, P. (2016). The rapidly escalating cost of treating colorectal cancer in Australia. Asia-Pacific Journal of Clinical Oncology, 12(1), 33-40.

Chen, C. H., Kuo, S. C., & Tang, S. T. (2017). Current status of accurate prognostic awareness in advanced/terminally ill cancer patients: Systematic review and meta-regression analysis. Palliative Medicine, 31(5), 406-418.

Colombo, L. R. P., Aguiar, P. M., Lima, T. M., & Storpirtis, S. (2017). The effects of pharmacist interventions on adult outpatients with cancer: A systematic review. Journal of Clinical Pharmacy and Therapeutics, 42(4), 414-424.

Cronin, P., Kirkbride, B., Bang, A., Parkinson, B., Smith, D., & Haywood, P. (2017). Long-term health care costs for patients with prostate cancer: a population-wide longitudinal study in New South Wales, Australia. Asia-Pacific Journal of Clinical Oncology, 13(3), 160-171.

Doran, C. M., Ling, R., Byrnes, J., Crane, M., Shakeshaft, A. P., Searles, A., & Perez, D. (2016). Benefit cost analysis of three skin cancer public education mass-media campaigns implemented in New South Wales, Australia. PLoS ONE, 11 (1).

Furuya-Kanamori, L., Sedrakyan, A., Onitilo, A. A., Bagheri, N., Glasziou, P., & Doi, S. A. R. (2018). Differentiated thyroid cancer: Millions spent with no tangible gain? Endocrine-Related Cancer, 25(1), 51-57.

Gordon, L. G., Tuffaha, H. W., James, R., Keller, A. T., Lowe, A., Scuffham, P. A., & Gardiner, R. A. (2018). Estimating the healthcare costs of treating prostate cancer in Australia: A Markov modelling analysis. Urologic Oncology: Seminars and Original Investigations, 36(3), 91.e97-91.e15.

Jefford, M., Rowland, J., Grunfeld, E., Richards, M., Maher, J., & Glaser, A. (2013). Implementing improved post-treatment care for cancer survivors in England, with reflections from Australia, Canada and the USA. British Journal of Cancer, 108(1), 14-20.

Langton, J. M., Blanch, B., Drew, A. K., Haas, M., Ingham, J. M., & Pearson, S.-A. (2014). Retrospective studies of end-of-life resource utilization and costs in cancer care using health administrative data: A systematic review. Palliative Medicine, 28(10), 1167-1196. doi:10.1177/0269216314533813

Lao, C., Brown, C., Rouse, P., Edlin, R., & Lawrenson, R. (2015). Economic evaluation of prostate cancer screening: A systematic review. Future Oncology, 11(3), 467-477.

Leggett, B. A., & Hewett, D. G. (2015). Colorectal cancer screening. Internal Medicine Journal, 45(1), 6-15.

Ma, C. K. K., Danta, M., Day, R., & Ma, D. D. F. (2018). Dealing with the spiralling price of medicines: issues and solutions. Internal Medicine Journal, 48(1), 16-24.

MacLeod, T. E., Harris, A. H., & Mahal, A. (2016). Stated and Revealed Preferences for Funding New High-Cost Cancer Drugs: A Critical Review of the Evidence from Patients, the Public and Payers. The Patient, 9(3), 201-222. doi:10.1007/s40271-015-0139-7.

Olver, I. (2018). Bowel cancer screening for women at midlife. Climacteric, 21(3), 243-248.

Shih, S. T., Carter, R., Heward, S., & Sinclair, C. (2017). Economic evaluation of future skin cancer prevention in Australia. Preventive Medicine, 99, 7-12.

Wait, S., Han, D., Muthu, V., Oliver, K., Chrostowski, S., Florindi, F., & de Lorenzo, F. (2017). Towards sustainable cancer care: Reducing inefficiencies, improving outcomes—A policy report from the All.Can initiative. Journal of Cancer Policy, 13, 47-64.

Youl, P., Baade, P., & Meng, X. (2012). Impact of prevention on future cancer incidence in Australia. Cancer Forum, 36(1).

Economic evidence for closing border in a severe pandemic: now we need the values discussion


From the University of Otago Press Release about our latest study:

Closing the border may make sense for New Zealand in some extreme pandemic situations, according to a newly published study of the costs and benefits of taking this step.

“In a severe pandemic, timely/effective border closure could save tens of thousands of New Zealand lives far outweighing the disruptions to the economy and the temporary end to tourism from international travellers,” says one of the authors Professor Nick Wilson from the University of Otago, Wellington.

“This finding is consistent with work that we published last year – except our new study used a more sophisticated model developed by the New Zealand Treasury for performing cost-benefit analyses,” says Professor Wilson.

The research has just been published in the Australian and New Zealand Journal of Public Health.

Another of the authors, Dr Matt Boyd, comments that with increasing risks of new pandemics due to the growing density of human populations and various socio-economic, environmental and ecological factors, there is a need to look at different scenarios for better pandemic planning.

Read the full press release here

The study is published here

What does taking an ethical approach to digital media and artificial intelligence mean?


Several recent reports have argued that we need to take ‘an ethical approach’ when designing and using digital technologies, including artificial intelligence.

Recent global events such as various Facebook, fake news, and Cambridge Analytica scandals appear to emphasize this need.

But what does taking an ethical approach entail? Here are a few ideas I’ve spent Sunday morning thinking about.

Most importantly, there is no one thing that we can do to ensure that we design and use digital technology ethically. We need to embark on a multifaceted approach that ensures we act as we ought to moving forward.

There is an onus on governments, developers, content creators, users, educators and society generally. We need to ensure that we act ethically, and also that we can spot unethical actors. This requires a degree of ethical literacy, information and media literacy, and structuring our world in such a way that the right thing is easier to do.

This will involve a mix of education, regulation, praise and condemnation, and civility.

Truth and deception

Generally we accept that true information is good and false information is bad. We live our lives under the assumption that most of what other people say to us is true. If most of what people say to each other were not true then communication would be useless. The underlying issue is one of trust. A society that lacks trust is dysfunctional. In many cases the intent behind falsehoods is to deceive or manipulate.

A little reflection shows that manipulation does not accord with our values of democracy, autonomy, freedom, and self-determination. And it is these values (or others like them, which we can generally agree upon) that need to underpin our decisions about digital technology. If the technology, or the specific implementation of it, undermines our values then it ought to be condemned. Condemnation is often enough to cause a back-track, but where condemnation does not work, we need to regulate.

Misinformation and fake news are the current adaptive problem in our society, and we need to start developing adaptations to this challenge and driving these adaptations into our cultural suite of curriculum and norms.

Human beings have spent hundreds of thousands of years evolving psychological defenses against lies and manipulation when in close contact with other humans in small societies. We are very good at telling when we’re being deceived or manipulated in this context. However, many of the psychological and digital techniques used to spread fake news, sustain echo chambers, and coerce users are new and we do not yet have an innate suite of defenses. We need to decide whether it is unfair for governments, platforms, advertisers, and propagandists to use techniques we are not psychologically prepared for. We need to decide collectively if this kind of content is OK or needs to be condemned.

A values-based approach

Agreeing upon our values can be tricky, as political debates highlight. However, there is common ground, for example all sides of the political spectrum can usually agree that democracy is something worth valuing. We ought to have ongoing discussions that continually define and refine our values as a collective society.

It is not merely a case of ‘well that’s just the way it is’, we have the power to decide what is OK and what is not, but that depends on our values. Collective values can only be determined through in-depth discussion. We need community meetings, hui, focus groups, citizen juries, and surveys. We need to build up a picture of the foundations for an ethical approach.

Many of the concerns around digital media and AI algorithms are not new problems; they are existing threats re-packaged: coercion and manipulation, hate and prejudice. Confronted with the task of designing intelligence, we are starting to say, ‘this application looks vulnerable to bias, or that application seems to undermine democracy…’

With great intelligence comes great responsibility

It's not really AI or digital tech per se that we need to be ethical about; AI is just the implementation of intelligence. What we need to be ethical about is what we use intelligence for, and the intentions of those behind the deployment of intelligent methods.

We need to reflect on just what intelligence ought to be used for. And if we remember that we are intelligent systems ourselves, we ought to also reflect upon the moral nature of our own actions, individually and as a society. If we wouldn’t want an algorithm or AI to act in a certain way because it is biased or exploitative or enhances selfish interests, spreads falsehoods, or unduly concentrates power, should we be acting in that way ourselves in our day to day lives? Digital ethics starts with ethical people.

Developing digitally ethical developers

Many behaviours that were once acceptable have become unacceptable over time, instances of this are sometimes seen as ethical progress. It is possible that future generations will approach digital technologies in more inclusive and constructive ways than we’re seeing at present. In order to ensure that future digital technologies are developed ethically, we need to take a ground up approach.

School curricula need to include lessons to empower the next generation with an understanding of math and logic (so that the systems can be appreciated), language (to articulate concern constructively), ethical reasoning (not by decreeing morality, but by giving students the tools to reason in ethical terms), information literacy (this means understanding the psychological, network, and population forces that drive which information thrives and why), and epistemology (how to determine what is fact and what is not and why this matters). With this suite of cognitive tools, future developers will be more likely to make ethical decisions.

Ethical standards

As well as ensuring we bring up ethical developers, we need to make sure that the rules for development are ethical. This means producing a set of engineering standards surrounding the social impact of digital technologies. Much work has been published on the ethics of techniques like nudging and we need to distil this body of literature into guidance for acceptable designs. It may be the case that we need to certify developers who have signed up to such guidance or mandate such agreement as with other professional bodies.

As we build our world we build our ethics

The way we structure our world has impacts on how we behave and what information we access. Leaving your keys by the door reduces your chance of forgetting them, and building algorithms that reward engagement increases the chance of echo chambers and aggression.

We need to structure our world for civility by modeling and rewarding civil digital behavior. Mechanisms for community condemnation, down-voting, and algorithms that limit the way that digital conflict can spread may all be part of the solution. Sure, you can say what you want, but it shouldn’t be actively promoted unless it’s true (and evidenced) and decent.

We know from research that uncivil digital interactions decrease trust, and this could undermine the values we hold as a society. Similarly, we know that diverse groups make the best decisions, so platforms shouldn't isolate users into echo chambers. An ethical approach would ensure diverse opinions are heard by all.

Finally, from the top down

The relationship between legislation and norms is one of chicken and egg. Just as upholding norms can drive legislation, legislating can also drive norms.

It might be useful to have regulatory bodies such as ethical oversight committees, just as we have for medical research (another technological domain with the potential to impact the wellbeing of many people). Ethics committees can evaluate proposals or implemented technology and adjudicate on changes or modifications required for the technology to be ethically acceptable. Perhaps society could refer questionable technologies to these committees for evaluation, and problematic designs could be sent for investigation. Our engineering standards, our collectively agreed values, and tests of any intent to exploit, and so on, could then be applied.

Often an ethical approach means applying the ethics that we have built up over decades to a new domain, rather than dreaming up new ethics on the spot. It probably ought to be the case that we apply existing frameworks such as broadcasting and advertising standards, content restriction, and so on, to novel means of information distribution. Digital technologies should not undermine any rights that we have agreed in other domains.

False or unsupported claims are not permitted in advertising because we protect the right of people to not be misled and exploited. As a society we ought to condemn unsupported and false claims in other domains too, because of the risk of exploitation and manipulation for the gain of special interest groups.

The take home

Digital ethics (especially digital media) should be about ensuring that technology does not facilitate exploitation, deception, division, or undue concentration of power. Our digital ethics should protect existing rights and ensure that wellbeing is not impinged. To ensure this we need to take a multi-faceted approach with a long-term view. Through top-down, bottom-up and world structuring approaches, little by little we can move into an ethical digital world.

Do we care about future people?


We have just published an article (free online) on existential risks – with a NZ perspective. A blog version is hosted by SciBlogs. What follows is the introduction to that blog:

Do we value future people?

Do we care about the wellbeing of people who don’t yet exist? Do we care whether the life-years of our great-grandchildren are filled with happiness rather than misery? Do we care about the future life-years of people alive now?

We are assuming you may answer “yes” in general terms, but in what way do we care?

You might merely think, ‘It’d be nice if they are happy and flourish’, or you may have stronger feelings such as, ‘they have as much right as me to at least the same kind of wellbeing that I’ve had’. The point is that the question can be answered in different ways.

All this is important because future people, and the future life-years of people living now, face serious existential threats. Existential threats are those that quite literally threaten our existence. These include runaway climate change, ecosystem destruction and starvation, nuclear war, deadly bioweapons, asteroid impacts, or artificial intelligence (AI) run amok, to name just a few…

Click here to read the full blog.

A response to the AI Forum NZ’s ‘Shaping a Future New Zealand’ report


Last week the AI Forum New Zealand released its report ‘Artificial Intelligence: Shaping a future New Zealand’. I wish to commend the authors for an excellent piece of horizon scanning, which lays the foundation for a much-needed ongoing discussion about AI and New Zealand because, as in the Wild West, much is still unknown when it comes to AI. Microsoft was at pains to point this out in their ‘The Future Computed’ report published earlier this year. In my reply I comment on some of the content of the AI Forum NZ's report and also try to progress the discussion by highlighting areas that warrant further analysis. Like all futurism, we can find the good, the bad and the ugly within the report. Click here for a PDF of my full comments below.

The Good

The report has done a thorough job of highlighting many of the opportunities and challenges that face us all in the coming years. It is a necessary and very readable roadmap for how we might approach the issue of AI and New Zealand society. The fact the report is so accessible will no doubt be a catalyst to meaningful debate.

It was good to see insightful comments from Barrie Sheers (Managing Director, Microsoft NZ) at the outset, which set the tone for what at times was (necessarily) a whistle-stop tour of the web of issues AI poses. Barrie’s comments were nuanced and important, noting that those who design these technologies are not necessarily those who ought to decide how we use them. This is a key concept, which I will expand on below.

The report is generally upbeat about the potential of AI and gives us many interesting case studies. However, the ‘likely’ and ‘many’ benefits of AI certainly do not give us carte blanche to pursue (or approve) any and all applications. We need a measured (though somewhat urgent) approach. Similarly, some of the key threats that AI poses are omitted. For example, AI is suggested as a solution to problem gambling (p. 76), yet AI can also be used to track and persuade problem gamblers online, luring them back to gambling sites. For every potential benefit there is a flip side. AI is a tool for augmenting human ingenuity, and we must constantly be aware of the ways it could augment nefarious intentions.

It was good to see the report highlight the threat of autonomous weapons and the fact that New Zealand still has no clear position on this. We need to campaign forcefully against such weapons, as we did on the issue of nuclear weapons. The reason for urgency is that in 2010 financial algorithms caused a $1 trillion flash crash of the US stock market, and subsequent analysis has not satisfactorily revealed the reason for this anomaly. A ‘flash crash’ event involving autonomous weapons is not something we could simply trade out of a few minutes later.

The issue of risk and response lies at the heart of any thinking about the future of AI. One of the six key recommendation themes in the report centers on ‘Law, Ethics and Society’. There is a recommendation to institute an AI Ethics and Society Working Group. This is absolutely critical, and its terms of reference need to provide for a body that persists for the foreseeable future. This working group needs to be tasked with establishing our values as a society, and these values need to shape the emergence of AI. Society as a whole needs to decide how we use AI tools and what constraints we place on development.

Ultimately, there probably ought to be a Committee for AI Monitoring, which distills evidence and research emerging locally and from around the world to quickly identify key milestones in AI development, and applications that pose a potential threat to the values of New Zealanders. This Committee probably ought to be independent of the Tech Industry, given Barrie Sheers' comments above. Such a Committee would act as an ongoing AI fire alarm, a critical piece of infrastructure in the safe development of AI, as I discuss further below.

The Bad

Before I begin with the bad, I am at pains to emphasise that ‘Shaping a Future New Zealand’ is an excellent report, which covers a vast array of concepts and ideas, posing many important questions for debate. It is the quality of the report that draws me to respond and engage to further this important debate.

A key question this report poses is whether we will shape or be shaped by the emergence of AI. A key phrase that appears repeatedly in the document is ‘an ethical approach’. These two ideas together make me think that the order of material in the report is backwards in an important way. Re-reading Microsoft’s ‘The Future Computed’ report yesterday made me certain of this.

It may seem trivial, but in the AI Forum’s report, the section on ‘AI and the Economy’ precedes the section on ‘AI and Society’. This is to put the cart before the horse. Society gets to decide what we value economically, and also gets to decide what economic benefits we are willing to forgo in order to protect core values. We (society) get to shape the future, if we are willing and engaged. It is the societal and moral dimension of this issue that can determine what happens with AI and the economy. If we want to ‘shape’ rather than ‘be shaped’ then this is the message we need to be pushing. For this reason I think it is a mistake to give AI and the Economy precedence in the text.

A feature of the writing in this report is the abundance of definite constructions. These are constructions of the form ‘in the future X will be the case’. This is perhaps dangerous territory when we are predicting a dynamic, exponential system. Looking to Microsoft's approach, the phrase ‘no crystal ball’ stands out instead.

I'll digress briefly to explain why this point is so critical. Rapidly developing systems change dramatically, in ways that are not easy for our psychology to grasp. Say you have a jar containing a single bacterium (let the bacteria represent technical advances in AI, or the degree to which AI permeates every aspect of our world, or the number of malicious uses of AI, or some such thing). If the bacteria double in number every minute, and fill the jar after an hour, then by the time the jar is a quarter full (you're really starting to notice it now, and perhaps are predicting what might happen in the future) you only have two minutes left to find another jar, and two minutes after that you'll need three more jars. In the classic Hanson-Yudkowsky debate about the pace of AI advance, what I've just illustrated represents the ‘AI-FOOM’ (rapid intelligence explosion) position. This is a live possibility. The future could still look very different from any or all of our models and predictions.
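
The arithmetic of the jar is worth laying out, because almost all of the growth happens in the final few minutes. A tiny sketch, using the same illustrative numbers as above (doubling every minute, one full jar at minute 60):

```python
# The jar analogy, spelled out: the population doubles every minute and
# fills exactly one jar at minute 60.
for minute in (55, 58, 59, 60, 61, 62):
    jars = 2.0 ** (minute - 60)   # how many jars' worth exist at this minute
    print(f"minute {minute}: {jars:g} jar(s)")

# minute 58: 0.25 jar(s)  <- growth only now becomes really noticeable
# minute 60: 1 jar(s)     <- two minutes later the jar is full
# minute 62: 4 jar(s)     <- two minutes after that you need three more jars
```

This is exactly why definite 20-year projections of a potentially exponential process should be treated with caution.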

Furthermore, a disproportionate portion of the AI and the Economy section focuses on the issue of mass unemployment. This is the ‘robots will take our jobs’ debate. The argument here is protracted, far more detailed than any other argument in the document, and the conclusion is very strong. I think this is a mistake. Straining through models and analyses of spurious accuracy to reach the unambiguous conclusion that ‘AI will not lead to mass unemployment’ suggests that the conclusion was predetermined. The length of the reasoning (certainly compared to all other sections) conveys an illusion of certainty.

But we're talking here about a tremendous number of unknowns, including very many of Donald Rumsfeld's infamous ‘unknown unknowns’: the things we don't even know we don't know yet. The modelling projects 20 years through this indeterminate melee, and it is hard to accept such a definite conclusion (I know as much from looking at what past labour market models have predicted and what actually transpired). Prediction is hard, especially about the future. This is why trader, risk analyst and statistician Nassim Taleb encourages us to anticipate anything. The history of the world is shaped by Black Swans. These are unpredictable events that we rationalize after the fact, but which change everything. The only response to such uncertainty is to build resilience.

I'm not saying that there will be mass unemployment; I'm saying that trying to prove the matter one way or the other is a risky approach. The conclusion is misplaced, and as risk analysts we ought not burn bridges like this. Let's call a spade a spade. To me the argument in ‘Shaping a Future New Zealand’ appears to be a rhetorical device put forward by those who don't want us to contemplate massive labour force disruption. If people are afraid for their jobs, they are less likely to authorize AI (and given the moral precedence of society over economy, authorize is the correct term).

But to take this argument even further, what is the reason that we fear mass unemployment? It’s not because of mass unemployment per se, it’s because unemployment can deny people meaningful activity in their life, and it can also cause economic pain. However, mass unemployment is only one way to cause these things. We should also be considering, for example, other ways that AI might deny us meaningful activity (with mass automation of decisions) or cause economic harm (through financial market collapse following an algorithmic mishap – financial error or financial terror) and so on. Mass unemployment is a side-show to the real discussion around value, meaning and risk that we need to be having.

By concluding that there is no risk and nothing to worry about, we risk being caught off-guard. A safer conclusion, and one that in fact provides much more security for everyone, does not depend on the analysis at all: maybe AI leads to mass unemployment, maybe it doesn't. The problem is that if we don't plan for what to do in that event, then we have built a fragile system (to use Taleb's term).

By accepting at least the possibility of mass unemployment, we can invest in resilience measures, pre-empt any crisis, and plan to cope. We put that plan into action if and when the triggering events transpire. What we need is an insurance policy, not to hide our head in the sand. What we need is a fire alarm. That would be the way to allay fears. That would be how to ensure the system is antifragile.

Given the pace of AI innovation and surprising advances, we don’t know how New Zealand will be affected by AI, but we can control what we are comfortable permitting. This is why Society must precede Economy.

In fact this has been a weakness of much contemporary political reasoning. Problems are tackled on an ad hoc basis, to determine how they might economically benefit us. What is lacking is a set of overarching values that we hold as a society and that we apply to each problem to determine how we must respond (whether or not it accords with our best economic interests). Max Harris tackles this issue in his recent book ‘The New Zealand Project’.

So I return to the phrase, ‘an ethical approach’ which is the main theme of this report that needs unpacking. We need to decide as a society what our ethical approach is. We need a framework, which will determine whether each instance of AI is good, bad or ugly.

I’ll turn to a concrete example. If I’m being critical (which I am in the interests of really pushing this debate deeper) there are some important omissions from the report.

Notably, very little mention is made of the advertising and communications industry. This is surprising given recent developments with fake news, the Cambridge Analytica saga and the associated Facebook debacle, all of which are merely the tip of the iceberg of an industry that has already taken advantage of the fact that the public is generally ill-informed about the uses and possibilities of AI. Marketing is turning into manipulation. Attempts are being made to manipulate citizens to behave in ways that exploit them.

It's debatable to what degree these techniques have succeeded to date, but remember that the bacteria have only been growing in the jar for 58 minutes so far, so the tools are rudimentary. To stick with our analogy, the tools employed by Cambridge Analytica were only one quarter effective; four minutes later we face tools with sixteen times that effect. Look at AlphaGo Zero and think about how the relatively rule-based human social network game might be learned, and what the intentions might be of those who control that technology.

The point is that we humans, who possess a psychology riddled with unconscious heuristics and biases and are simply not rational (no matter how much we rationalize our behavior), are faced with AI systems that on the one hand are dreadfully incompetent compared to ourselves, and yet on the other hand have immense knowledge of us and our tendencies. This latter feature creates the potential for a power imbalance in these interactions, with us as the victims. This is the fundamental premise of the industry of nudging, which, when deployed with less than altruistic goals, we can plainly call manipulation.

The AI Forum report contains very little on manipulation and disinformation by AI, or the potential horror scenarios of AI impersonating (convincingly) powerful human individuals. We are going to need to solve the problem of trust and authenticity very quickly, and more importantly, to start to condemn attempts to impersonate and mislead.

We need more discussion about the degree to which we ought to require AI systems with which we interact to disclose their goals to us. Is this system’s goal to make me buy a product? To stop me changing banks? To make me vote for Trump? To maximize the amount I spend online gambling? Perhaps we need regulation that requires AI developers to ensure that AIs declare themselves to be AIs.

The reason is that humans have evolved a very effective suite of defenses against being swindled by other humans, but we are unfamiliar with the emerging techniques of AI. Unlike when I deal with a human, I do not know the knowledge and techniques of my potential manipulator. Private interests are going to flock to manipulation tools that allow them to further their interests.

There is one line in the report addressing this issue of manipulation by AI, but it is an important line. The Institute of Electrical and Electronics Engineers is in the process of drafting an engineering standard about ethical nudging. This certainly gets to the heart of this issue, but it remains to be seen what that standard is, what kinds of systems it covers, and who will adopt it. We could have done with such a standard before Cambridge Analytica, but we still need ways to make businesses adhere to it. New Zealand needs to be having values-based discussions about this kind of thing, and we need to be monitoring overseas developments so that we have a say, and do not get dragged along by someone else’s standards.

The Ugly

The report does a good job of laying out the strategies other nations are employing to maximize the probability of good AI outcomes. These case studies certainly make New Zealand look late to the party. However, there is no discussion of what is ultimately needed, which is a global body. We need an internationally coordinated strategy of risk management. This will be essential if nations do not want to be at the receiving end of AI use that they do not condone themselves. This is a coordination problem. We need to approach this from a values and rights perspective, and New Zealand has some successful history of lobbying the globe on issues like this.

The report highlights some potential threats to society, such as bias, transparency and accountability issues. However, there are many further risks, such as those that exploit surveillance capitalism or threaten autonomy. Given the looming potential threats from AI (to individuals open to exploitation, to democratic elections facing attempts at societal manipulation, to personal safety from autonomous agents, and so on), what we need is more than just a working group. It is very apparent that we need an AI fire alarm.

Even if we manage to approach AI development ‘in an ethical way’ (there’s that phrase again) and ensure that no one designs AI that seeks to exploit, manipulate, harm or create chaos, we will need to be able to spot such malicious, and quite probably unexpected, acts before they cause damage. Furthermore, many private entities are more concerned with whether their behavior is legal rather than ethical. The difference is substantial. This is why we need a Committee for Monitoring AI. I’ll explain.

Fire is a useful technology with many societal and economic benefits, but it can go wrong. Humans have been worrying about these side-effects of technology since the first cooking fire got out of control and burned down the forest.

Eliezer Yudkowsky has written a powerful piece about warning systems and their relevance to AI. He notes that fire alarms don’t actually tell you when there is a fire (most times they ring, there is no fire), but conversely the sight of smoke doesn’t make you leap into action either. This is especially true if you are a bystander in a crowd (perhaps it’s just someone burning the toast? surely someone else will act? and so on). What fire alarms do is give you permission to act. If the alarm sounds, it’s OK to leave the building. It’s OK to get the extinguisher. You’re not going to look silly. The proposed AI Ethics and Society working group, and my suggested Committee for Monitoring AI, ought to act as fire alarms.

Perhaps what is needed is a system of risk levels that accounts for the scale of the particular AI risk, its imminence and its potential impact: a colour-coded system for issuing warnings. Importantly, this needs to work at a global, not just local, level, given the threat from outside and the lack of national boundaries for many AI applications. Our global interactions around AI need to extend beyond learning from foreign organisations and sharing gizmos.
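As a purely illustrative sketch (the dimensions, scoring and colour thresholds below are my own assumptions, not anything proposed in the report), such a warning system might combine scale, imminence and impact into a single colour code:

```python
# Hypothetical illustration of a colour-coded AI risk warning.
# The dimensions, scales and thresholds are assumptions made for this example.
from dataclasses import dataclass

@dataclass
class AIRiskAssessment:
    scale: int      # 1 (local/narrow) .. 5 (global/systemic)
    imminence: int  # 1 (decades away) .. 5 (happening now)
    impact: int     # 1 (nuisance) .. 5 (catastrophic)

    def colour(self) -> str:
        score = self.scale + self.imminence + self.impact  # ranges from 3 to 15
        if score >= 12:
            return "RED"     # sound the alarm: act now, coordinate globally
        if score >= 8:
            return "ORANGE"  # active monitoring and mitigation
        return "GREEN"       # routine review

# Example: a large-scale, fairly imminent, high-impact risk triggers RED.
print(AIRiskAssessment(scale=4, imminence=3, impact=5).colour())
```

The exact thresholds matter far less than who is empowered to sound the alarm and whether the warning carries weight across borders.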

Overall, we need to shift the focus around AI innovation from one of rapid development to market, to one concerned with risk and reliability. AI as a technology has more in common with anaesthesia or aviation than with sports shoes or efficient light bulbs. Like aviation, we need to ensure high-reliability AI infrastructure when AI is at the helm of logistics and food supply, power grids, self-driving cars and so on. We need redundancy, and I’m not confident this will be implemented, especially given the single-point-of-failure systems we still have commanding our telecommunications network in New Zealand. A human factors, safety systems engineering approach is needed, and this will require large changes to computer science and innovation training.

Conclusions

The AI Forum New Zealand is to be commended for a detailed yet accessible report on the state of play of AI in New Zealand. These are exciting times. Overall the urgency with which this report insists we must act is absolutely correct.

The Recommendations section begins, ‘Overall, the AI Forum’s aim is for New Zealand to foster an environment where AI delivers inclusive benefits for the entire country’. This must be the case. We just need to work hard to make it happen. The best way to ensure inclusive benefits is to settle on a value framework, which will enable us to unpack the elusive ‘ethical approach’. By running each possibility through our values we can decide quite simply whether to encourage or condemn the application.

Like any tool, AI can be used for good or for bad, and no doubt most applications will simply be ugly. The report claims that some of the important potential harms, for example criminal manipulation of society, are as yet ‘unquantified’. Well, it is not only criminals that seek to manipulate society, and to be honest, I’m not one for waiting around until harmful activity is quantified.

We need to decide what is OK and what is not, anticipating what might be coming. As the report indicates, this will require ethical and legal thinking, but also sociological, philosophical and psychological thinking. I would argue that a substantial portion of the Government’s Strategic Science Investment Fund should be dedicated to facilitating these critical allied discussions and outputs.

Most of all we need to design for democracy and build an antifragile New Zealand. As a Society we must indeed work to shape the future. What values are we willing to fight for, and which are we willing to sell out?

Can Siri help you quit smoking?


So you want to quit smoking. But you want to do it right, with expert advice and evidence-based information. Should you ask Siri?

This week my co-author Nick Wilson and I published results of a pilot study reporting how effective personal digital assistants are at providing information or advice to help you quit smoking.

As far as we are aware, ours is the first study to look at whether Siri or Google Assistant can help you quit.

The internet is widely used for obtaining health-related information and advice. For example, in the United Kingdom, 41% of internet users report going online to find information for health-related issues, with about half of these (22% of all users) having done so in the previous week.

We compared voice-activated internet searches by smartphone (two digital assistants) with laptop ones for information and advice related to smoking cessation.

We asked Siri and Google Assistant three sets of questions. We entered the same questions into Google as an internet search on laptops.

The first set of questions were adapted from the ‘frequently asked questions’ on the UK National Health Service (NHS) smokefree website.

The next set of questions were related to short videos on smoking-related disease produced by the Centers for Disease Control and Prevention (CDC) in the USA.

We devised the final set of questions to test responses to a range of features, such as finding smoking-related pictures, diagrams and instructional videos, and navigating to the nearest service or retailer for quitting-related products.

We graded the quality of the information and advice using a three-tier system (A, B, C), where A represented health agencies with significant medical expertise, B sites with some expertise (e.g. Wikipedia), and C news items or magazine-style content.
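As a rough illustration of how grading of this kind can be tallied (the questions, grades and helper function below are invented for illustration, not taken from our study), one can count how often each tool’s first result matched the best grade found by any of the tools. A comparison of this kind, run across all three question sets, produces percentages like those reported below:

```python
# Hypothetical tally of graded search results (made-up data, not the study's).
# Grade order: 'A' (expert) beats 'B' (some expertise) beats 'C' (news/magazine).
GRADE_RANK = {"A": 3, "B": 2, "C": 1, None: 0}  # None = no relevant result found

# results[question] maps each tool to the grade of its first returned result.
results = {
    "How can I deal with cravings?": {"Laptop Google": "A", "Google Assistant": "A", "Siri": "C"},
    "What does smoking do to my lungs?": {"Laptop Google": "A", "Google Assistant": "B", "Siri": None},
}

def best_grade_share(tool: str) -> float:
    """Fraction of questions where this tool matched the best grade found by any tool."""
    wins = 0
    for grades in results.values():
        best = max(GRADE_RANK[g] for g in grades.values())
        if GRADE_RANK[grades[tool]] == best and best > 0:
            wins += 1
    return wins / len(results)

for tool in ("Laptop Google", "Google Assistant", "Siri"):
    print(tool, f"{best_grade_share(tool):.0%}")  # equal firsts are possible
```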

Google laptop internet searches gave the best quality smoking cessation advice 83% of the time, with Google Assistant on 76% and Siri 28% (equal firsts were possible).

The best search results by any device used expert (grade ‘A’) sources 59% of the time. Using all three methods failed to find relevant information 8% of the time, with Siri failing 53% of the time.

We found that Siri was poor when videos were requested according to the content they might contain; all three tools sometimes returned magazine or blog content instead of professional health advice; and all tools had trouble when gay and lesbian-specific information was requested.

A weakness of our small pilot study was that we only considered the first result returned in each search.

Overall, while expert content was returned over half the time, there is clearly room for improvement in how these software systems deliver smoking cessation advice. We would encourage software firms to work with professional health organisations on ways to enhance the quality of smoking cessation advice returned.

See Adapt Research Ltd’s related blog: ‘To vape or not to vape… is not the right question’.

Health inequalities in NZ


Everyone knows that socio-economic inequalities in health exist today. What we do not know is whether they have always been there. Adapt Research Ltd contributed to a just-published study that looks at two historical datasets, one of which suggests lifespan differences by occupational class as measured 100 years ago.

The study found strong differences in life expectancy by occupational class among men enlisted to fight in the First World War (but who did not actually reach the frontline). Whilst not definitive evidence (it is hard to get perfect evidence from 100 years ago!), it does suggest that socio-economic inequalities in mortality have existed for at least 100 years in NZ.

In this blog we also take the opportunity to discuss what might be done to address the current inequality problem in this country; this is especially relevant given the Tax Review currently underway… Click here to read the full blog (hosted externally).