Ideas Blog

What does taking an ethical approach to digital media and artificial intelligence mean?


Several recent reports have argued that we need to take ‘an ethical approach’ when designing and using digital technologies, including artificial intelligence.

Recent global events such as various Facebook, fake news, and Cambridge Analytica scandals appear to emphasize this need.

But what does taking an ethical approach entail? Here are a few ideas I’ve spent Sunday morning thinking about.

Most importantly, there is no single thing we can do to ensure that we design and use digital technology ethically. We need a multifaceted approach that ensures we act as we ought to from here on.

The onus is on governments, developers, content creators, users, educators, and society generally. We need to ensure that we act ethically, and also that we can spot unethical actors. This requires a degree of ethical literacy, information and media literacy, and structuring our world so that the right thing is the easier thing to do.

This will involve a mix of education, regulation, praise and condemnation, and civility.

Truth and deception

Generally we accept that true information is good and false information is bad. We live our lives under the assumption that most of what other people say to us is true. If most of what people say to each other were not true then communication would be useless. The underlying issue is one of trust. A society that lacks trust is dysfunctional. In many cases the intent behind falsehoods is to deceive or manipulate.

A little reflection shows that manipulation does not accord with our values of democracy, autonomy, freedom, and self-determination. And it is these values (or others like them, which we can generally agree upon) that need to underpin our decisions about digital technology. If the technology, or the specific implementation of it, undermines our values then it ought to be condemned. Condemnation is often enough to cause a back-track, but where condemnation does not work, we need to regulate.

Misinformation and fake news are the current adaptive problem in our society, and we need to start developing adaptations to this challenge and driving these adaptations into our cultural suite of curriculum and norms.

Human beings have spent hundreds of thousands of years evolving psychological defenses against lies and manipulation when in close contact with other humans in small societies. We are very good at telling when we’re being deceived or manipulated in this context. However, many of the psychological and digital techniques used to spread fake news, sustain echo chambers, and coerce users are new and we do not yet have an innate suite of defenses. We need to decide whether it is unfair for governments, platforms, advertisers, and propagandists to use techniques we are not psychologically prepared for. We need to decide collectively if this kind of content is OK or needs to be condemned.

A values-based approach

Agreeing upon our values can be tricky, as political debates highlight. However, there is common ground, for example all sides of the political spectrum can usually agree that democracy is something worth valuing. We ought to have ongoing discussions that continually define and refine our values as a collective society.

It is not merely a case of ‘well, that’s just the way it is’: we have the power to decide what is OK and what is not, but that depends on our values. Collective values can only be determined through in-depth discussion. We need community meetings, hui, focus groups, citizen juries, and surveys. We need to build up a picture of the foundations for an ethical approach.

Many of the concerns around digital media and AI algorithms are not new problems; they are existing threats re-packaged: coercion and manipulation, hate and prejudice. Confronted with the task of designing intelligence, we are starting to say, ‘this application looks vulnerable to bias’, or ‘that application seems to undermine democracy…’

With great intelligence comes great responsibility

It’s not really AI or digital tech per se that we need to be ethical about. AI is just an implementation of intelligence; what we need to be ethical about is what we use intelligence for, and the intentions of those behind the deployment of intelligent methods.

We need to reflect on just what intelligence ought to be used for. And if we remember that we are intelligent systems ourselves, we ought to also reflect upon the moral nature of our own actions, individually and as a society. If we wouldn’t want an algorithm or AI to act in a certain way because it is biased or exploitative or enhances selfish interests, spreads falsehoods, or unduly concentrates power, should we be acting in that way ourselves in our day to day lives? Digital ethics starts with ethical people.

Developing digitally ethical developers

Many behaviours that were once acceptable have become unacceptable over time; instances of this are sometimes seen as ethical progress. It is possible that future generations will approach digital technologies in more inclusive and constructive ways than we are seeing at present. In order to ensure that future digital technologies are developed ethically, we need to take a ground-up approach.

School curricula need to include lessons to empower the next generation with an understanding of math and logic (so that the systems can be appreciated), language (to articulate concern constructively), ethical reasoning (not by decreeing morality, but by giving students the tools to reason in ethical terms), information literacy (this means understanding the psychological, network, and population forces that drive which information thrives and why), and epistemology (how to determine what is fact and what is not and why this matters). With this suite of cognitive tools, future developers will be more likely to make ethical decisions.

Ethical standards

As well as ensuring we bring up ethical developers, we need to make sure that the rules for development are ethical. This means producing a set of engineering standards surrounding the social impact of digital technologies. Much work has been published on the ethics of techniques like nudging and we need to distil this body of literature into guidance for acceptable designs. It may be the case that we need to certify developers who have signed up to such guidance or mandate such agreement as with other professional bodies.

As we build our world we build our ethics

The way we structure our world has impacts on how we behave and what information we access. Leaving your keys by the door reduces your chance of forgetting them, and building algorithms that reward engagement increases the chance of echo chambers and aggression.

We need to structure our world for civility by modeling and rewarding civil digital behavior. Mechanisms for community condemnation, down-voting, and algorithms that limit the way that digital conflict can spread may all be part of the solution. Sure, you can say what you want, but it shouldn’t be actively promoted unless it’s true (and evidenced) and decent.

We know from research that uncivil digital interactions decrease trust, and this could undermine the values we hold as a society. Similarly, we know that diverse groups make the best decisions, so platforms shouldn’t isolate groups into echo chambers. An ethical approach would ensure diverse opinions are heard by all.

Finally, from the top down

The relationship between legislation and norms is one of chicken and egg. Just as upholding norms can drive legislation, legislating can also drive norms.

It might be useful to have regulatory bodies such as ethical oversight committees, just as we have for medical research (another technological domain with the potential to impact the wellbeing of many people). Ethics committees can evaluate proposals or implemented technology and adjudicate on changes or modifications required for the technology to be ethically acceptable. Perhaps society could refer questionable technologies to these committees for evaluation, and problematic designs could be sent for investigation. Our engineering standards, our collectively agreed values, and tests of any intent to exploit could then be applied.

Often an ethical approach means applying the ethics that we have built up over decades to a new domain, rather than dreaming up new ethics on the spot. It probably ought to be the case that we apply existing frameworks such as broadcasting and advertising standards, content restriction, and so on, to novel means of information distribution. Digital technologies should not undermine any rights that we have agreed in other domains.

False or unsupported claims are not permitted in advertising because we protect the right of people to not be misled and exploited. As a society we ought to condemn unsupported and false claims in other domains too, because of the risk of exploitation and manipulation for the gain of special interest groups.

The take home

Digital ethics (especially digital media) should be about ensuring that technology does not facilitate exploitation, deception, division, or undue concentration of power. Our digital ethics should protect existing rights and ensure that wellbeing is not impinged upon. To ensure this we need to take a multifaceted approach with a long-term view. Through top-down, bottom-up and world-structuring approaches, little by little we can move into an ethical digital world.

Do we care about future people?


We have just published an article (free online) on existential risks – with an NZ perspective [1]. A blog version is hosted by SciBlogs. What follows is the introduction to that blog:

Do we value future people?

Do we care about the wellbeing of people who don’t yet exist? Do we care whether the life-years of our great-grandchildren are filled with happiness rather than misery? Do we care about the future life-years of people alive now?

We are assuming you may answer “yes” in general terms, but in what way do we care?

You might merely think, ‘It’d be nice if they are happy and flourish’, or you may have stronger feelings such as, ‘they have as much right as me to at least the same kind of wellbeing that I’ve had’. The point is that the question can be answered in different ways.

All this is important because future people, and the future life-years of people living now, face serious existential threats. Existential threats are those that quite literally threaten our existence. These include runaway climate change, ecosystem destruction and starvation, nuclear war, deadly bioweapons [2], asteroid impacts, or artificial intelligence (AI) run amok [3], to name just a few…

Click here to read the full blog.

A response to the AI Forum NZ’s ‘Shaping a Future New Zealand’ report


Last week the AI Forum New Zealand released its report ‘Artificial Intelligence: Shaping a Future New Zealand’. I wish to commend the authors for an excellent piece of horizon scanning, which lays the foundation for a much-needed ongoing discussion about AI and New Zealand because, like the Wild West, there is much as yet unknown regarding AI. Microsoft was at pains to point this out in its ‘The Future Computed’ report published earlier this year. In my reply I comment on some of the content of the AI Forum NZ’s report and also try to progress the discussion by highlighting areas that warrant further analysis. Like all futurism, we can find the good, the bad and the ugly within the report. Click here for a PDF of my full comments below.

The Good

The report has done a thorough job of highlighting many of the opportunities and challenges that face us all in the coming years. It is a necessary and very readable roadmap for how we might approach the issue of AI and New Zealand society. The fact the report is so accessible will no doubt be a catalyst to meaningful debate.

It was good to see insightful comments from Barrie Sheers (Managing Director, Microsoft NZ) at the outset, which set the tone for what at times was (necessarily) a whistle-stop tour of the web of issues AI poses. Barrie’s comments were nuanced and important, noting that those who design these technologies are not necessarily those who ought to decide how we use them. This is a key concept, which I will expand on below.

The report is generally upbeat about the potential of AI and gives us many interesting case studies. However, the ‘likely’ and ‘many’ benefits of AI certainly do not give us carte blanche to pursue (or approve) any and all applications. We need a measured (though somewhat urgent) approach. Similarly, the report omits some of the key threats that AI poses. For example, AI is suggested as a solution to problem gambling (p. 76), yet AI can also be used to track and persuade problem gamblers online, luring them back to gambling sites. For every potential benefit there is a flip side. AI is a tool for augmenting human ingenuity, and we must constantly be aware of the ways it could augment nefarious intentions.

It was good to see the report highlight the threat of autonomous weapons and the fact that New Zealand still has no clear position on this. We need to campaign forcefully against such weapons as we did with the issue of nuclear weapons. The reason for this is that in 2010 financial algorithms caused a $1 trillion ‘flash crash’ of the US stock market. Subsequent analysis has not satisfactorily revealed the reason for this anomaly. A ‘flash crash’ event involving autonomous weapons is not something we could simply trade out of a few minutes later.

The issue of risk and response lies at the heart of any thinking about the future of AI. One of the six key recommendation themes in the report centers on ‘Law, Ethics and Society’. There is a recommendation to institute an AI Ethics and Society Working Group. This is absolutely critical, and its terms of reference need to provide for a body that persists in its place for the foreseeable future. This working group needs to be tasked with establishing our values as a society, and these values need to shape the emergence of AI. Society as a whole needs to decide how we use AI tools and what constraints we place on development.

Ultimately, there probably ought to be a Committee for AI Monitoring, which distills evidence and research emerging locally and from around the world to quickly identify key milestones in AI development, and applications that pose a potential threat to the values of New Zealanders. This Committee probably ought to be independent of the Tech Industry, given Barrie Sheers’ comments above. Such a Committee would act as an ongoing AI fire alarm, a critical piece of infrastructure in the safe development of AI, as I discuss further below.

The Bad

Before I begin with the bad, I am at pains to emphasise that ‘Shaping a Future New Zealand’ is an excellent report, which covers a vast array of concepts and ideas, posing many important questions for debate. It is the quality of the report that draws me to respond and engage to further this important debate.

A key question this report poses is whether we will shape or be shaped by the emergence of AI. A key phrase that appears repeatedly in the document is ‘an ethical approach’. These two ideas together make me think that the order of material in the report is backwards in an important way. Re-reading Microsoft’s ‘The Future Computed’ report yesterday made me certain of this.

It may seem trivial, but in the AI Forum’s report, the section on ‘AI and the Economy’ precedes the section on ‘AI and Society’. This is to put the cart before the horse. Society gets to decide what we value economically, and also gets to decide what economic benefits we are willing to forgo in order to protect core values. We (society) get to shape the future, if we are willing and engaged. It is the societal and moral dimension of this issue that can determine what happens with AI and the economy. If we want to ‘shape’ rather than ‘be shaped’ then this is the message we need to be pushing. For this reason I think it is a mistake to give AI and the Economy precedence in the text.

A feature of the writing in this report is the abundance of definite constructions: constructions of the form ‘in the future X will be the case’. This is perhaps dangerous territory when we are predicting a dynamic, exponential system. Looking at Microsoft’s approach, the phrase ‘no crystal ball’ stands out instead.

I’ll digress briefly to explain why this point is so critical. Rapidly developing systems change dramatically, in ways that our psychology does not easily grasp. Say you have a jar containing a bacterium (let the bacterium represent technical advances in AI, the degree to which AI permeates every aspect of our world, the number of malicious uses of AI, or some such thing). If the bacteria double in number every minute and fill the jar after an hour, then by the time the jar is a quarter full (you’re really starting to notice it now, and perhaps are predicting what might happen next) you only have 2 minutes left to find another jar, and 2 minutes after that you’ll need 3 more jars. In the classic Hanson-Yudkowsky debate about the pace of AI advance, what I’ve just illustrated represents the ‘AI-FOOM’ (rapid intelligence explosion) position. This is a live possibility. The future could still look very different from any or all of our models and predictions.
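To make the jar arithmetic concrete, here is a minimal sketch in Python. The one-minute doubling time and the one-hour horizon are just the analogy’s assumptions, not a forecast of anything.

```python
import math

# Doubling-jar arithmetic (the analogy's assumptions, not a forecast):
# the colony doubles every minute and exactly fills one jar at minute 60.
FULL_AT = 60  # minutes until one jar is full

def fraction_of_jar(minute: int) -> float:
    """Fraction of one jar occupied at a given minute."""
    return 2.0 ** (minute - FULL_AT)

def jars_needed(minute: int) -> int:
    """Whole jars needed to hold the colony at a given minute."""
    return max(1, math.ceil(fraction_of_jar(minute)))

assert fraction_of_jar(58) == 0.25  # a quarter full, with only two minutes to spare
assert jars_needed(62) == 4         # two minutes after the jar fills: three more jars
```

The same arithmetic reappears in the Cambridge Analytica analogy below: from a quarter full, two more doublings fill the jar, and a third gives an eightfold increase.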

Furthermore, a disproportionate portion of the AI and the Economy section focuses on the issue of mass unemployment. This is the ‘robots will take our jobs’ debate. The argument here is protracted, far more detailed than any other argument in the document, and the conclusion is very strong. I think this is a mistake. Straining through models and analyses of spurious accuracy to reach the unambiguous conclusion that ‘AI will not lead to mass unemployment’ makes that conclusion look predetermined. The length of the reasoning (certainly compared to all other sections) conveys the illusion of certainty.

But we’re talking here about a tremendous number of unknowns, including very many of Donald Rumsfeld’s infamous ‘unknown unknowns’: the things we don’t even know we don’t know yet. The modeling projects 20 years through this indeterminate melee, and it is hard to accept such a definite conclusion (I know as much from looking at what past labour market models have predicted and what actually transpired). Prediction is hard, especially about the future. This is why trader, risk analyst and statistician Nassim Taleb encourages us to anticipate anything. The history of the world is shaped by Black Swans: unpredictable events that we rationalize after the fact, but which change everything. The only response to such uncertainty is to build resilience.

I’m not saying that there will be mass unemployment; I’m saying that trying to prove it one way or the other is a risky approach, and that the conclusion is misplaced: as risk analysts we ought not to burn bridges like this. Let’s call a spade a spade. To me, the argument in ‘Shaping a Future New Zealand’ appears to be a rhetorical device put forward by those who don’t want us to contemplate massive labour force disruption. If people are afraid for their jobs, they are less likely to authorize AI (and given the moral precedence of society over economy, authorize is the correct term).

But to take this argument even further, what is the reason that we fear mass unemployment? It’s not mass unemployment per se; it’s that unemployment can deny people meaningful activity in their lives, and it can also cause economic pain. However, mass unemployment is only one way to cause these things. We should also be considering, for example, other ways that AI might deny us meaningful activity (with mass automation of decisions) or cause economic harm (through financial market collapse following an algorithmic mishap – financial error or financial terror) and so on. Mass unemployment is a side-show to the real discussion around value, meaning and risk that we need to be having.

By concluding that there is no risk, nothing to worry about, we risk being caught off-guard. A safer conclusion, and one that in fact provides much more security for everyone, is one that does not depend on the analysis at all: maybe AI leads to mass unemployment, maybe it doesn’t. The problem is that if we don’t plan for what to do in the event, then we have built a fragile system (to use Taleb’s term).

By accepting at least the possibility of mass unemployment, we can invest in resilience measures, pre-empt any crisis, and plan to cope. We put that plan into action if and when the triggering events transpire. What we need is an insurance policy, not to hide our head in the sand. What we need is a fire alarm. That would be the way to allay fears. That would be how to ensure the system is antifragile.

Given the pace of AI innovation and surprising advances, we don’t know how New Zealand will be affected by AI, but we can control what we are comfortable permitting. This is why Society must precede Economy.

In fact this has been a weakness of much contemporary political reasoning. Problems are tackled on an ad hoc basis, to determine how they might economically benefit us. What is lacking is a set of overarching values that we hold as a society and that we apply to each problem to determine how we must respond (whether or not it accords with our best economic interests). Max Harris tackles this issue in his recent book ‘The New Zealand Project’.

So I return to the phrase, ‘an ethical approach’ which is the main theme of this report that needs unpacking. We need to decide as a society what our ethical approach is. We need a framework, which will determine whether each instance of AI is good, bad or ugly.

I’ll turn to a concrete example. If I’m being critical (which I am in the interests of really pushing this debate deeper) there are some important omissions from the report.

Notably, very little mention is made of the advertising and communications industry. This is surprising given recent developments with fake news, the Cambridge Analytica saga and the associated Facebook debacle. All of which are merely the tip of the iceberg of an industry that has already taken advantage of the fact that the public is generally ill-informed about the uses and possibilities of AI. Marketing is turning into manipulation. Attempts are being made to manipulate citizens to behave in ways that exploit them.

It’s debatable to what degree these techniques have succeeded to date, but remember that the bacteria have only been growing in the jar for 58 minutes, so the tools are rudimentary. To stick with our analogy, if the tools employed by Cambridge Analytica were only one quarter effective, then in three minutes we face tools with eight times that effect. Look at AlphaGo Zero and think about how the relatively rule-based human social network game might be learned, and what the intentions might be of those who control that technology.

The point is that we humans, whose psychology is riddled with unconscious heuristics and biases and who are simply not rational (no matter how much we rationalize our behavior), are faced with AI systems that on the one hand are dreadfully incompetent compared to ourselves, and yet on the other hand have immense knowledge of us and our tendencies. This latter feature means there is potential for a power imbalance in these interactions, and we are the victims. This is the fundamental premise of the industry of nudging, which, when deployed with less than altruistic goals, we can plainly call manipulation.

The AI Forum report contains very little on manipulation and disinformation by AI, or the potential horror scenarios of AI impersonating (convincingly) powerful human individuals. We are going to need to solve the problem of trust and authenticity very quickly, and more importantly, to start to condemn attempts to impersonate and mislead.

We need more discussion about the degree to which we ought to require AI systems with which we interact to disclose their goals to us. Is this system’s goal to make me buy a product? To stop me changing banks? To make me vote for Trump? To maximize the amount I spend online gambling? Perhaps we need regulation requiring AI developers to ensure that AIs declare themselves to be AIs.

The reason is that humans have evolved a very effective suite of defenses against being swindled by other humans, but we are unfamiliar with the emerging techniques of AI. Unlike when I deal with a human, I’m unfamiliar with the knowledge and techniques of my potential manipulator. Private interests are going to flock to manipulation tools that allow them to further their interests.

There is one line in the report addressing this issue of manipulation by AI, but it is an important line. The Institute of Electrical and Electronics Engineers is in the process of drafting an engineering standard about ethical nudging. This certainly gets to the heart of this issue, but it remains to be seen what that standard is, what kinds of systems it covers, and who will adopt it. We could have done with such a standard before Cambridge Analytica, but we still need ways to make businesses adhere to it. New Zealand needs to be having values-based discussions about this kind of thing, and we need to be monitoring overseas developments so that we have a say, and do not get dragged along by someone else’s standards.

The Ugly

The report does a good job of laying out the strategies other nations are employing to maximize the probability of good AI outcomes. These case studies certainly make New Zealand look late to the party. However, there is no discussion of what is ultimately needed, which is a global body. We need an internationally coordinated strategy of risk management. This will be essential if nations do not want to be at the receiving end of AI use that they do not condone themselves. This is a coordination problem. We need to approach this from a values and rights perspective, and New Zealand has some successful history of lobbying the globe on issues like this.

The report highlights some potential threats to society, such as bias, transparency, and accountability issues. However, there are many further risks such as those that exploit surveillance capitalism, or threaten autonomy. Given that there are potential looming threats from AI, to individuals open to exploitation, to democratic elections from attempts at societal manipulation, to personal safety from autonomous agents, and so on, what we need is more than just a working group. It is very apparent that we need an AI fire alarm.

Even if we manage to approach AI development ‘in an ethical way’ (there’s that phrase again) and agree that no one should design AI that seeks to exploit, manipulate, harm or create chaos, we will need to be able to spot such malicious, and quite probably unexpected, acts before they cause damage. Furthermore, many private entities are more concerned with whether their behavior is legal rather than ethical. The difference is substantial. This is why we need a Committee for Monitoring AI. I’ll explain.

Fire is a useful technology with many societal and economic benefits, but it can go wrong. Humans have been worrying about these side-effects of technology since the first cooking fire got out of control and burned down the forest.

Eliezer Yudkowsky has written a powerful piece about warning systems and their relevance to AI. Basically he notes that fire alarms don’t tell you when there is a fire (most times they ring, there is no fire). But conversely, the sight of smoke doesn’t make you leap into action. This is especially true if you are a bystander in a crowd (perhaps it’s just someone burning the toast? Surely someone else will act, and so on). What fire alarms do is give you permission to act. If the alarm sounds, it’s OK to leave the building. It’s OK to get the extinguisher. You’re not going to look silly. The proposed AI Ethics and Society working group, and my suggested Committee for Monitoring AI, ought to act as fire alarms.

Perhaps a system of risk levels is needed that accounts for the scale of the particular AI risk, its imminence, and the potential impact; a colour-coded system to issue warnings. Importantly, this needs to work at a global, not just local, level due to the threat from outside and the lack of national boundaries for many AI applications. Our global interactions around AI need to extend beyond learning from foreign organisations and sharing gizmos.
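Purely as an illustration of what such a scheme might look like, here is a minimal sketch. The three dimensions (scale, imminence, impact) come from the paragraph above; the 1-5 scales, thresholds, colours and example case are invented.

```python
# Illustrative sketch only: one way a colour-coded AI risk-level scheme could be
# structured. The scoring bands, colours, and example below are assumptions.
from dataclasses import dataclass
from enum import Enum

class Alert(Enum):
    GREEN = "monitor"
    AMBER = "investigate and issue warnings"
    RED = "raise the alarm and intervene"

@dataclass
class AIRiskAssessment:
    application: str
    scale: int      # 1-5: how widely the application reaches (people, systems, borders)
    imminence: int  # 1-5: how soon the harm could materialise
    impact: int     # 1-5: how severe the harm would be if it occurred

    def alert_level(self) -> Alert:
        score = self.scale + self.imminence + self.impact
        if score >= 12:
            return Alert.RED
        if score >= 8:
            return Alert.AMBER
        return Alert.GREEN

# A hypothetical referral to the monitoring committee:
case = AIRiskAssessment("voter micro-targeting tool", scale=5, imminence=4, impact=4)
print(case.application, "->", case.alert_level().name)
```

Any real scheme would, of course, need its dimensions, thresholds and escalation paths set through the kind of values-based, internationally coordinated process argued for here.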

Overall, we need to shift the focus around AI innovation from one of rapid development to market, to one concerned with risk and reliability. AI as a technology has more in common with anaesthesia or aviation than with sports shoes or efficient light bulbs. Like aviation, we need to ensure high-reliability AI infrastructure when AI is at the helm of logistics and food supply, power grids, self-driving cars and so on. We need redundancy, and I’m not confident this will be implemented especially given the single point of failure systems we still have commanding our telecommunications network in New Zealand. A human factors, safety systems engineering approach is needed, and this will require large changes to computer science and innovation training.

Conclusions

The AI Forum New Zealand is to be commended for a detailed yet accessible report on the state of play of AI in New Zealand. These are exciting times. Overall the urgency with which this report insists we must act is absolutely correct.

The Recommendations section begins, ‘Overall, the AI Forum’s aim is for New Zealand to foster an environment where AI delivers inclusive benefits for the entire country’. This must be the case. We just need to work hard to make it happen. The best way to ensure inclusive benefits is to settle on a value framework, which will enable us to unpack the elusive ‘ethical approach’. By running each possibility through our values we can decide quite simply whether to encourage or condemn the application.

Like any tool, AI can be used for good or for bad, and no doubt most applications will simply be ugly. The report claims that some of the important potential harms, for example criminal manipulation of society, are as yet ‘unquantified’. Well, it is not only criminals that seek to manipulate society, and to be honest, I’m not one for waiting around until harmful activity is quantified.

We need to decide what is OK and what is not, anticipating what might be coming. As the report indicates, this will require ethical and legal thinking, but also sociological, philosophical, and psychological. I would argue that a substantial portion of the Government’s Strategic Science Investment Fund be dedicated to facilitating these critical allied discussions and outputs.

Most of all we need to design for democracy and build an antifragile New Zealand. As a society we must indeed work to shape the future. What values are we willing to fight for, and what are we willing to sell out?

Can Siri help you quit smoking?


So you want to quit smoking. But you want to do it right, with expert advice and evidence-based information. Should you ask Siri?

This week my co-author Nick Wilson and I published results of a pilot study reporting how effective personal digital assistants are at providing information or advice to help you quit smoking.

As far as we are aware, ours is the first study to look at whether Siri or Google Assistant can help you quit.

The internet is widely used for obtaining health-related information and advice. For example, in the United Kingdom, 41% of internet users report going online to find information for health-related issues, with about half of these (22% of all users) having done so in the previous week.

We compared voice-activated internet searches by smartphone (two digital assistants) with laptop ones for information and advice related to smoking cessation.

We asked Siri and Google Assistant three sets of questions. We entered the same questions into Google as an internet search on laptops.

The first set of questions was adapted from the ‘frequently asked questions’ on the UK National Health Service (NHS) smokefree website.

The next set of questions was related to short videos on smoking-related disease produced by the Centers for Disease Control and Prevention (CDC) in the USA.

The final set of questions was devised to test responses to a range of features, such as finding smoking-related pictures, diagrams and instructional videos, and navigating to the nearest service/retailer for quitting-related products.

We graded the quality of the information and advice using a three-tier system (A, B, C), where A represented health agencies with significant medical expertise, B was for sites with some expertise (e.g. Wikipedia), and C was for news items or magazine-style content.

Google laptop internet searches gave the best-quality smoking cessation advice 83% of the time, with Google Assistant on 76% and Siri on 28% (equal firsts were possible).

The best search results by any device used expert (grade ‘A’) sources 59% of the time. All three methods together failed to find relevant information 8% of the time, with Siri alone failing 53% of the time.
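To illustrate how a ‘best quality, equal firsts possible’ figure can be computed from per-question grades, here is a minimal sketch with made-up data; the actual data and scoring rules are in the published paper.

```python
# Hypothetical illustration of the grading comparison (not the study's actual data).
# Each question gets a grade per search method: 'A' (expert source), 'B' (some
# expertise), 'C' (news/magazine), or None (no relevant result found).
# Ties for the best grade count for every tied method ("equal firsts were possible").

GRADE_RANK = {"A": 3, "B": 2, "C": 1, None: 0}

results = [  # one dict per question: method -> grade of the first result (made up)
    {"laptop": "A", "assistant": "A", "siri": "C"},
    {"laptop": "B", "assistant": "A", "siri": None},
    {"laptop": "A", "assistant": "B", "siri": "A"},
]

def best_share(method: str) -> float:
    """Share of questions on which this method tied for the best grade returned."""
    wins = 0
    for row in results:
        best = max(GRADE_RANK[g] for g in row.values())
        if best > 0 and GRADE_RANK[row[method]] == best:
            wins += 1
    return wins / len(results)

for method in ("laptop", "assistant", "siri"):
    print(f"{method}: best quality {best_share(method):.0%} of the time")
```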

We found that Siri was poor when videos were requested according to the content they might contain; all three tools sometimes returned magazine or blog content instead of professional health advice; and all three had trouble when gay- and lesbian-specific information was requested.

A weakness of our small pilot study was that we only considered the first result returned in each search.

Overall, while expert content was returned over half the time, there is clearly room for improvement in how these software systems deliver smoking cessation advice. We would encourage software firms to work with professional health organisations on ways to enhance the quality of smoking cessation advice returned.

See Adapt Research Ltd’s related blog: ‘To vape or not to vape… is not the right question’.

Health inequalities in NZ


Everyone knows that socio-economic inequalities in health exist, at least in recent times. But one thing we do not know is whether they have always been there. Adapt Research Ltd contributed to a just-published study that looks at two historical datasets – with one of these suggesting life span differences by occupational class as measured 100 years ago.

The study found strong differences in life expectancy by occupational class among men enlisted to fight in the First World War (but not actually getting to the frontline). Whilst not definitive evidence (it is hard to get perfect evidence from 100 years ago!), it does suggest that socio-economic inequalities in mortality have existed for at least 100 years in NZ.

In this blog we also take the opportunity to discuss what might be done to address the current inequality problem in this country. This is especially relevant given the Tax Review currently underway… Click here to read the full blog (hosted externally).

Reducing Harm from Falls: What have we learned in the last 12 months?


Falls are a major cause of injury and reduced quality of life for older people.

Estimates suggest that more than a quarter of people over 65 years of age fall in any given year. Broken hips and head injuries are among the most serious complications of a fall.

This is why the New Zealand Health Quality and Safety Commission has spent several years encouraging the health sector in New Zealand to implement programmes that reduce harm from falls.

A set of evidence-based guidance, the ‘Ten Topics’ is available on the Commission’s website.

However, every day more than 4 new research papers about falls are published. The Safety Lit database contains over 1500 items for ‘Falls’ in 2017 alone.

Every week there are new systematic reviews, meta-analyses, guidelines or health technology assessments.

So what does all this new evidence tell us?

We know that in New Zealand aged residential care facilities, 13% of patients have had a fall in the previous 30 days.

We also already know that bisphosphonates are an important medication to fight osteoporosis and prevent fragility fractures in older adults. But did you know that they are cost-effective at a fracture risk of just 1%? And yet in New Zealand there is scope to increase the rate of bisphosphonate prescribing for patients suffering fragility fractures.

Strength and balance exercise programmes can help prevent falls and new evidence suggests that Tai Chi is also effective.

As for medications, data previously appeared to show that antihypertensives increase the risk for falls. But some large new reviews indicate this may mostly be due to falls in the first 24 hours after a dose adjustment, or if the patient is taking diuretics.

Selective beta blockers may not increase the risk of falls, and treating hypertension to guideline levels is likely to be safe.

We are now pretty sure that prescribing vitamin D to otherwise healthy older people does not prevent falls or fractures.

However, sleep disturbances can increase the risk of falls.

Finally, home safety assessment and modification programmes and in-home strength and balance exercise programmes appear cost-effective in the New Zealand context.

The above is just a taste of the new evidence available to help in reducing harm from falls, and the Commission’s website indicates that their ‘Recommended Evidence-based resources’ webpage will be updated annually.

So don’t just take my word for it, examine the new evidence for yourself, and we can all look forward to the next comprehensive update.

For now, the Ten Topics are an excellent resource for anyone who is in the business of reducing harm from falls, and reducing harm from falls is everyone’s business!

 

AI Update: Have you seen my Putin Impersonation? It’s a blast.


Two recent articles in the media reminded me of a concern about AI.

Firstly, the BBC asks us whether we would care if a feature article was ‘written by a robot’. The implication is clear: it will soon be the norm for digital content to be created by intelligent systems.

Secondly, and more menacingly, another BBC report suggests that hacked video and voice synthesis tools will soon be producing lifelike quality. The actual report cited by the BBC story can be found here: The Malicious Use of AI Report.

Impersonating humans

It may not be long before digital tools can convincingly (read indistinguishably) produce fake video and speech, thereby impersonating specific human beings. This content could be augmented with mannerisms and linguistic style harvested from previous online posts, speeches, comments or video produced by the target.

What emerges is a simulacrum. Reality recedes and we can no longer tell what is real and what is not.

The problem with all this is that very soon someone, or some system, with advanced voice, video, and linguistic style tools at their disposal will be able to convincingly impersonate human beings. To all intents and purposes they will be able to hack reality.

Impersonation does not mean that the system doing the impersonating will need to interact with people or pass a Turing test. All that is required is that the content produced is convincing.

It is not completely in the realm of fantasy to picture ‘Vladimir Putin’ giving the order to use nuclear weapons, and this order being either acted upon or retaliated against. Many other undesirable situations are possible, both mundane and terrifying.

A host of companies are already working very hard, and with much success, to create impersonations of their own staff for the purpose of interacting with customers.

Currently many states have laws forbidding the impersonation of a person by another person, but do our laws adequately forbid the impersonation of a human being by a digital system? And even if it is forbidden, how will we enforce this?

Trust: the next big human problem

Humans over time have struggled with some key societal problems, and found solutions. These include:

  • the problem of coordination and cooperation, solved by language and written symbols
  • the problem of exchange, solved by money
  • the problem of information dissemination, solved by the printing press
  • the problem of scale, solved by the industrial revolution

We now must solve The Problem of Trust and Authenticity.

This has been demonstrated by the debacle around Fake News, where Facebook and other digital media companies are scrambling to implement change.

We need to solve the issue of trust, at the levels of moral norms, digital systems, societal systems, and laws.

Technologies like blockchain are small steps in the right direction, but the problem persists and other solutions and constraints are needed.

We need to have a conversation as a society about what kind of future we want to live in, and what limits, laws and norms we want to impose on emerging technologies, particularly artificial intelligence, which by its very nature mimics us.

This conversation needs to begin now and it needs to involve the technology sector, a fully informed general public, and the government.

Click here to listen to a talk by Adapt Research Ltd’s Matt Boyd on AI and the media given at the NZ Philosophy Conference in 2017. 

From Russia with Love: Time for serious work on the benefits and risks of artificial intelligence

The following introduction is an excerpt from a just published blog at Sciblogs that examines the role of AI in society and democracy. Read the full article here

Transformative advances in artificial intelligence (AI) have generated much hype and a burst of dialogue in New Zealand. Past technological change has led to adaptation, but adaptation takes time and the pace with which AI is arriving appears to be accelerating. For example, recent news about the unfolding ‘Russia Investigation’ may be just a prelude to what is possible if AI tools hijack our social systems.

Technology offers us opportunities to do things we previously could not, but in doing so the use of technology also changes us, and it changes the systems and norms of society.

Read more…

Artificial Intelligence, Freedom and Democracy: Talk at NZ Association of Philosophers Conference Dec 4, 2017

“Artificial Intelligence and Free Will: A 2017 Christmas Carol”

(Talk presented by Matt Boyd at the NZAP Conference)