AI Update: Have you seen my Putin Impersonation? It’s a blast.


Two recent articles in the media reminded me of a concern about AI.

First, the BBC asks whether we would care if a feature article was 'written by a robot'. The implication is clear: it will soon be the norm for digital content to be created by intelligent systems.

Second, and more menacingly, another BBC report suggests that video and voice synthesis tools will soon produce lifelike fakes. The actual report cited by the BBC story can be found here: The Malicious Use of AI Report.

Impersonating humans

It may not be long before digital tools can convincingly (read: indistinguishably) produce fake video and speech, thereby impersonating specific human beings. This content could be augmented with mannerisms and linguistic style harvested from previous online posts, speeches, comments, or video produced by the target, as the sketch below illustrates.
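
To make the 'harvesting' step concrete, here is a toy sketch in Python: a word-level Markov model built from a target's past text, which can then emit new text in a loosely similar style. The corpus here is a hypothetical stand-in for scraped posts, and real impersonation systems would use far more capable language models; this is only meant to show the principle.

```python
# Toy sketch of 'linguistic style harvesting': build a simple word-level
# Markov model from a target's past text, then generate new text in a
# loosely similar style. Real systems use far more capable models.
import random
from collections import defaultdict

def build_model(corpus: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model: dict, seed: str, length: int = 20) -> str:
    """Walk the chain from a seed word, sampling an observed next word."""
    word, output = seed, [seed]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Hypothetical harvested text standing in for a target's online posts.
harvested_posts = (
    "we will act decisively and we will act together "
    "our people expect strength and our people expect results"
)
model = build_model(harvested_posts)
print(generate(model, seed="we"))
```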

What emerges is a simulacrum. Reality recedes and we can no longer tell what is real and what is not.

The problem with all this is that, very soon, anyone (or any system) with advanced voice, video, and linguistic style tools at their disposal will be able to convincingly impersonate human beings. To all intents and purposes, they will be able to hack reality.

Impersonation does not mean that the system doing the impersonating will need to interact with people or pass a Turing test. All that is required is that the content produced is convincing.

It is not completely in the realm of fantasy to picture 'Vladimir Putin' giving the order to use nuclear weapons, and that order being either acted upon or retaliated against. Many other undesirable situations are possible, both mundane and terrifying.

A host of companies are already working hard, and with much success, to create impersonations of their own staff for the purpose of interacting with customers.

Currently, many states have laws forbidding the impersonation of one person by another, but do our laws adequately forbid the impersonation of a human being by a digital system? And even if it is forbidden, how will we enforce this?

Trust: the next big human problem

Over time, humans have struggled with some key societal problems and found solutions. These include:

  • the problem of coordination and cooperation, solved by language and written symbols
  • the problem of exchange, solved by money
  • the problem of information dissemination, solved by the printing press
  • the problem of scale, solved by the industrial revolution

We must now solve The Problem of Trust and Authenticity.

This has been demonstrated by the debacle around Fake News, where Facebook and other digital media companies are scrambling to implement change.

We need to solve the issue of trust, at the levels of moral norms, digital systems, societal systems, and laws.

Technologies like blockchain are small steps in the right direction, but the problem persists and other solutions and constraints are needed.
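
To give one illustration of what an authenticity mechanism might look like in practice, here is a minimal sketch in Python of signed content provenance: a publisher signs a hash of a media file with a private key, and anyone holding the matching public key can verify that the content really came from that publisher and has not been altered. This uses the third-party 'cryptography' package, and it is only one possible ingredient of a solution, not a description of any particular blockchain system.

```python
# Minimal sketch of content provenance: a publisher signs a hash of a
# media file; consumers verify the signature with the public key.
# Requires the third-party 'cryptography' package.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def fingerprint(content: bytes) -> bytes:
    """Hash the content so we sign a fixed-size digest, not the raw file."""
    return hashlib.sha256(content).digest()

# Publisher side: generate a keypair and sign the content's digest.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video = b"...raw bytes of a video file..."  # hypothetical media content
signature = private_key.sign(fingerprint(video))

# Consumer side: verify the signature against the received content.
# Any tampering changes the digest, and verification fails.
try:
    public_key.verify(signature, fingerprint(video))
    print("Content is authentic: signed by the claimed publisher.")
except InvalidSignature:
    print("Warning: content does not match the publisher's signature.")
```

A scheme like this only establishes who published a piece of content, not whether it is true; the harder social and legal questions remain.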

We need to have a conversation as a society about what kind of future we want to live in, and what limits, laws, and norms we want to impose on emerging technologies, particularly artificial intelligence, which by its very nature mimics us.

This conversation needs to begin now and it needs to involve the technology sector, a fully informed general public, and the government.

Click here to listen to a talk by Adapt Research Ltd’s Matt Boyd on AI and the media given at the NZ Philosophy Conference in 2017. 
