What does taking an ethical approach to digital media and artificial intelligence mean?


Several recent reports have argued that we need to take ‘an ethical approach’ when designing and using digital technologies, including artificial intelligence.

Recent global events, such as the Facebook and Cambridge Analytica scandals and the spread of fake news, appear to underscore this need.

But what does taking an ethical approach entail? Here are a few ideas I’ve spent Sunday morning thinking about.

Most importantly, there is no single thing we can do to ensure that we design and use digital technology ethically. We need a multifaceted approach that ensures we act as we ought to moving forward.

The onus is on governments, developers, content creators, users, educators, and society generally. We need to ensure that we act ethically ourselves, and that we can spot unethical actors. This requires a degree of ethical literacy, information and media literacy, and structuring our world in such a way that the right thing is easier to do.

This will involve a mix of education, regulation, praise and condemnation, and civility.

Truth and deception

Generally we accept that true information is good and false information is bad. We live our lives under the assumption that most of what other people say to us is true. If most of what people say to each other were not true then communication would be useless. The underlying issue is one of trust. A society that lacks trust is dysfunctional. In many cases the intent behind falsehoods is to deceive or manipulate.

A little reflection shows that manipulation is incompatible with our values of democracy, autonomy, freedom, and self-determination. And it is these values (or others like them, which we can generally agree upon) that need to underpin our decisions about digital technology. If the technology, or a specific implementation of it, undermines our values then it ought to be condemned. Condemnation is often enough to cause a back-track, but where condemnation does not work, we need to regulate.

Misinformation and fake news are the current adaptive problem in our society; we need to start developing adaptations to this challenge and driving them into our cultural suite of curricula and norms.

Human beings have spent hundreds of thousands of years evolving psychological defenses against lies and manipulation in close contact with other humans in small societies. In that context we are very good at telling when we’re being deceived or manipulated. However, many of the psychological and digital techniques used to spread fake news, sustain echo chambers, and coerce users are new, and we do not yet have an innate suite of defenses against them. We need to decide collectively whether it is fair for governments, platforms, advertisers, and propagandists to use techniques we are not psychologically prepared for, and whether this kind of content is acceptable or ought to be condemned.

A values-based approach

Agreeing upon our values can be tricky, as political debates highlight. However, there is common ground: for example, all sides of the political spectrum can usually agree that democracy is worth valuing. We ought to have ongoing discussions that continually define and refine our values as a collective society.

It is not merely a case of ‘well, that’s just the way it is’; we have the power to decide what is OK and what is not, but that depends on our values. Collective values can only be determined through in-depth discussion. We need community meetings, hui, focus groups, citizen juries, and surveys. From these we can build up a picture of the foundations for an ethical approach.

Many of the concerns around digital media and AI algorithms are not new problems; they are existing threats re-packaged: coercion and manipulation, hate and prejudice. Confronted with the task of designing intelligence, we are starting to say, ‘this application looks vulnerable to bias’, or ‘that application seems to undermine democracy…’

With great intelligence comes great responsibility

It’s not really AI or digital tech per se that we need to be ethical about. AI is just the implementation of intelligence; what we need to be ethical about is what we use intelligence for, and the intentions of those behind the deployment of intelligent methods.

We need to reflect on just what intelligence ought to be used for. And if we remember that we are intelligent systems ourselves, we ought also to reflect upon the moral nature of our own actions, individually and as a society. If we wouldn’t want an algorithm or AI to act in a certain way because it is biased or exploitative, enhances selfish interests, spreads falsehoods, or unduly concentrates power, should we be acting in that way ourselves in our day-to-day lives? Digital ethics starts with ethical people.

Developing digitally ethical developers

Many behaviours that were once acceptable have become unacceptable over time; instances of this are sometimes seen as ethical progress. It is possible that future generations will approach digital technologies in more inclusive and constructive ways than we’re seeing at present. To ensure that future digital technologies are developed ethically, we need to take a ground-up approach.

School curricula need to include lessons that empower the next generation with:

- math and logic, so that the systems can be appreciated
- language, to articulate concern constructively
- ethical reasoning, not by decreeing morality, but by giving students the tools to reason in ethical terms
- information literacy, meaning an understanding of the psychological, network, and population forces that determine which information thrives and why
- epistemology, how to determine what is fact and what is not, and why this matters

With this suite of cognitive tools, future developers will be more likely to make ethical decisions.

Ethical standards

As well as ensuring we bring up ethical developers, we need to make sure that the rules for development are ethical. This means producing a set of engineering standards covering the social impact of digital technologies. Much work has been published on the ethics of techniques like nudging, and we need to distil this body of literature into guidance for acceptable designs. It may be that we need to certify developers who have signed up to such guidance, or mandate agreement as a condition of membership, as with other professional bodies.

As we build our world we build our ethics

The way we structure our world has impacts on how we behave and what information we access. Leaving your keys by the door reduces your chance of forgetting them, and building algorithms that reward engagement increases the chance of echo chambers and aggression.

We need to structure our world for civility by modeling and rewarding civil digital behavior. Mechanisms for community condemnation, down-voting, and algorithms that limit the way that digital conflict can spread may all be part of the solution. Sure, you can say what you want, but it shouldn’t be actively promoted unless it’s true (and evidenced) and decent.
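
To make this concrete, here is a minimal, purely illustrative sketch in Python of what ‘don’t actively promote it unless it’s true and decent’ might look like inside a feed-ranking function. The Post fields, the civility score, and the threshold are hypothetical stand-ins for real moderation signals, not any actual platform’s API.

```python
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    engagement: float  # normalized engagement signal (likes, shares, etc.)
    civility: float    # 0..1 score from a hypothetical civility classifier
    evidenced: bool    # whether the post's claims cite supporting sources


def rank_feed(posts: list[Post], civility_floor: float = 0.5) -> list[Post]:
    """Rank a feed so that raw engagement alone can never promote
    uncivil or unevidenced content."""
    def score(post: Post) -> float:
        # Content failing the civility/evidence bar is not amplified:
        # it scores zero no matter how much engagement it attracts.
        if post.civility < civility_floor or not post.evidenced:
            return 0.0
        # Otherwise weight engagement by civility, so decent,
        # well-supported posts are what the algorithm rewards.
        return post.engagement * post.civility

    return sorted(posts, key=score, reverse=True)


feed = rank_feed([
    Post("Evidenced, civil analysis", engagement=0.4, civility=0.9, evidenced=True),
    Post("Viral outrage bait", engagement=0.95, civility=0.2, evidenced=False),
])
# The civil, evidenced post ranks first despite lower raw engagement.
```

The design point is simply that engagement stops being the sole currency: content that fails the civility or evidence bar gets no algorithmic amplification, whatever its engagement numbers.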

We know from research that uncivil digital interactions decrease trust, and this could undermine the values we hold as a society. Similarly, we know that diverse groups make the best decisions, so platforms shouldn’t isolate users in echo chambers. An ethical approach would ensure diverse opinions are heard by all.
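
As a toy illustration of ensuring diverse opinions are heard, the sketch below greedily re-ranks a feed so that no single viewpoint cluster dominates the top slots. The viewpoint labels are an assumed input from some upstream classifier; this is a thought experiment in code, not a description of any real recommender.

```python
from collections import Counter


def diversify_feed(items: list[tuple[str, str]], k: int = 5) -> list[tuple[str, str]]:
    """Greedily select up to k items, always preferring the viewpoint
    least represented so far, so no single cluster fills the feed.

    `items` are (text, viewpoint_label) pairs; the labels are assumed
    to come from some upstream classifier.
    """
    chosen: list[tuple[str, str]] = []
    counts: Counter = Counter()
    remaining = list(items)
    while remaining and len(chosen) < k:
        # Pick an item from the viewpoint seen least often so far.
        best = min(remaining, key=lambda item: counts[item[1]])
        chosen.append(best)
        counts[best[1]] += 1
        remaining.remove(best)
    return chosen
```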

Finally, from the top down

The relationship between legislation and norms is one of chicken and egg. Just as upholding norms can drive legislation, legislating can also drive norms.

It might be useful to have regulatory bodies such as ethical oversight committees, just as we have for medical research (another technological domain with the potential to affect the wellbeing of many people). Ethics committees could evaluate proposals or implemented technology and adjudicate on the changes required for the technology to be ethically acceptable. Perhaps society could refer questionable technologies to these committees for evaluation, and problematic designs could be sent for investigation. Our engineering standards, our collectively agreed values, and tests of any intent to exploit could then be applied.

Often an ethical approach means applying the ethics that we have built up over decades to a new domain, rather than dreaming up new ethics on the spot. We probably ought to apply existing frameworks, such as broadcasting and advertising standards and content restrictions, to novel means of information distribution. Digital technologies should not undermine any rights that we have agreed upon in other domains.

False or unsupported claims are not permitted in advertising because we protect people’s right not to be misled or exploited. As a society we ought to condemn unsupported and false claims in other domains too, because of the risk of exploitation and manipulation for the gain of special interest groups.

The take home

Digital ethics (especially for digital media) should be about ensuring that technology does not facilitate exploitation, deception, division, or undue concentration of power. Our digital ethics should protect existing rights and ensure that wellbeing is not impinged upon. To ensure this we need to take a multi-faceted approach with a long-term view. Through top-down, bottom-up, and world-structuring approaches, little by little we can move into an ethical digital world.

Author: Adapt Research

Adapt Research provides high quality evidence-based medical, technical and academic research, writing and analysis services to universities, government departments, and private firms. I am available for large and small research projects, peer review, and medical writing assignments of any size.

