AI content targeting may violate human rights


Does AI-driven micro-targeting of digital content violate human rights? The UN says ‘yes!’

Last month the United Nations published a document on AI and human rights with a particular focus on automated content distribution. The report focuses on the rights to freedom of opinion and expression, which are often excluded from public and political debates on artificial intelligence.

The overall argument is that an ethical approach to AI development, particularly in the area of content distribution, is not a replacement for respecting human rights.

Automation can be a positive thing, especially in cases where it can remove human operator bias. However, automation can be negative if it impedes the transparency and scrutability of a process.

AI dissemination of digital content

The report outlines the ways in which content platforms moderate and target content and how opaque AI systems could interfere with individual autonomy and agency.

Artificial intelligence is proving problematic in the way it is deployed to assess content and prioritize which content is shown to which users.

“Artificial intelligence evaluation of data may identify correlations but not necessarily causation, which may lead to biased and faulty outcomes that are difficult to scrutinize.”

Without ongoing supervision, AI systems may “identify patterns and develop conclusions unforeseen by the humans who programmed or tasked them.”

Browsing histories, user demographics, semantic and sentiment analyses, and numerous other factors are used to determine which content is presented to whom. Paid content often supplants unpaid content. The rationale behind these decisions is often opaque to users, and frequently to the platforms themselves.

Additionally, AI applications supporting digital search massively influence the dissemination of knowledge, and this personalization can minimize exposure to diverse views. Biases are reinforced, and inflammatory content or disinformation is promoted, because the system measures success by online engagement. The area of AI and human values alignment is begging for critical research and is discussed in depth by AI safety researcher Paul Christiano elsewhere.
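To make the mechanism concrete, here is a toy sketch in Python. It is entirely hypothetical (not any platform’s actual ranking code): items are scored by predicted engagement against an inferred interest profile plus a paid boost, and the profile drifts toward whatever the user clicks, so exposure narrows over time and low-engagement content sinks.

```python
# Hypothetical illustration only: a toy engagement-optimised feed.
ITEMS = [
    {"id": "a", "topics": {"politics": 1.0},                 "engagement": 0.9, "paid": 0.0},
    {"id": "b", "topics": {"politics": 0.7, "satire": 0.3},  "engagement": 0.8, "paid": 0.2},
    {"id": "c", "topics": {"science": 1.0},                  "engagement": 0.4, "paid": 0.0},
    {"id": "d", "topics": {"minority_issues": 1.0},          "engagement": 0.2, "paid": 0.0},
]

def score(item, profile):
    """Predicted engagement: overlap with the user's inferred interests, plus a paid boost."""
    affinity = sum(profile.get(topic, 0.0) * weight for topic, weight in item["topics"].items())
    return item["engagement"] * (1.0 + affinity) + item["paid"]

def update_profile(profile, item, lr=0.3):
    """Drift the interest profile toward whatever was just consumed."""
    for topic, weight in item["topics"].items():
        profile[topic] = profile.get(topic, 0.0) + lr * weight

profile = {"politics": 0.1}              # a faint initial signal
for step in range(5):
    ranked = sorted(ITEMS, key=lambda it: score(it, profile), reverse=True)
    top = ranked[0]                      # the user is shown, and here clicks, the top item
    update_profile(profile, top)
    print(step, [it["id"] for it in ranked])
# The low-engagement "minority_issues" item stays at the bottom while the profile
# locks onto the topics it already prefers: a filter bubble in a dozen lines.
```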

From the point of view of human autonomy, these systems can interfere with individual agency to seek and share ideas and opinions across ideological, political or societal divisions, undermining an individual’s choice to find certain kinds of information.

This is especially so because algorithms will typically deprioritize content with lower levels of engagement (e.g. minority content). The systems are also frequently hijacked via bots, metadata hacks and, potentially, adversarial content.

Not only is much content obscured from many users, but otherwise well-functioning AI systems can be tripped up by small manipulations to the input. Without a ‘second look’ at the context (as our hierarchically structured human brain takes when something seems amiss), AI can be fooled by ‘adversarial content’.

For example, in the images below the AI identifies the left picture as ‘fox’ and the slightly altered right picture as ‘puffer fish’. An equally striking example is the elephant vs sofa error, which is clearly due to a shift in context.

[Figure: adversarial example images – the original picture is classified as ‘fox’, the slightly perturbed version as ‘puffer fish’]
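The mechanism behind such failures can be sketched in a few lines. Below is a minimal, generic fast-gradient-sign (FGSM-style) perturbation against a stand-in PyTorch model; the untrained toy network and random image are my own placeholders, not the classifier from the fox/puffer-fish example.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in classifier; a real attack would target a trained image model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)   # placeholder "fox" image
true_label = torch.tensor([0])                          # pretend class 0 = "fox"

# Fast gradient sign method: nudge every pixel slightly in the direction
# that most increases the loss for the correct label.
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()
epsilon = 0.01                                          # imperceptibly small step
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
# With a real trained network and a suitable epsilon, the two predictions often
# differ even though the two images look identical to a human observer.
```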

Cultural context is particularly good at tripping up AI systems, and misread context can result in content being removed on the basis of biased or discriminatory associations. For example, the DeepText AI identified “Mexican” as a slur because of the contexts in which the word appeared in the text it analysed. Such content removal is another way that AI can interfere with user autonomy.
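A toy illustration of how context-blind moderation goes wrong (this is a deliberately naive keyword matcher of my own invention, not DeepText): the same word is flagged regardless of whether it is used descriptively or abusively.

```python
# Deliberately naive, hypothetical moderation rule: flag any post containing a term
# that has appeared in abusive contexts, with no modelling of context at all.
FLAGGED_TERMS = {"mexican"}   # "learned" as a slur purely from co-occurrence statistics

def naive_moderate(post: str) -> bool:
    """Return True if the post would be removed under the keyword-only rule."""
    return any(term in post.lower() for term in FLAGGED_TERMS)

posts = [
    "Looking for a good Mexican restaurant near the station",
    "Proud of my Mexican heritage!",
]
for post in posts:
    print(naive_moderate(post), "-", post)
# Both innocuous posts are flagged. A context-aware system would ideally pass them,
# which is why removal decisions need human review and accessible appeal mechanisms.
```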

There is an argument that individuals should be exposed to parity and diversity in political messaging, but micro-targeting of content is creating a “curated worldview inhospitable to pluralistic political discourse.”

Overall, AI targeting of content incentivizes broad collection of personal data and increases the risk of manipulation through disinformation. Targeting can exclude whole classes of users from information or opportunities.

So what should we be doing about all this? The UN report offers a vision for a human rights based approach to AI and content distribution.

A Human Rights Legal Framework for AI

The UN report outlines the scope of human rights obligations in the context of artificial intelligence and concludes that:

“AI must be deployed so as to be consistent with the obligations of States and the responsibilities of private actors under international human rights law. Human rights law imposes on States both negative obligations to refrain from implementing measures that interfere with the exercise of freedom of opinion and expression and positive obligations to promote rights to freedom of opinion and expression and to protect their exercise.”

What does this mean?

All people have the right to freedom of opinion without interference.

(This is guaranteed by article 19 (1) of the International Covenant on Civil and Political Rights and article 19 of the Universal Declaration of Human Rights.)

Basically, undue coercion cannot be employed to manipulate an individual’s beliefs, ideologies, reactions and positions. We need to have a public discussion about the limits of coercion or inducement, and what might be considered interference with the right to form an opinion.

This is a novel issue because AI curation of online content is now micro-targeting information “at a scale beyond the reach of traditional media.” Our present norms (based on historical technologies) may not be up to the task of adjudicating on novel techniques.

The UN report argues that companies should:

“at the very least, provide meaningful information about how they develop and implement criteria for curating and personalizing content on their platforms, including policies and processes for detecting social, cultural or political biases in the design and development of relevant artificial intelligence systems.”

The right to freedom of expression may also be infringed by AI curation. We’ve seen how automated content takedown may run afoul of context idiosyncrasies. This can result in the systematic silencing of individuals or groups.

The UN Human Rights Committee has also found that States should “take appropriate action … to prevent undue media dominance or concentration by privately controlled media groups in monopolistic situations that may be harmful to a diversity of sources and views.”

Given these problems, more needs to be done to help users understand what they are presented with. There are some token gestures toward selectively identifying some sponsored content, but users need to be presented with relevant metadata, sources, and the alternatives to the content that they are algorithmically fed. Transparency about confidence measures, known failure scenarios and appropriate limitations on use, for example, would go a long way.
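One way to operationalise that transparency is to attach machine-readable metadata to every recommendation. The sketch below is hypothetical: the field names are my own, not a schema from the UN report or any platform.

```python
from dataclasses import dataclass, field

@dataclass
class RecommendationDisclosure:
    """Hypothetical metadata a platform could surface alongside each recommended item."""
    item_id: str
    source: str                       # publisher or account of origin
    sponsored: bool                   # was this placement paid for?
    ranking_signals: list[str]        # e.g. ["browsing_history", "inferred_interests"]
    model_confidence: float           # score the ranker assigned, 0 to 1
    known_limitations: str            # documented failure scenarios for this model
    alternatives: list[str] = field(default_factory=list)  # other items that were considered

disclosure = RecommendationDisclosure(
    item_id="vid-123",
    source="ExampleNewsChannel",
    sponsored=True,
    ranking_signals=["browsing_history", "demographic_bucket", "sentiment_match"],
    model_confidence=0.71,
    known_limitations="Ranker is known to over-promote high-arousal political content.",
    alternatives=["vid-456", "vid-789"],
)
print(disclosure)
```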

We all have a right to privacy, yet AI systems are presently used to infer private facts about us which we may otherwise decline to disclose. Information such as sexual orientation, family relationships, religious views, health conditions or political affiliation can be inferred from network activity; even if never explicitly stated, these inferences can be represented implicitly in neural networks and go on to drive content algorithms.
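The inference itself requires nothing exotic. A minimal, hypothetical sketch with scikit-learn: a model trained on entirely synthetic behavioural features (fabricated counts standing in for page follows) ends up producing confident guesses about an attribute no user ever disclosed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Entirely synthetic behavioural features (e.g. counts of follows in a few categories);
# no real user data, just an illustration of the technique.
n = 500
features = rng.poisson(lam=[3, 1, 2, 4], size=(n, 4)).astype(float)
# Pretend some undisclosed attribute correlates weakly with two of the features.
hidden_attribute = (0.6 * features[:, 0] - 0.4 * features[:, 2] + rng.normal(size=n)) > 1.0

model = LogisticRegression().fit(features, hidden_attribute)
new_user = np.array([[5.0, 1.0, 0.0, 3.0]])
print("inferred probability:", model.predict_proba(new_user)[0, 1])
# The user never stated the attribute, yet the system now carries a guess about it
# that can silently shape what content they are shown.
```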

These features of AI systems could violate the obligation of non-discrimination.

Finally, human rights law guarantees individuals whose rights are infringed a remedy determined by competent judicial, administrative or legislative authorities. Remedies “must be known by and accessible to anyone who has had their rights violated,” but the logic behind an algorithmic decision may not be evident even to an expert trained in the underlying mechanics of the system.

Solutions & Standards

We need a set of substantive standards for AI systems. These standards must apply both to companies and to States.

Companies need professional standards for AI engineers, which translate human rights responsibilities into guidance for technical design. Codes of ethics (such as those now adopted by most of the major AI companies) may be important but are not a substitute for recognition of human rights.

Human rights law is the correct framework within which we must judge the performance of AI content delivery systems.

Companies and governments need to embrace transparency; simple explanations of how these systems function will go a long way toward informing public discourse, education and debate on this issue.

The UN report also recommends processes for artificial intelligence systems that include: human rights impact assessments, audits, a respect for personal autonomy, notice and consent processes, and remedy for adverse impacts.

The report concludes with recommendations for States and for companies, including the recommendation that “companies should make all artificial intelligence code fully auditable.”
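Full code auditability is a strong ask, but a modest first step is logging every automated content decision with enough context to reconstruct it later. A minimal, hypothetical sketch (the schema and field names are my own, not from the report):

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log_file, *, user_bucket, item_id, model_version, score, action, inputs):
    """Append one auditable record per automated content decision (hypothetical schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_bucket": user_bucket,          # coarse cohort, not a raw identity
        "item_id": item_id,
        "model_version": model_version,      # exact model build that made the call
        "score": score,
        "action": action,                    # e.g. "promoted", "demoted", "removed"
        "input_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
    }
    log_file.write(json.dumps(record) + "\n")
    return record

with open("decisions.log", "a") as f:
    log_decision(f, user_bucket="cohort-17", item_id="post-42",
                 model_version="ranker-2018.10", score=0.12,
                 action="demoted", inputs={"engagement_prior": 0.12, "paid": False})
```

A log like this does not explain the model, but it gives auditors, regulators and complainants a concrete record to interrogate, which is a precondition for any meaningful remedy.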

All this sounds very sensible and is a conservative approach to what could rapidly become an out-of-control problem of information pollution.

If anyone is interested in my further thoughts on “AI, Freedom and Democracy”, you can listen to my talk at the NZ Philosophy Conference 2017 here.

Nuclear insanity has never been worse


Donald Trump has just announced a likely build-up of US nuclear capability.

The threat of nuclear war has probably never been higher, and continues to grow. Given emotional human nature, cognitive irrationality and distributed authority to strike, we have merely been lucky to avoid nuclear war to date.

These new moves, without a doubt, raise the threat of a human extinction event in the near future. The reasons why are explained in a compelling podcast by Daniel Ellsberg.

Ellsberg (the leaker of the Pentagon Papers, whose fallout helped bring down the Nixon presidency) explains the key facts. Contemporary modelling shows the likelihood of a nuclear winter is high if more than a couple of hundred weapons are detonated. Previous Cold War modelling ignored the smoke from the firestorms the detonations would ignite, and so vastly underestimated the risk.

On the other hand, detonation of a hundred or so warheads poses low or no risk of nuclear winter (merely catastrophic destruction). As such, and as nuclear strategist Ellsberg forcefully argues, the only strategically relevant nuclear weapons are those on submarines. This is because they cannot be targeted by pre-emptive strikes, and yet still (with n = 300 or so) provide the necessary deterrence.

Therefore, land-based ICBMs are of no strategic value whatsoever, and merely provide additional targets for additional weapons, thereby pushing the nuclear threat from the deterrence/massive destruction game into the human extinction game. This is totally unacceptable.

Importantly, Ellsberg further argues that the reason the US is so determined to continue to maintain and build nuclear weapons is that they generate billions of dollars in business for Lockheed Martin, Boeing and others. We are escalating the risk of human extinction in exchange for economic growth.

John Bolton, Trump’s National Security Advisor, is corrupted by the nuclear lobbyists and stands to gain should capabilities be expanded.

There is no military justification for more than a hundred or so nuclear weapons (China’s nuclear policy reflects this – they are capable of building many thousands, but maintain only a fraction of this number). An arsenal of a hundred warheads is an arsenal that cannot destroy life on planet Earth. If these are on submarines they are difficult to target. Yet perversely we sustain thousands of weapons, at great risk to our own future.

The lobbying for large nuclear arsenals must stop. The political rhetoric that this is for our own safety and defence must stop. The drive for profit above all else must stop. Our children’s future depends on it.
