Pandemic Catastrophe: ‘Lifeboat’ is the wrong metaphor


I recently published an academic paper about island refuges in extreme pandemics with my co-author Nick Wilson. The paper has become the focus of media attention, including articles by Newsweek, Fox News and IFLScience.

Unfortunately, the key argument of the paper has been misconstrued in several reports. The headlines included clickbait such as ‘If A Pandemic Hits, This Is Where Humanity Should Go To Survive’ and ‘Scientists rank safest spots to flee to if apocalyptic outbreak begins to wipe out humanity’.

Our Conclusion

The conclusion of our argument was almost the opposite. We argued that preparations could be made ahead of time, so that the designated island refuge(s) could be closed off almost immediately when signals indicate a catastrophic pandemic is emerging. No fleeing to the ‘lifeboat’ would be allowed.

In fact, the metaphor of a ‘lifeboat’ is misleading, because people scramble for a lifeboat when the disaster strikes. Our argument (which I explain below for those who have not read our paper) is that the islands most likely to be able to reboot a flourishing technological human culture after the event should be identified ahead of time, and plans enacted to prevent people arriving when the catastrophe strikes (i.e. through border closure measures).

Information Hazard

In the literature on catastrophic and existential risk mitigation, an ‘information hazard’ is information that, when spread, actually increases the risk of a catastrophic outcome. The ‘run to the island refuge’ approach is an information hazard. If people actually behaved like this, it would undermine the effectiveness of the refuge and increase the probability of (in this case) human extinction.

Our argument is about preserving technological culture, ensuring the continuation of the ‘human project’ and the flourishing of future generations. It is not about saving people around the world at the time of the catastrophe.

Which particular people survive the event is incidental to the bigger picture: that some people could survive and ensure a future full of human beings and full of technological culture.

Global Representation

We identified three promising options for such an island refuge to preserve humanity. And one might argue that it is not fair to the diverse cultures of the world if just Iceland or just Australia, for example, survives the catastrophe.

However, there is nothing preventing plans for a designated refuge from including representation from all the world’s people, perhaps through a system of rotating work visas, so that the refuge can host, at any given time, a sample of the members of each of the world’s jurisdictions. Whoever happens to be there at the time of the catastrophe would then effectively get ‘locked in’ and would represent their culture as the world is rebooted.

Providing such visas and hosting these diverse people could be an undertaking that the designated refuge nation(s) take on, perhaps in lieu of providing development assistance to the rest of the world, so that the refuge nation is not unfairly burdened should the refuge never be needed (which is also a likely outcome).

Our paper is only the first step in a conversation about how the world might mitigate the risk from extreme pandemics, and we encourage other ideas so that we can, little by little, implement strategies that reduce the probability that such unlikely but devastating events destroy humanity.

The TLDR version of our paper (570 words):

The risk of human extinction is probably rising. This is because, although the background risk from natural events such as supervolcanic eruptions or asteroid strikes has remained consistent, the risk from technological catastrophes rises with our technological prowess (think nuclear, nano, bio, and AI technologies), which is by definition unprecedented in human history.

There are a number of reasons to work to preserve the future existence of humanity. These include the value of the vast number of future human lives, the worth of continuing key cultural, scientific and artistic projects that span long periods of time, and the cost-effectiveness of mitigating disaster.

We assume that the continuation of large and advanced civilizations is more valuable than the persistence of only a handful of subsistence societies.

A pandemic, or perhaps more probably multiple pandemics occurring together, poses a risk to the existence of humanity. The WHO hypothesizes about ‘Disease X’, a disease about which we know nothing yet, and which could emerge from increasingly accessible biotechnological manipulations. Such an event (or events) would fall under Nick Bostrom’s concept of ‘moderately easy bio doom’, where a small team of researchers in a small laboratory, working for a year or so, might engineer a catastrophic biological threat. This would only need to occur once to threaten the entire human species.

The probability of such an event is unknown but is likely to be non-zero. A 2008 survey of experts put the chance of an existential-threat-level pandemic occurring by the year 2100 at 2%. Two percent of a 10 billion population means an expected loss of 200 million human lives. Reducing the probability of such an event by even half a percentage point would be hugely worthwhile, particularly if it can be done with existing resources, and within existing governance structures.
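For concreteness, here is a minimal sketch of that expected-value arithmetic. It uses only the figures quoted above (the 2% probability, the 10 billion population, and the half-percentage-point reduction); nothing else is assumed.

```python
# A minimal sketch of the expected-value arithmetic above. The 2% probability,
# the 10 billion population and the half-percentage-point reduction are the
# figures quoted in the text.

population = 10_000_000_000        # assumed future world population
p_pandemic_by_2100 = 0.02          # expert-survey estimate

expected_lives_lost = p_pandemic_by_2100 * population
print(f"Expected loss: {expected_lives_lost:,.0f} lives")            # 200,000,000

# Value of shaving half a percentage point off the probability (2.0% -> 1.5%)
risk_reduction = 0.005
print(f"Expected lives saved: {risk_reduction * population:,.0f}")   # 50,000,000
```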

One way to mitigate the risk of a pandemic killing all humans is to designate some location as a refuge (just as with nuclear fall-out shelters in the Cold War), which is quarantined (i.e. no one goes in or out) as soon as the triggering event is observed. Land borders are easily crossed by those infected with a disease, but islands have escaped pandemics in the past.

Key features required by such an island refuge are: a sufficient population, skills, and resources to preserve technological culture and rebuild society, and self-sufficiency and resilience to ride out the crisis.

We set about trying to identify the most likely locations (at the level of island nations) that satisfy these criteria. To do this we formalized a 9-point scoring tool covering: population size, number of visitors, distance from nearest landmass, risk of natural hazards, infrastructure and resources (using GDP per capita as a proxy), energy and food self-sufficiency, political stability and social capital.
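As a rough illustration of how such a composite score can be assembled, here is a minimal sketch. The indicator names follow the 9-point list above, but the example values and the equal weighting are hypothetical, not the data or weights used in the paper.

```python
# A minimal sketch of a composite 0-1 refuge-suitability score. Indicator names
# follow the 9-point list above; values and equal weighting are hypothetical.

indicators = [
    "population_size", "visitor_numbers", "distance_from_landmass",
    "natural_hazard_risk", "gdp_per_capita", "energy_self_sufficiency",
    "food_self_sufficiency", "political_stability", "social_capital",
]

# Hypothetical normalized scores (0 = worst, 1 = best), for illustration only.
islands = {
    "Island A": [0.9, 0.6, 0.5, 0.7, 0.9, 0.8, 0.9, 0.9, 0.8],
    "Island B": [0.5, 0.7, 0.8, 0.6, 0.8, 0.7, 0.9, 0.9, 0.9],
}

def refuge_score(values, weights=None):
    """Weighted mean of normalized indicators, yielding a 0-1 suitability score."""
    weights = weights or [1.0] * len(values)
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

for name, values in islands.items():
    print(f"{name}: {refuge_score(values):.2f}")
```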

When scored on our 0-1 metric, Australia, then New Zealand, then Iceland appear to be the most promising island refuges.

A lot of work yet remains to be done. Perhaps there are better interventions to protect the world from an existential level pandemic? Perhaps some other factor(s) is/are critically important to the success of an island refuge that we have not yet included (strong military, the presence of universities or server farms for the world’s knowledge, or some other factor)? How long could the optimal islands hold out? Is there some threshold of population that is essential to ensure that key skills such as nuclear physics or neurosurgery are preserved? We now encourage discussion and debate around these (and many other) issues.

AI, Employment and Ethics


In this post I aim to describe some of the ethical issues around the use of algorithms to make or assist decisions in recruitment and for managing gig employment.

When discussing ethics we are trying to deduce the right thing to do (not the permissible thing, or the profitable thing, or the legal thing).

AI in recruitment

Consider Usha, who is a software engineer specialising in machine learning. Let’s imagine, for the purposes of this example, that she is the most qualified and experienced person in the applicant pool for an advertised position, and would perform the best in the role out of the entire applicant pool. In her application:

  • She uses detectably ‘female’ language in her resume
  • She notes she didn’t start coding until the age of 18
  • She was the founding organiser of LGBTQ on campus

She also has a non-Western name and her dark skin tone made it difficult for an AI system to register her affect during a recorded video interview with a chat bot.

Faced with this data, an AI recruitment algorithm screened her out. She doesn’t get the job. She didn’t even get a face-to-face interview. Given the circumstances, many of us might think this was wrong.

Perhaps it is wrong because some principles such as fairness, or like treatment of like, or equality of opportunity have been transgressed. Overall, an injustice seems to have occurred.

Algorithmic Injustice

In his book Future Politics, Jamie Susskind lays out the various ways in which an algorithm could lead to unjust outcomes.

  • Data-based injustice: where problematic, biased or incomplete data leads the algorithm to decide unfairly
  • Rule-based injustice
    • Overt: the algorithm contains explicit rules discriminating against some people, e.g. discriminating against people on the basis of sexual orientation.
    • Implicit: the algorithm discriminates systematically against some kinds of people due to correlations in the data, e.g. discriminating against those who didn’t start learning to code until after the age of 18 might discriminate against women due to social and cultural norms.
  • The neutrality fallacy: equal treatment for all people can propagate entrenched bias and injustice in our institutions and society.

Susskind notes that most algorithmic injustice can be traced back to actions or omissions of people.
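To make the ‘implicit rule-based’ case concrete, here is a toy sketch with synthetic data. The correlation between gender and the age someone started coding is an assumption introduced purely for illustration, echoing Usha’s example above: the model never sees gender, yet a rule learned on the proxy feature reproduces the bias.

```python
# Minimal sketch of implicit rule-based injustice via a proxy variable.
# The data are synthetic and illustrative only.

import random

random.seed(0)

def make_applicant(is_woman: bool):
    # Hypothetical assumption: social norms mean women in this pool tend to
    # start coding later, even though later starters perform just as well.
    age_started = random.gauss(19 if is_woman else 15, 2)
    performance = random.gauss(0.7, 0.1)          # independent of gender
    return {"woman": is_woman, "age_started": age_started, "perf": performance}

pool = [make_applicant(i % 2 == 0) for i in range(1000)]

# A naive screening rule: reject anyone who started coding after 18.
screened_in = [a for a in pool if a["age_started"] <= 18]

women_in_pool = sum(a["woman"] for a in pool) / len(pool)
women_screened_in = sum(a["woman"] for a in screened_in) / len(screened_in)
print(f"Women in applicant pool:  {women_in_pool:.0%}")
print(f"Women surviving the rule: {women_screened_in:.0%}")   # noticeably lower
```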

Human Rights

Another way of formulating algorithmic ethics is in terms of human rights. In this case, rather than look to the outcome of a process to decide whether it was just or not, we can look to the process itself, and ask whether the applicant’s human rights have been respected.

In a paper titled “Artificial Intelligence & Human Rights: Opportunities & Risks”, the Berkman Klein Center for Internet & Society at Harvard concludes that the following rights could be transgressed by the use of algorithms for recruitment. The rights to:

  • Freedom from discrimination
  • Privacy
  • Freedom of opinion, expression and information
  • Peaceful assembly and association
  • Desirable work

But it might be the case that ongoing ethical discourse could lead us to new rights in the age of AI, perhaps:

  • The right to not be measured or manipulated?
  • The right to human interaction?

Ethical Systems

The foundations for reasoning about rights transgressions or whether outcomes are just or unjust are found in ethical systems. Such systems have been constructed and debated by philosophers for centuries.

The Institute of Electrical and Electronics Engineers (IEEE) recognises this and, in their (300 page!) report ‘Ethically Aligned Design’, they identify and describe a number of Western and non-Western ethical systems that might underpin considerations of algorithmic ethics. The IEEE notes that ethical systems lead us to general principles, and general principles define imperatives. The IEEE lists and explains eight general principles for ethically aligned design.

One example of an ethical system is what might broadly be considered the ‘consequentialist’ system, which determines right and wrong according to consequences. A popular version of this approach is utilitarianism, the ethical approach that seeks to maximise happiness. As an example, under utilitarianism, affirmative action can be good for society as a whole, enriching the experience of college students and enhancing representation in public institutions and making everyone happier in the end. This approach tends to ensure that we act ‘for the greatest good’.

Another example of an ethical system is deontology or ‘rules based’ ethics. Kantianism is a version of deontology, which argues that ethical imperatives come from within us as human beings, and the right thing to do boils down to ensuring that we treat all people with dignity, ensuring that they are never a mere means to an end but an end in themselves. This approach tends to lead to the formulation of rights and duties. For example, it would be wrong to force someone to work without pay (slavery) because this fails to respect their freedom, autonomy, humanity and ultimately dignity, irrespective of the outcomes.

In their report, where the IEEE deduces their general principles of ethically aligned design from a foundation of these ethical systems, the authors note that, “the uncritical use of AI in the workplace, and its impact on employee-employer relations, is of utmost concern due to the high chance of error and biased outcome.”

The IEEE approach is not the only published declaration of ethical principles relevant to algorithmic decision making. The Berkman Klein Center has catalogued and compared a number of these from public and private institutions.

The Gig Economy

Let’s turn now to gig work. Think of Rita, a gig worker for a hypothetical home-cleaning business that operates much like Uber. Rita’s work is monitored by GPS to ensure she takes the most direct route to each job; she’s not sure whether the company tracks her when she’s not working. Only time spent cleaning each house is paid, and the algorithm keeps very tight tabs on her activities. Rita gets warning notifications if she deviates from the prescribed route, such as when she needs to pick her son up from school and drop him at the babysitter. She gets ratings from clients, but one woman, a historical ethnic rival, always rates her low even when she does a good job, and the algorithm warns her that it’s her last chance to do better. Rita stresses about the algorithm, feels constantly anxious and becomes depressed. She misses work, has no sick pay to draw upon, and spirals downward.

We may conceive of such algorithms as ‘mental whips’ and feel concerned that when acting punitively they may be taking data out of context. Furthermore, the ethically appropriate response from the algorithm to, say, an older worker who falls ill might well be different from that to a wayward youth who slacks off. Justice may not be served by equal treatment.

Phoebe Moore has noted that “[such] human resource tool[s] could expose workers to heightened structural, physical and psychosocial risks and stress”, and this is worse if workers feel disempowered.

Surveillance and the Panopticon

Many of the issues around gig management algorithms boil down to issues of surveillance.

Historic surveillance had limitations (e.g. a private detective could only investigate one employee at a time). However, with technological advance we can consider surveillance taken to its purest extreme. This is the situation Jeremy Bentham imagined with his panopticon: a perfect surveillance arrangement where one guard could observe all prisoners in a prison (or workers in a factory, for that matter) at all times, without being seen themselves. As soon as workers know this is the situation, their behaviour changes. When a machine is surveilling people, people serve the machine, rather than machines serving people.

The panopticon is problematic for a number of reasons. Firstly, there is an unfounded assumption of innate shirking. There may be no right to disconnect (especially if the employer performs 24/7 surveillance of social media).

As with Rita there are risks that surveillance data can be taken out of context. We also know that the greater the surveillance, the greater the human demands for sanctions on apparent transgressions.

Finally, the system lacks a counter measure of ‘equiveillance’, which would allow the working individual to construct their own case from evidence they gather themselves, rather than merely having access to surveillance data that could possibly incriminate them.

Ethically we must ask, who is this situation benefitting? Employment should be a reciprocal arrangement of benefit. But with panopticon-like management of workers, it seems that some interests are held above those of others. Dignity may not be respected and workers can become unhappy. It could be argued that Rita is not being treated as an end in herself, but only as a mere means.

It’s true that Rita chose to work for the platform, and by choosing surveillance, has willingly forgone privacy. But perhaps she shouldn’t be allowed to. This is because privacy has group level benefits. A lack of privacy suppresses critical thought and critical thought is necessary to form alliances and hold those that exploit workers to account.

As a society we are presently making a big deal about consumer privacy, but what about employee privacy and protections? Ethics demands that we examine these discrepancies.

We might want to ensure that humans don’t become a resource for machines, where the power relationship is reversed and human behavior (like Rita’s) is triggered by machine activity rather than the other way around. The risk is not that robots will take our jobs; it is that we will become the robots, living ultra-efficient but dehumanized lives.

Ghost Work author Mary Gray says, “[one] problem is that the [algorithmic gig] work conditions don’t recognize how important the person is to that process. It diminishes their work and really creates work conditions that are unsustainable.” This argument contains both consequentialist and deontological points against overzealous algorithmic management of people.

Is there a duty to use algorithms?

I’ve called into question some of the possible uses of algorithms in recruitment and managing the gig economy. Potential injustice seems to lie in wait everywhere: in bad data, implicitly unjust rules, and even neutral rules. But when are algorithms justified? What if customer satisfaction really is ‘up 13%’? Is this an argument for preserving the greatest happiness at the expense of a few workers? Or perhaps techniques for ‘ethically aligned design’ could lead to systems that overcome the ‘discriminatory intent’ in people and also enhance justice (dignity) in the process.

“We can tweak data and algorithms until we can remove the bias. We can’t do that with a human being,” – Frida Polli, CEO Pymetrics.

However, the duty to respect human dignity may require some limitations on the functions and capability of AI in recruitment and the management of gig work. We need to examine what those limitations should be.

Australia’s Chief Scientist, Dr Alan Finkel, has proposed the ‘Turing Certificate’, a recognised mark for consumer technologies that would indicate whether the technology adheres to certain ethical standards. This discussion should be ongoing.

Finally, the irony that we implement oversight and regulatory force to combat the use of surveillance and algorithmic force is not lost on me…

Guyon, we are the problem, not Pharmac


Guyon Espiner has written two pieces in the last week about Pharmac, New Zealand’s medicines buying agency. They can be found here and here.

Both pieces read as attacks, claiming that the ‘secret’ black-box processes Pharmac uses to justify funding for some medicines and not for others are causing people to die.

I do not work for Pharmac, and beyond being a citizen of New Zealand, I have no vested interest in how Pharmac operates. These attacks are unfair and completely miss the point. By singling out Pharmac as the bogey, those of us, yes all of us, who are actually the problem, get away free.

Is there even a problem?

Pharmac’s processes are consistent with international best practice in health funding prioritisation and their objectively determined ‘incremental cost-effectiveness ratios’ or ‘ICERs’ are in line with how health researchers around the world determine value for money. This is what produces the ‘quality adjusted life years per one million dollars’ benchmark that Pharmac bases most of its decisions upon. There is provision for case-by-case exception as well. There is nothing untoward here. And the results of these calculations need to be kept secret for commercial reasons in many cases.
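For readers unfamiliar with the jargon, here is a minimal worked example of the ICER calculation behind that benchmark. The drug costs and QALY gains below are hypothetical numbers chosen purely for illustration, not Pharmac figures.

```python
# A minimal worked example of an incremental cost-effectiveness ratio (ICER).
# All costs and QALY gains are hypothetical, for illustration only.

def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Extra dollars spent per extra quality-adjusted life year gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical new cancer drug vs current standard of care (per patient).
icer_value = icer(cost_new=75_000, cost_old=5_000, qaly_new=0.45, qaly_old=0.20)
print(f"ICER: ${icer_value:,.0f} per QALY gained")       # $280,000 per QALY

# Equivalently, QALYs gained per $1 million spent -- the benchmark framing above.
print(f"{1_000_000 / icer_value:.1f} QALYs per $1 million")
```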

The result of this process? Well the life expectancy of New Zealanders is about 82 years. In the USA, where every drug you can imagine is available, it is 79 years (and falling). So the New Zealand health system is doing something very right (and this includes prevention of disease, and healthy lifestyle promotion, which are all part of the health budget).

So where does the problem (if indeed there is one) lie? It would lie with the amount of money Pharmac is allocated. To repeat, Pharmac is doing the very best it can with the budget it has. So any finger-pointing needs to be directed at David Clark, the Minister of Health (who allocates the health funding received), and Grant Robertson, the Minister of Finance (who allocates funding to health). These Ministers in turn must direct the question to the New Zealand public. Political leaders need to offer realistic explanations of the options, without spin, to the public.

How much money should Pharmac get?

So how much money should Pharmac get? Well, that depends. It depends on our (the New Zealand public’s) values and our preferences. If we value the health of those dying of cancer, and we prefer that expensive life-extending medicine be funded, then that is a completely reasonable position to take. But if these values and preferences mean that the budget for Pharmac must rise, then it is not enough to merely complain and call Pharmac unfair. That is the coward’s way out.

We must propose a preferred policy for resource allocation. If we prefer that money is taken out of education and used to buy cancer drugs, then we should say that. If we prefer that Superannuation is reduced or the age of eligibility is raised, then we should say that. If we prefer that road safety initiatives are scrapped (and the road toll rises) then we should say that. If we prefer that income tax is raised, then we should say that. If we prefer to pay more GST, then we should say that. But we are silent.

If we want the government to increase funding for Pharmac, then we need to state unequivocally what we will trade and be specific. Perhaps it is best that we raise the age of entitlement for NZ Superannuation to 67 or 68. This could be a small price to pay so that those in desperate need can access medicines. Or perhaps we forgo special education support for children with learning difficulties, or perhaps we’d rather have less disposable income and simply raise taxes.

Funding of medicines is up to us

We need to vocalise our priorities and elect a government consistent with that prioritisation. Alternatively the government should offer us clear choices, and respect our decisions. It is up to us to determine how much Pharmac can ultimately spend on medicines. We should not be attacking the organisation that actually drives medicine prices down, and gets us the very best deal possible on hundreds of life-saving products.

All of the above are viable solutions to the ‘Pharmac problem’ but this just shows that Pharmac is not the problem, indeed, Pharmac is the solution to the constraints imposed. The resourcing problem is us and our unwillingness to put forward suggestions for managing the opportunity cost of buying more medicines. Given that attack politics is not constructive, I’ve previously suggested some policies relevant to Pharmac, including factors for consideration here, why we shouldn’t fund some expensive medicines here and I’ve also suggested ways to decrease the cost-burden of cancer in New Zealand here. We need solutions, not criticisms.

It doesn’t matter where you draw the value for money line, there will always be medications that sit above it. If Pharmac’s budget was doubled, there would still be patients who in theory would derive benefit from expensive but unfunded medications. The problem of prioritisation will persist. But if we uncap the Pharmac budget, then other budgets must necessarily have sinking lids.

What will we agree to forgo?

So think of this, perhaps we should all forgo some superannuation to fund a $75,000 drug that extends a cancer patient’s life by 3 months. Or perhaps NZ Super should be means tested. Those might actually be morally correct things to do. But again, the crux of the problem then isn’t Pharmac, it is us. Whenever the government suggests raising the age of NZ Super we all scream foul. Whenever the government suggests increasing GST we all scream foul. Whenever the government suggests cutting services, we all scream foul. In that case, it is not Pharmac who are morally bankrupt, it is us.

Overall, we need to vocalise the solution not the problem. Investigative reporting needs to present this wider societal and political context and not merely act as an advocate for a few.

A Strategy for Future Employment Wellbeing in the face of AI and Digital Transformation


AI, digital technology, and the future of work

Over recent years a number of concerns about the future of work have been raised. Many concerns focus upon the ‘robots will take our jobs’ slogan. Commentators representing technology firms tend to disagree and argue that many more jobs will be created.

Both sides are right, and we need a strategy to manage the transition to a world of employment dominated by artificial intelligence and digital technology.

The Issues surrounding ‘AI’ and the Future of Employment

  • The problem is wider than just ‘AI will take our jobs’; the issue concerns technology in general. Automation does not require AI, merely technology.
  • Almost certainly many new jobs will be created as other jobs and aspects of jobs become automated, but the key problem will be the mismatch between the needs of employers and the skills of workers.
  • The estimated cost to re-skill workers in the US is $34 billion (simply scaling this to the New Zealand population size indicates an immediate $500 million investment is needed).
  • Economic growth requires workers to have jobs (in part because profits accruing to those that own capital tend to accumulate rather than re-enter the economy).
  • The developing world will suffer more from automation as it relies on a disproportionate number of manufacturing jobs.
  • Whole industries will vanish, so it will be irrelevant whether some are automated or not.
  • Many countries are showing slowing of population growth (and the transition to non-sustaining populations). This will have interactive effects with jobs and employment.
  • Our present work paradigm links work to income, status and wellbeing. We don’t want to forgo any of these without an alternative strategy.
  • Unemployment can lead to stigma and shame. These social ills go beyond loss of income and aren’t easily substituted.
  • The value of work goes beyond income, and we don’t want to forgo this value, if we do then even with an abundance of wealth the future is inherently diminished.
  • Immigration will not be a sustainable solution in the face of global demand for skilled workers.

A Strategy for the Future

To deal with the concerns listed above, and a host of other associated concerns, we will need a strategy to manage the phasing out of some industries, redundancies as jobs and aspects of jobs are automated, and re-skilling of the workforce to allow transition to growth sectors.

We will also need to care for those workers between jobs and those unable to transition to new kinds of work.

We may further need to:

  • Weaken the connection between work and income: UBI has been suggested as a possible strategy; this may require new ways of taxing capital, and new conceptions of what constitutes capital.
  • Sever the connection between work and status: we ought to better recognize goodness, kindness, public spiritedness, charity, sustainability and similar traits. By recognizing that many capitalist practices degrade our environment or exploit psychological weaknesses for profit, we can start to recognize that some high ‘status’ individuals are actually anti-community.
  • To encourage innovation and novelty we may need to recognize that just as access to water, sunlight, wind and tides is a right for the entrepreneur, access to data/information, processing power, intelligence, and so on are rights in the digital age.
  • We need to free up knowledge/IP for common use so innovators can stand on the shoulders of giants and devise solutions to pressing problems.
  • We need to curtail the entrenchment of power and growth of inequality because we will need a more equal population for markets to function as intended.
  • We may want to protect the right to work (and therefore be productive in the programmes described next).
  • Beyond this right, we should move to an obligations-based economy, rather than a rights-based one, where companies are obliged to sustain a quota of jobs based on turnover, or to fund government programmes of socially valuable jobs/stipends. These may include funding environmental care programmes, elder and childcare, teaching, the arts, sports, etc.
  • If it really is true that as many jobs will be created as are lost, then such programmes will never be needed, so if businesses selling the ‘plethora of future jobs’ dream truly believe their promises, then they will have no qualms about supporting this regulation and consumer confidence will grow.
  • If total job numbers do decrease we may need to reduce the working week (and treat work as a scarce resource), perhaps in conjunction with raising the age of superannuation as populations age, allowing us to enjoy leisure time throughout life, rather than all in retirement.
  • We will need to provide training and education to aggressively upskill workers. Finland has already taken some steps towards basic AI training for 1% of its population. New Zealand will also need to grow AI (digital) talent.
  • What businesses want now will change rapidly, so the focus should be on building fundamental capability from the ground up. This will require us to teach AI and digital skills in schools and to the unemployed.
  • To encourage retraining we will need to forgive student debt (and provide low interest loans and free schemes to redo education).
  • We need tax breaks for the self-employed who undertake relevant courses part time.
  • We need to move to a job market based on skills like interpreting outputs and data, critical thinking, and evaluation thinking (so we can productively work alongside robots).
  • We may need a managed decline of population if jobs really do become scarce. This has the added bonus of solving all kinds of other problems, such as climate change.
  • We may need urgent research into the economics of degrowth (as opposed to recession, see links to commentary on this approach below).
  • We must ensure that worker protections are a trampoline, not a safety net. This should entail short-term but high investment in those who lose jobs.
  • We should consider paying users of digital applications for their attention (in an ads based economy).
  • We must recognize that experimentation will be required and we must move past conservatism (remembering that stasis helps the oppressors, never the oppressed).
  • We need to plan for all of the concerns at the start of this blog, so IF they occur we can respond with the plan we have already devised.

Overall we need to:

– Have ethical debate about the future. We need to decide what we want the future to look like. What would constitute wellbeing? Is it profit? Or work? Is it exponential growth? Or de-growth?

Further reading: The de-growth economy

See the blog of Jason Hickel, a global inequality researcher and Fellow of the Royal Society of Arts:

https://www.jasonhickel.org/blog/2018/10/27/degrowth-a-call-for-radical-abundance

https://www.jasonhickel.org/blog/2018/11/1/a-simple-solution-to-the-growthdegrowth-debate

Surveillance Capitalism: Ought we permit behavioural data to exist?


I want to draw attention to an interview in the Guardian I just read about surveillance capitalism and the allegedly illegitimate conquest of personal data by big digital firms.

The subject of the interview is Harvard Professor Shoshana Zuboff, author of a book to be released at the end of January 2019: The Age of Surveillance Capitalism

I’ve previously cited one of Prof Zuboff’s earlier papers in an article about ‘Rapid developments in artificial intelligence: how might the New Zealand government respond’.

I found the Guardian interview so compelling I have already ordered her new book.

The book promises to provide a robust intellectual framework for deconstructing the power of the giant tech firms on the basis of illegitimate conquest of ‘digital natives’ (a very clever metaphor).

This argument could be the forceful rejection of rampant data harvesting that many who oppose the unbridled power of big tech have been seeking.

Here are some key points from the article and interview:

Surveillance capitalism:

  • Works by providing free services and enables the providers of those services to gather a phenomenal amount of information about the behaviour of users, frequently without their explicit consent.
  • Claims without care for opposing views that human experience is a free raw material that the surveillance capitalist may translate into behavioural data.
  • Feeds such data into machine intelligence driven manufacturing processes and fabricates this into prediction products.
  • These prediction products are traded in a behavioural futures market.

Why is this a problem?

  • The initial appropriation of users’ behavioural data is arrogant. Data are viewed as a free resource, there for the taking.
  • The key digital technologies are opaque by design and sustain user ignorance.
  • The emergence of these tech giants occurred in a largely law-free context.
  • The combination of state surveillance and capitalist surveillance is separating citizens into two groups: the watchers and the watched.
  • This is important, because as Jamie Susskind notes in his excellent 2018 book Future Politics, the imbalance in political power has historically been mitigated by the strong being scrutinised publicly and the weak enjoying personal privacy. Upset that dynamic and power shifts.
  • Asymmetries of knowledge translate into asymmetries of power.
  • We may have some oversight of state surveillance, but we currently have almost no regulatory oversight of surveillance capitalism.

Finally, the key concept that leaped from this interview for me is the following:

  • “The idea of ‘data ownership’ is often championed as a solution. But what is the point of owning data that should not exist in the first place?”

Zuboff argues that what we have witnessed over the last two decades is a conquest by declaration. A unilateral decree that some entity may harvest and use a resource freely and without limit.

This is colonial imperialism at its most ruthless and it needs limits.

 

‘Healthy News’ labelling – a discussion at Ben Reid’s request


In an AI Forum NZ newsletter yesterday Forum Executive Director Ben Reid said the following:

“Last week, I was part of the delegation at the annual Partnership on AI (PAI) annual all-partners meeting in San Francisco…

…One of the highlights for me was an open and frank speaker panel featuring Kiwi Facebook executive Vaughan Smith which discussed the emerging effects of AI on media and democracy, including the ongoing fallout from Cambridge Analytica and US election manipulation scandals.  It was instructive to understand the challenges that social media giants face attempting to automate moderation of literally billions of posts per day.  Building algorithmic systems which can keep up with the huge diversity of cultures and languages across the world and effectively automate ethical value based decisions on a global scale is a data science and AI challenge in itself! (Discuss…)”

Interesting point Ben, and since you invited us to ‘discuss’, I don’t mind if I do.

I’ve been doing some thinking on this lately, and for me the issue of information pollution is the #1 priority in the world today, because clean information underpins every decision we make, about every issue, and information pollution is destabilizing, harbors risk and threatens the institutions of democracy.

I can’t speak to the technical issues around monitoring billions of posts (and I prefer the word ‘monitoring’ to ‘moderating’). But I can speak to the theory behind the problems.

One idea I’d like to see pursued is the idea of a ‘Healthy News’ content labeling system. This would target the items with the largest number of views/likes/shares, so it would be top-down and not necessary for a post to be labeled until it’s risen to prominence (thereby reducing scale of the solution).

The labeling system needs to be:

  1. Extremely simple, yet articulate enough to quickly pass on the relevant information at a glance and further information at a hover
  2. Grounded in deep technical, proven theory about the dynamics of human information transmission

These dynamics (to greatly simplify the theory that I wrote my Masters and PhD on) boil down to:

  1. Features of the source of the information
  2. Features of the content of the information
  3. Features of frequency of the information

Human psychology has evolved and then developed to attend to these features. This suite of theory (and all its nuances) explains why we believe what we do and why certain information is passed on and other information is not.

Different pieces of information (posts, news items, conspiracy theories, fashions, formulae, etc) ‘succeed’ because they have the right combination of these three factors.

All three factors can be traced and quantified in various ways. For example:

* There are programs that allow you to find the original source of a Twitter post, or the first mention of an exact phrase. The source can be categorized as a major thought hub, or a leading expert, or an isolated individual with links to groups that incite violence.

* There are applications that can deduce the emotional tone of the content of a piece of information (and many other content features like the truth of facts through automated real-time fact checking).

* There are algorithms for tallying shares/likes/retweets/views/downloads to establish frequency distributions.

Using these basic features and a host of other metadata we can categorize information according to its source, content and frequency.

We can then use a system of simple colour coded icons to convey that information.

A red robot icon for example combined with a red angry face icon might mean the source of the item is likely a twitter bot, and the emotional tone of the content is aggressive or inciteful.

A green tick icon combined with an orange antenna icon might mean this item passes fact-checking software, but comes from an infrequently liked source.
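To make the icon idea a little more concrete, here is a minimal sketch of how source, content and frequency signals might be mapped to labels. The thresholds, icon names and input scores are hypothetical; a real system would plug in bot-detection, provenance-tracing and fact-checking services for those signals.

```python
# A minimal sketch of the 'Healthy News' labelling idea. Thresholds, icon
# names and scoring inputs are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class ItemSignals:
    bot_likelihood: float      # 0-1, from a (hypothetical) bot-detection model
    fact_check_score: float    # 0-1, share of checkable claims that pass
    anger_score: float         # 0-1, emotional-tone classifier output
    share_count: int           # raw frequency signal

def label(item: ItemSignals) -> list[str]:
    """Map source, content and frequency signals to simple icon labels."""
    icons = []
    # Source feature
    icons.append("red robot" if item.bot_likelihood > 0.7 else "green person")
    # Content features
    icons.append("green tick" if item.fact_check_score > 0.8 else "orange question")
    if item.anger_score > 0.6:
        icons.append("red angry face")
    # Frequency feature
    icons.append("purple megaphone" if item.share_count > 100_000 else "grey dot")
    return icons

print(label(ItemSignals(0.9, 0.3, 0.8, 250_000)))
# ['red robot', 'orange question', 'red angry face', 'purple megaphone']
```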

These are just examples I’m dreaming up. What we need is a research programme that takes the vast literature of cultural evolution theory, human information transmission dynamics theory, and cognitive bias theory, and deduces which of the variables that drive the theory we can extract from network data, and what these can tell us about information and its dynamics.

We then need to implement this labelling, and provide education around information transmission dynamics in schools, on websites, everywhere, until people understand how and why this stuff flows the way it does. It will take time, but just as we learned to interpret all the vast array of icons we come across daily, we can grasp this too, and the implications particular icon sets have.

Additionally, hovering over the icons provides additional information about the information transmission dynamics and why this item has spread as it has.

This is a quick first pass at this ‘Healthy News’ labelling idea. But joint work between technical digital media experts and those who understand the science of cultural evolution and cognitive biases could start to deduce how and why this all happens and how we can warn users about dangerous, false, hysterical, manipulative, and malicious content.

I found the following article particularly interesting: https://medium.com/hci-design-at-uw/information-wars-a-window-into-the-alternative-media-ecosystem-a1347f32fd8f

In it they generate this figure from network information harvested:


The caption reads: “we generated a graph where nodes were Internet domains (extracted from URL links in the tweets). In this graph, nodes are sized by the overall number of tweets that linked to that domain and an edge exists between two nodes if the same Twitter account posted one tweet citing one domain and another tweet citing the other. After some trimming (removing domains such as social media sites and URL shorteners that are connected to everything), we ended up with the graph you see in Figure 1. We then used the graph to explore the media ecosystem through which the production of alternative narratives takes place.”

It is this kind of work, and the software that drives it, which needs to be combined in the tool for ‘Healthy News’ labelling, ideally using a watermarking system.
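For the technically inclined, here is a minimal sketch of the graph construction the quoted caption describes, using networkx. The tweet records are hypothetical placeholders; a real pipeline would extract account and domain pairs from harvested tweets.

```python
# Minimal sketch of the domain co-citation graph described in the caption:
# nodes are domains, sized by tweet counts, with an edge when the same
# account has tweeted links to both domains. The data are placeholders.

import itertools
from collections import Counter, defaultdict
import networkx as nx

# (account, domain extracted from the tweeted URL) -- placeholder data
tweets = [
    ("user1", "example-news.com"), ("user1", "alt-site.org"),
    ("user2", "alt-site.org"),     ("user2", "example-news.com"),
    ("user3", "example-news.com"), ("user3", "another-blog.net"),
]

domain_counts = Counter(domain for _, domain in tweets)
domains_by_account = defaultdict(set)
for account, domain in tweets:
    domains_by_account[account].add(domain)

G = nx.Graph()
for domain, count in domain_counts.items():
    G.add_node(domain, size=count)               # node sized by tweet count
for domains in domains_by_account.values():
    for a, b in itertools.combinations(sorted(domains), 2):
        G.add_edge(a, b)                         # co-cited by the same account

print(G.nodes(data=True))
print(G.edges())
```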

The next step will be to fight off those who attempt to manipulate the new system, but that’s another story…

End of discussion.

Keeping our eye on the laser phish: Information pollution, risk, and global priorities


  • This post is on what I consider to be the most pressing problem in the world today.
  • I lay out the theory underpinning information pollution, the significance and trajectory of the problem, and propose solutions (see bullets at the end). 
  • I encourage you to persist in reading this post, so that we can all continue this important conversation. 

Introduction

Technological innovation and growth of a certain kind are good, but I want to explain why risk mitigation should be more of a priority for the world in 2018. Without appropriate risk mitigation, we could upend the cultural, economic and moral progress we have enjoyed over the last half-century and miss out on future benefits.

One particular risk looms large: we must urgently address the threat of information pollution and an ‘infopocalypse’. Cleaning up the information environment will require substantial resources, just as climate change mitigation does. The threat of an information catastrophe is more pressing than climate change and has the potential to subvert our response to climate change (and other catastrophic threats).

In a previous post, I responded to the AI Forum NZ’s 2018 research report (see ‘The Good the Bad and the Ugly’). I mentioned Eliezer Yudkowsky’s notion of an AI fire alarm. Yudkowsky was writing about artificial general intelligence; however, it’s now apparent that even with our present rudimentary digital technologies the risks are upon us. ‘A reality-distorting information apocalypse is not only plausible, but close at hand’ (Warzel 2018). The fire alarm is already ringing…

Technology is generally good

Technology has been, on average, very good for humanity. There is almost no doubt that people alive today have lives better than they otherwise would have because of technology. With few exceptions, perhaps including mustard gas, spam and phishing scams, arguably nuclear weapons, and other similar examples, technology has improved our lives.

We live longer healthier lives, are able to communicate with distant friends more easily, and travel to places or consume art we otherwise could not have, all because of technology.

Technological advance has a very good track record, and ought to be encouraged. Economic growth has in part driven this technological progress, and economic growth facilitates improvements in wellbeing by proxy, through technology.

Again, there are important exceptions, for example where there is growth of harmful industries that cause damage through externalities such as pollution, or through products that make lives worse, such as tobacco or certain uses of thalidomide for example.

The twentieth century, however, with its rapid growth, technological advance, relative peace, and moral progress, was probably the greatest period of advance in human wellbeing the world has experienced.

Responsible, sustainable growth is good

The key is to develop technology, whilst avoiding technologies that make lives worse, and to grow while avoiding threats to sustainability and harm to individuals.

Ideally the system should be stable, because the impacts of technology and growth compound and accumulate. If instability causes interruption to the processes, then future value is forgone, and the area under the future wellbeing curve is less than it otherwise would have been.

Economist Tyler Cowen explains this at length in his book Stubborn Attachments. Just as opening a superannuation account too late in life can forgo a substantial proportion of potential future wealth, delayed technological development and growth can forgo substantial wellbeing improvements for future people.

Imagine if the Dark Ages had lasted an extra 50 years: we would presently be without the internet, mobile phones, coronary angiography and affordable air travel.

To reiterate, stability of the system underpins the magnitude of future benefit. There are however a number of threats to the stability of the system. These include existential threats (which would eliminate the system) and catastrophic risks (which would set the system back, and so irrevocably forgo future value creation).

Risk mitigation is essential

The existential threats include (but are not limited to): nuclear war (if more than a few hundred warheads are detonated), asteroid strikes, runaway climate change (the hothouse earth scenario), systematic extermination by autonomous weapons run amok, an engineered bioweapon multistrain pandemic, geoengineering experiment gone wrong, and assorted other significant threats.

The merely catastrophic risks include: climate change short of hothouse earth, war or terrorism short of a few hundred nuclear warhead detonations, massive civil unrest, pandemic influenza, system collapse due to digital terror or malfunction, and so on.

There is general consensus that the threat of catastrophic risk is growing, largely because of technological advance (greenhouse gases, CRISPR, power-to-size warhead improvements, dependence on just-in-time logistics…). Even a 0.1% risk per year, from each of ten catastrophic threats, makes it more likely than not that one of them occurs this century (a rough calculation is sketched below). We need to make sure the system we are growing is not only robust against risk, but is antifragile, and grows to strengthen in response to less severe perturbations.

Currently it does not.
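As a rough check on the compounding arithmetic above, here is a minimal sketch. The assumption that the ten threats are independent, and the 100-year horizon, are mine, for illustration only.

```python
# A rough sketch of the compounding arithmetic: ten independent threats,
# each with a 0.1% chance per year, over a century. Independence is an
# illustrative assumption.

p_per_threat_per_year = 0.001
n_threats = 10
years = 100

p_none_in_a_year = (1 - p_per_threat_per_year) ** n_threats
p_at_least_one = 1 - p_none_in_a_year ** years
print(f"Chance at least one catastrophe occurs in {years} years: {p_at_least_one:.0%}")
# roughly 63%
```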

Although we want to direct many of our resources toward technological growth and development, we also need to invest a substantial portion in ensuring that we do not suffer major setbacks as a result of these foreseeable risks.

We need equal measures of excited innovation and cautious pragmatic patience. We need to get things right, because future value (and even the existence of a future) depends on it.

We must rationally focus and prioritize

There are a range of threats and risks to individuals and society. Large threats and risks can emerge at different times and grow at different rates. Our response to threats and risks needs to be prioritized by the imminence of the threat, and the magnitude of its effects.

It is a constant battle to ensure adequate housing, welfare, healthcare, and education. But these problems, though somewhat important (pressing and a little impactful), and deserving of a decent amount of focus, are relatively trivial compared with the large risks to society and wellbeing.

Climate change is moderately imminent (significant temperature rises over the next decades) and moderately impactful (it will cause severe disruption and loss of life, but it is unlikely to wipe us out). A major asteroid strike is not imminent (assuming we are tracking most of the massive near earth objects), but could be hugely impactful (causing human extinction).

The Infopocalypse

I argue here that the risks associated with emerging information technologies are seriously imminent, and moderately impactful. This means that we ought to deal with them as a higher priority and with at least as much effort as our (woefully inadequate) efforts to mitigate climate change.

To be clear, climate change absolutely must be dealt with in order to maximize future value, and the latest IPCC report is terrifying. If we do not address it with sufficiently radical measures then the ensuing drought, extreme weather, sea level rises, famine, migration, civil unrest, disease, and so on, will undermine the rate of technological development and growth, and we will forgo future value as a result. But the same argument applies to the issue of information pollution. First, I will explain some background.


Human information sharing and cognitive bias

Humanity has shown great progress and innovation in storing and packaging information. Since the first cave artist scratched an image on the wall of a cave, we have continued to develop communication and information technologies with greater and greater power. Written symbols solved the problem of accounting in complex agricultural communities, the printing press enabled the dissemination of information; radio, television, the internet, and mobile phones have all provided useful and life enhancing tools.

Humans are a cultural species. This means that we share information and learn things from each other. We also evolve beneficial institutions. Our beliefs, habits and formal routines are selected and honed because they are successful. But the quirks of evolution mean that it is not only ideas and institutions that are good for humanity that arise. We have a tendency for SNAFUs.

We employ a range of different strategies for obtaining relevant and useful information. We can learn information ourselves through a trial and error process, or we can learn it from other people.

Generally, information passed from one generation to the next, parent to child (vertical transmission), is likely to be adaptive information that is useful for navigating the problems the world poses. This is because natural selection has instilled in parents a psychological interest in preparing their children to survive and these same parents, holders of the information, are indeed alive.

Information that we glean from other sources such as our contemporaries (horizontal transmission) does not necessarily track relevant or real problems in the environment, nor necessarily provide us with useful ways to solve these problems. Think of the used car salesperson explaining to you that you really do need that all-leather interior. Think of Trump telling welfare beneficiaries that they’ll be better off without Medicare.

Furthermore, we cannot attend to all the information all the time, and we cannot verify all the information all the time. So we use evolved short cuts, useful heuristics that have obtained for us, throughout history and over evolutionary time, the most useful information there is. Such simple psychological rules as ‘copy talented and prestigious people’, or ‘do what everyone else is doing’, have generally served us well.

Until now…

There are many other nuances to this system of information transmission, such as the role of ‘oblique transmission’, e.g. from teachers, the role of group selection for fitness rather than individual selection, and the role of the many other cognitive biases besides the prestige-biased information copying and frequency-dependent copying just mentioned. And there is also the appeal of the content of the information itself: does it seem plausible, does it fit with what is already believed, does it have a highly emotive aspect, or is it simple to remember?

The key point is that the large-scale dynamics of information transmission depend on these micro processes of content, source, and frequency assessment (among other processes) at the level of the individual.

All three of these key features can easily be manipulated, at scale, and with personalization, by existing and emerging information technologies.

Our usually well-functioning cognitive short-cuts can be hacked. The advertising and propaganda industries have realized this for a long time, but until now their methods were crude and blunt.
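To illustrate how fragile one of these short-cuts is, here is a toy simulation of frequency-dependent (‘do what everyone else is doing’) copying, and of how inflating the apparent frequency of a belief with fake accounts can flip what a population adopts. The model and its numbers are illustrative assumptions, not empirical estimates.

```python
# A toy simulation of frequency-dependent copying and of how bot-inflated
# apparent frequency can flip what a population adopts. Illustrative only.

import random

random.seed(1)

def run(genuine_share, bot_boost, population=1000, rounds=20):
    """Fraction of the population believing a claim after repeated rounds
    of copying driven by the claim's *apparent* frequency."""
    believers = genuine_share
    for _ in range(rounds):
        apparent = min(1.0, believers + bot_boost)   # what observers perceive
        # Each person adopts the belief with probability equal to its
        # perceived frequency ('do what everyone else seems to be doing').
        adopting = sum(random.random() < apparent for _ in range(population))
        believers = adopting / population
    return believers

print(f"No bots:        {run(0.10, 0.00):.0%} believe at the end")
print(f"With bot boost: {run(0.10, 0.45):.0%} believe at the end")
```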

The necessity of true (environmentally tracking) information

An important feature of information transmission is that obtaining information imposes a cost on the individual. This cost can be significant due to the attention required, time and effort spent on trial and error, research, and so forth.

It is much cheaper to harvest information from others rather than obtain it yourself (think content producers vs content consumers). Individuals who merely harvest free information without aiding the production and verification of information are referred to in cultural and information evolution models as ‘freeriders’.

Freeriders do very well when environments are stable, and the information in the population tracks that environment, meaning that the information floating around is useful for succeeding in the environment.

However, when environments change, then strategies need to change. Existing information biases and learning strategies, favoured by evolution because, on average, they obtain good quality information, may no longer track the relevant features of the environment. These existing cognitive tools may no longer get us, on average, good information.

Continuing to use existing methods to obtain or verify information when the game has changed can lead individuals and groups to poor outcomes.

We are seeing this in the world today.

The environment for humanity has been changing rapidly and we now inhabit a world of social media platform communication, connectivity, and techniques for content production, which we are not used to as a species. Our cognitive biases, which guide us to trust particular kinds of information, are not always well suited to this new environment, and our education systems are not imbuing our children with the right tools for understanding this novel system.

As such, the information we share is no longer tracking the problem space that it is meant to help us solve.

This is particularly problematic where ‘consume only (and do not verify)’ freeriders are rife, because then those that create content have disproportionate power. Those who create content with malice (defectors) have the upper hand.

The greater the gap between the content in the messages and the survival and wellbeing needs of the content consumers, the greater the risk of large scale harm and suffering across time.

If we don’t have true information, we die.

Maybe not today, maybe not tomorrow, but probabilistically and eventually.

Because fundamentally that is what brains are for, they are for tracking true features of the environment and responding to them in adaptive ways. The whole setup collapses if the input information is systematically false or deceptive and our evolved cognitive biases persist in believing it. The potential problem is immense. (For a fuller discussion see Chapter 2 of my Masters thesis here).

How information technology is a threat: laser phishing and reality apathy

Information has appeal due to its source, frequency or content. So how can current and emerging technological advances concoct a recipe for disaster?

We’ve already seen simple hacks and unsophisticated weaponizing of social media almost certainly influence global events for the worse, such as the US presidential election (Cambridge Analytica), the Brexit vote, the Rohingya genocide, suppression of women’s opinions in the Middle East, and many others. The Oxford Computational Propaganda Project catalogues these.

These simple hacks involve the use of human trolls, and some intelligent systems for information creation, testing and distribution. Common techniques involve bot armies to convey the illusion of frequency, thousands of versions of advertisements with reaction tracking to fine tune content, and the spread of fake news by ‘prestigious’ individuals and organizations. All these methods can be used to manipulate the way information flows through a population.

But this is just the tip of the iceberg.

In the above cases a careful user can remain skeptical of much of the information presented by comparing the messages received to the reality at large (though this involves effort). However, we are rapidly entering an era where reality at large will be manipulated.

Technology presently exists that can produce authentic-sounding human audio, manipulate video seamlessly, remove or add individuals in images and video, create authentic-looking video of individuals apparently saying things they never said, and perform a wide range of other malicious manipulations.

A mainstay of these techniques is the generative adversarial network (GAN), a machine learning setup in which a generator network produces new content and a discriminator network tries to tell it apart from real examples; the two are trained against each other until the generated content is indistinguishable from the training data, despite never having appeared in it. Insofar as we believe video, audio and images document reality, GANs are starting to create reality.
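As a concrete illustration, here is a minimal sketch of that adversarial training loop, using PyTorch and a toy one-dimensional 'real' distribution (both are my own illustrative choices; actual deepfake systems are far larger and operate on images, audio or video):

```python
# Minimal GAN sketch: a generator learns to mimic samples from a "real"
# distribution while a discriminator learns to tell real from generated.
import torch
import torch.nn as nn

real_dist = lambda n: torch.randn(n, 1) * 0.5 + 2.0   # "real" data: N(2, 0.5)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = real_dist(64)
    fake = G(torch.randn(64, 8))

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The generator's samples should drift toward the real mean (~2.0).
print(G(torch.randn(1000, 8)).mean().item())
```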

Targeting this new ‘reality’ in the right ways can exploit the psychological biases we depend on to attend to what ought to be the most relevant and important information amid a sea of content.

We are all used to being phished these days: an attempt to deceive us into acting in the interests of a hostile entity, often via an email or social media message. Phishing is usually an attempt to obtain personal information, but it can also manifest as efforts to convince us to purchase products that are not in our interests.

The ‘Nigerian scams’ were some of the first such phishes, but techniques have advanced well beyond ‘Dear esteemed Sir or Madam…’

Convincing us of an alternate reality is the ultimate phish.

Laser phishing is the situation where the target is phished but the phish appears to be an authentic communication from a trusted source. Perhaps your ‘best friend’ messages you on social media, or your ‘boss’ instructs you to do something.

The message reads exactly like the genuine article, complete with the tone, colloquialisms and typical misspellings you are used to from that individual. This is because machine learning techniques have profiled the ‘source’ of the phish and present an authentic-seeming message. If this technique becomes advanced, is scaled endlessly through automation, and is frequently deployed, it will be necessary, but simply impossible, to verify the authenticity of every message and communication.

The mere existence of the technique (and others like it) will cast doubt on every piece of information you encounter every day.

But it gets worse.

Since GANs are starting to create authentic-seeming video, we can imagine horror scenarios involving video of Putin or Trump or Kim Jong Un declaring war. I won’t dwell too much on these issues here, as I’ve previously posted on trust and authenticity, technology and society, freedom and AI, and AI and human rights. Needless to say, things are getting much worse.

Part of the problem lies in the incentives that platform companies have for sustaining user engagement. We know that fake news spreads more widely than truth online. This tends to lead to promotion of sensationalist content and leaves the door wide open for malicious agents to leverage an attentive and psychologically profiled audience. The really big threat is when intelligent targeting (the automated laser phishing above) is combined with dangerous fake content.

These techniques (and many others) have not been widely deployed yet, and by ‘widely’ I mean that most of the digital content we encounter is not yet manipulated. But the productive potential of digital methods and the depth of insight about targets gleaned from shared user data are not bound by human limits.

We depend on the internet and digital content for almost everything we do or believe. Very soon more content will be fake than real. That is not an exaggeration. I’ll say it again: very soon more content will be fake than real. What will we make of that? Without true information we cannot track the world, we cannot progress.

A real possibility is that we come to question everything, even the true and useful things, and become apathetic toward reality by default.

That is the infopocalypse.

Tracking problems in our environment

It is critical for our long-term survival, success and wellbeing, that the information we obtain tracks the true challenges in our environment. If there is a mismatch between what we think is true and what really is true, then we will suffer as individuals and a species (think climate denial vs actual rising temperatures).

If we believe that certain kinds of beliefs and institutions are in our best long-term interests when they are not, then we are screwed.

Bad information could lead to maladaptation and potentially to extinction. This is especially true if the processes that are leading us to believe the maladaptive information are impervious to change. There are a number of reasons why this might be so: the processes might be leveraging our cognitive biases, they may be sustained by powerful automated entities, or they may quell our desire for change through apparent reward.

There are many imaginable scenarios in which almost all the information we consume is no good for us (it is ‘non-fitness tracking’), yet we lap it up anyway.

We’re seeing this endemically in the US at the moment. The very individuals who stand to lose the most from Trump’s health reform are the most vocal supporters of his policies. The very individuals who stand to gain most from a progressive agenda are sending pipe bombs in the mail.

Civil disorder and outright conflict are only (excuse the pun) a stone’s throw away.

This is the result of all the dynamics I’ve outlined above. Hard-won rights, social progress, and stability are being eroded, and that will mean we forgo future value, because if the world taught us anything in the 20th century it’s that…

… peace is profitable.

If we can’t shake ourselves out of the trajectory we are on, then the trajectory is ‘evolutionarily stable’, to use Dawkins’ term from 1976. And to quote, ‘an evolutionarily stable strategy that leads to extinction… leads to extinction’.

This is not hyperbole, because as noted above, hothouse earth is an extinction possibility and nuclear war is an extinction possibility. If the rhetoric and emerging information manipulation techniques take us down one of these paths then that is our fate.

To reiterate, the threat of an infopocalypse is more pressing and more imminent than the threat of climate change, and we must address it immediately, with substantial resource investment, otherwise malicious content-creating defectors will win.

As we advance technologically, we need to show restraint and patience and mitigate risks. This means a little research, policy and action, taken thoughtfully, rather than rushing to the precipice.

The battle against system perturbation and risk is an ongoing one, and many of the existing risks have not yet been satisfactorily mitigated. Nuclear war is a stand-out contender for greatest as-yet-unmitigated threat (see my previous post on how we can keep nuclear weapons but eliminate the existential threat).

Ultimately, a measured approach will result in the greatest area under the future value curve.

So what should we do?

I feel like all the existing theory that I have outlined and linked to above is even more relevant today than when it was first published. I also feel like there are not enough people with broad generalist knowledge in these domains to see the big picture here. The threats are imminent, they are significant, and yet with few exceptions, they remain unseen.

We have the evolutionary theory, information dynamics theory, cognitive bias theory, and machine learning theory to understand and address these issues right now. But that fight needs resourcing, and it needs to be communicated so that a wider population understands the risks.

Solutions to this crisis, just like solutions to climate change, will be multifaceted.

  • In the first instance we need more awareness that there is a problem. This will involve informing the public, technical debate, writing up horizon scans, and teaching in schools.
  • Children need to grow up with information literacy. I don’t mean just how to interpret a media text, or how to create digital content. I mean they need to learn how to distinguish real from fake, and how information spreads due to the system of psychological heuristics, network structure, frequency and source biases, and the content appeal of certain kinds of information. These are critical skills in a complex information environment and we have not yet evolved defenses against the current threats.
  • We need to harness metadata and network patterns to automatically watermark content and develop a ‘healthy content’ labelling system akin to healthy food labels, to inform consumers of how and why pieces of information have spread. We need to teach this labelling system widely. We need to fix fake news. (I’ve recently submitted a proposal to a philanthropically funded competition for research funding to contribute to exactly that project, and I have more ideas if others out there can help fund the research.)
  • We need mechanisms, such as blockchain identity verification, to subvert laser phishing (see the signing sketch after this list).
  • We need to outlaw the impersonation of humans in text, image, audio or video.
  • We need to be vigilant to the technology of deception.
  • We need to consider the sources of our information and fight back with facts.
  • We need to reject the information polluting politics of populism.
  • We need to invest in cryptographic verification of images and audio.
  • We need to respect human rights whenever we deploy digital content.
  • We also need a local NZ summit on information pollution and reality apathy.
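As a minimal illustration of the verification idea behind the identity and cryptographic points above, here is a sketch using Ed25519 signatures via the third-party Python 'cryptography' package (my own illustrative choice; a deployed system, blockchain-based or otherwise, would also need key distribution, revocation and timestamping):

```python
# Minimal sketch: signing a message so a recipient can check that it really
# came from the claimed sender and has not been altered in transit.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The sender generates a key pair once and publishes the public key through
# some trusted channel (key distribution is the hard part in practice).
sender_key = Ed25519PrivateKey.generate()
sender_public = sender_key.public_key()

message = b"Please transfer the files to the usual server."
signature = sender_key.sign(message)

# The recipient verifies the signature against the published public key.
try:
    sender_public.verify(signature, message)
    print("Signature valid: message is from the key holder and unmodified.")
except InvalidSignature:
    print("Signature invalid: treat the message as suspect.")

# A laser-phished imitation fails verification, however convincing it reads.
forged = b"Please transfer the files to my new server."
try:
    sender_public.verify(signature, forged)
except InvalidSignature:
    print("Forged or altered content detected.")
```

The same pattern extends to images and audio: sign a hash of the content at capture time, and any later manipulation breaks verification.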

Summary

More needs to be done to ensure that activity at a local and global level is targeted rationally towards the most important issues and the most destabilizing risks. This means a rational calculus of the likely impact of various threats to society and the resources required for mitigation.

Looking at the kinds of issues that today ‘make the front page’ shows that this is clearly not happening at present (‘the Koru Club was full’ – I mean seriously!). And ironically the reasons for this are the very dynamics and processes of information appeal, dissemination and uptake that I’ve outlined above.

A significant amount is known about cultural informational ‘microevolutionary’ processes (both psychological and network-mediated) and it’s time we put this theory to work to solve our looming infopocalypse.

I am more than happy to speak pro bono or as a guest lecturer on these issues of catastrophic risk, the threat of digital content, or information evolution and cognitive biases.

If any organizations, think tanks, policy centers, or businesses wish to know more then please get in touch.

I am working on academic papers about digital content threat, catastrophic risk mitigation, and so on. However, the information threat is emerging faster than it can be published on.

Please fill out my contact form to get in touch.

 

Selected Further Reading:

Author’s related shorter blogs:

The problem of Trust and Authenticity

Technology and society

AI and human rights

AI Freedom and Democracy

Accessible journalism:

Helbing (2017) Will democracy survive big data and artificial intelligence

Warzel (2018) Fake news and an information apocalypse

Academic research:

Mesoudi (2017) Prospects for a science of cultural evolution

Creanza (2017) How culture evolves and why it matters

Acerbi (2016) A cultural evolution approach to digital media

Author’s article on AI Policy (2017) Rapid developments in AI

Author’s article on memetics (2008) The case for memes

Author’s MA thesis (2008) on Human Culture and Cognition