Optimism about the Future of Humanity: a conference on existential risks

In 1826 Mary Shelley crafted a vision of humanity’s end in ‘The Last Man’, depicting a world that persists, indifferent to the demise of our species. The end came at the hands of a pandemic, spread by the human technologies of trade and news.

Since the construction of nuclear weapons in 1945, humanity has wielded technological power of extreme destruction, and the expert consensus is that the greatest threat to humans is humans themselves.

But given that we are the threat, there is also cause for much hope. Humans are self-reflexive and can change behaviour. Technology has raised the standard of living and human wellbeing worldwide, has provided the tools to escape the Covid-19 pandemic, and promises the foundation for a flourishing future.

Provided we govern and wield technology with appropriate wisdom.

The Existential Risk Observatory, founded in 2021 in the Netherlands, is the latest in a series of global institutions concerned with humanity’s future, with a mission to ensure a thriving global society immune from existential threats.

Driven by optimism for our collective future, the Observatory convened a conference on existential risks and invited speakers from around the world.

I had the privilege of presenting my take on biological threats, drawing on research I’d undertaken in conjunction with Nick Wilson of the University of Otago, and others, prior to Covid-19, as well as lessons from New Zealand’s experience with Covid-19, and international research on biological threats.

You can watch my presentation by clicking this link (Session two, talk from 25:10, Q&A from 1:08:55).

Below, I’ve provided the full menu of talks at the conference, which includes:

  • artificial intelligence
  • climate change
  • nuclear weapons
  • biological threats
  • policy approaches

Existential Risk Observatory (Netherlands) Conference on Existential Risks

Session one (7 October 2021)

0:00                 Introduction to the Conference

17:45               Power Hour (general discussions of the conference’s themes)

1:20:45            Climate Change – Ingmar Rentzhog (Founder/CEO We Don’t Have Time)

2:47:34            Existential Risks – Simon Friederich (University of Groningen)

3:48:00            Artificial Intelligence – Roman Yampolskiy (University of Louisville)

Session two (8 October 2021)

0:00                 Introduction to Session Two

25:10               Biological Risks – Matt Boyd (Adapt Research Ltd, New Zealand)

1:26:15            Policy – Rumtin Sepasspour (Cambridge Centre for the Study of Existential Risk)

2:52:25            Nuclear Weapons – Susi Snyder (PAX, Nobel Peace Laureate)

3:56:29            Artificial Intelligence Policy – Claire Boine (Harvard & Future of Life Institute)

As Rumtin Sepasspour (Research Affiliate, Cambridge University) noted in his presentation, governments are key stakeholders in the quest for immunity from existential risk, particularly those that arise from accidental or deliberate use of technology. Governments should look at existential risks as a set to be analysed, prioritised and mitigated.

In our quest to understand, prevent, prepare for and respond to existential threats, every country should hold such meetings of diverse stakeholders to share knowledge and ideas for successfully navigating the period where our technological power outstrips our institutional wisdom.

A new report from the Secretary-General of the United Nations, ‘Our Common Agenda’, calls on nations to develop foresight and futures capability under an umbrella of coordinated global action.

A very good, informed summary and discussion of the UN report can be read here.
