WHAT’S THE WORST THAT COULD HAPPEN? EXISTENTIAL RISK AND EXTREME POLITICS
EAGxAustralia Conference, Effective Altruism Australia, Melbourne
Friday, 22 September 2023
I acknowledge the people of the Kulin Nations as traditional custodians of the land and pay my respects to their Elders past and present. I acknowledge any First Nations people and businesses represented here today. I commit myself to the implementation of the Uluru Statement from the Heart, which starts by voting Yes on October 14.
Much of what we focus on in politics centres on immediate challenges. This week, I’ve participated in discussions about competition policy and randomised trials, community-building and economic dynamism. These are important issues for Australia’s future.
But the EAGxAustralia conference provides an opportunity to think about existential risk – about dangers not only to our way of life, but to our lives themselves.
In a busy life, it’s easy to confuse the improbable with the impossible.
What would happen if you decided to cross the road without checking the traffic? Odds are that you’d survive unscathed. But do it enough times and you’re likely to come a cropper.
That’s where catastrophic risk comes in.
As a species, humanity is now playing with technological innovations that pose a small but real risk of ending our species. Tens of thousands of nuclear weapons pointed at major cities. Climate change that could lead to unstoppable feedback loops. Biotechnology that could allow the creation of deadly pathogens. Computer technology that could create a machine that is smarter than us and doesn’t share our goals.
As a teenager, I joined Palm Sunday anti-nuclear rallies. As an adult, I’ve been a strong advocate of climate change action. But when I entered parliament in 2010, the issue of existential risk didn’t loom large on my radar. My priority was people’s quality of life, not the end of life itself.
I’ve come to believe that catastrophic risk is a vital issue. In my new book What’s the Worst That Could Happen? Existential Risk and Extreme Politics (MIT Press), I quote the estimate of Oxford philosopher Toby Ord that the chance of a species-ending event in the next century is one in six. A one in six chance of going the way of dodos and dinosaurs effectively means we are playing a game of Russian roulette with humanity’s future. Six chambers. One bullet. Even the most foolhardy soldier usually finds an excuse not to play Russian roulette. And that’s when just their own life is at stake. In considering extinction risk, we’re contemplating not one death but rather the death of billions or possibly trillions of people—not to mention countless animals.
And that’s just the danger over the coming century. If we keep it up for another millennium, there’s a five in six chance that humans never make it to the year 3000. Like the person who crosses the road without checking for traffic, humanity’s luck would eventually run out.
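The “five in six over a millennium” figure follows from simple compounding. A minimal sketch, assuming Ord’s one-in-six-per-century estimate holds constant for each of ten centuries:

```python
# Compounding the per-century extinction risk over a millennium.
# Assumption: Ord's 1-in-6 risk applies independently to each century.

per_century_risk = 1 / 6
centuries = 10  # one millennium

survival = (1 - per_century_risk) ** centuries  # chance of surviving all ten
extinction = 1 - survival

print(f"Chance of surviving to the year 3000: {survival:.1%}")
print(f"Chance of extinction before 3000:     {extinction:.1%}")
```

This yields a survival probability of about 16%, i.e. an extinction probability of roughly 84% — close to the five-in-six (83%) figure quoted above.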
That’s tragic for those who perish, and for those who would never get to experience life at all. We’ve got another billion years or so before the sun engulfs earth. That’s enough time for another 30 million generations of humans. Not bad for a species that’s only been around for about 10,000 generations so far. Far from being the stuff of science fiction, ensuring the safety of the human project should be a vital responsibility for all of us today.
What are the biggest risks? Naturally occurring hazards aren’t trivial. They include supervolcanoes such as the one that formed Yellowstone National Park. An asteroid, of the kind that wiped out the dinosaurs 66 million years ago. Naturally occurring pandemics such as the Black Death. Such hazards are real, and merit an appropriate response. You’ll be pleased to know that NASA’s Planetary Defense Coordination Office is tracking near-earth objects, and a year ago successfully altered the course of a nearby asteroid.
But the biggest dangers are the ones that our technologies have wrought. Unexpected climate change feedback loops – such as the melting of the Greenland and Antarctic ice sheets – could lead to long-term temperature rises of 6 degrees or more. Nuclear missiles kept on hair trigger alert might lead to a miscalculation that ends up in a large-scale nuclear conflict. The misuse of genetic technologies could see terrorists produce a bug that spreads as quickly as measles, but is far more deadly. When computers become smarter than humans, we need to ensure that the first superintelligence doesn’t regard humanity the same way that most of us see the world’s insects.
Underlying all of this is the rise of populism: the philosophy that politics is a conflict between the pure mass of people and a vile elite. Since 1990, the number of populist leaders holding office worldwide has quintupled. Most are right-wing populists, who demonise intellectuals, immigrants and the international order.
As COVID-19 demonstrated, populists’ angry approach to politics, scorn towards experts and disdain for institutions made the pandemic much worse. The same goes for other catastrophic risks. If major nations withdraw from international health bodies and climate agreements, the danger rises. Forging an international agreement on artificial intelligence safety will likely prove impossible if the populists run the show.
The catastrophic risks that threaten our species have been the focus of so many movies that you could run a disaster film festival. We’ve seen movies featuring natural pandemics (Outbreak, Carriers, and Contagion), bioterrorism (12 Monkeys, V for Vendetta, and 28 Days Later), asteroid strikes (Deep Impact, Armageddon, and Judgment Day), nuclear war (Dr. Strangelove, On the Beach, and The Day After), artificial intelligence (Avengers: Age of Ultron, The Matrix, and Terminator), and climate change (Waterworld, Mad Max: Fury Road, and Blade Runner 2049).
These dangers have had us on the edge of our movie seats, but they haven’t gotten most people off the couch to act. Yet the answers aren’t hard to find. For each existential peril, there’s a handful of sensible solutions. For example, to reduce the threat of bioterrorism, we should improve the security of DNA synthesis. To tackle climate change, we need to cut carbon emissions and assist developing nations to follow a low-emissions path. To lower the chance of atomic catastrophe, we should take missiles off hair-trigger alert and adopt a universal principle of no first use. To improve the odds that a superintelligent computer will serve humanity’s goals, research teams should adopt programming principles that require advanced computers to be observant, humble, and altruistic.
Beyond this, everyone who cares passionately about the future of humanity should view populism as a cross-cutting danger, and consider how to stem its rise. This means sustaining well-paid jobs in communities hit by technological change. Ensuring that the education system is accessible to everyone, not just the fortunate few. And reforming democracy so that electoral outcomes represent the popular will. Instead of angry populism, the cardinal Stoic virtues – courage, prudence, justice and moderation – can guide a more principled politics, and ultimately shape a better world.