
How to stop the end of civilisation

Since 1945, humanity has been discovering mortal perils at a troubling rate. The Second World War, of course, was ended partly by the first offensive use of the nuclear bomb. Following the war, the US and the USSR amassed vast stockpiles of ever more powerful warheads, with the Soviets also embarking on a massive biological weapons programme.

That programme, fortunately, did not survive the death of the Soviet regime. I once spoke to the buccaneering American diplomat who, after the collapse of the USSR, was the first Westerner to discover the regime’s enormous vats of anthrax. ‘It really was an evil empire,’ he told me, still troubled by what he had seen.

Yet still the perils became more numerous. In the 21st century, gene editing technology has allowed scientists to create more virulent or deadly versions of existing pathogens, even if only to study them. The danger is far from hypothetical. You need not be a virologist to reasonably suspect that the Covid-19 pandemic emerged from the Wuhan laboratory that studied coronaviruses.

But Covid is far from the worst of the dangers that might emerge, accidentally or otherwise, from the dozens of laboratories that work on the most dangerous pathogens. Last December, a group of scientists disclosed to the public a theory deemed so dangerous it had been kept strictly secret for years. ‘Mirror life’ is what these researchers call it: pathogens that, through skilful genetic engineering, are structurally flipped, making even the common cold, at least theoretically, a deadly threat to immune systems that have only ever encountered the un-mirrored version.

It doesn’t help that AI, if it continues to advance in capability, could be used to accelerate scientific research of every kind, the dangerous included. GPT-5 is nowhere near able to rustle up a virus, but future AI models might be. A powerful AI could control swarms of drones, take down power grids, or even, some researchers have warned, gain control of nuclear weapons. This AI might be following the orders of a human, or it might instead be following the logic of any living creature that, having goals of some sort, sees its activity threatened by other life-forms. If it can, it will sideline those other life-forms – or even wipe them out. Such are the darkest fears of two of the three so-called ‘godfathers of AI’.

When considering perils such as those above, researchers like to categorise them as catastrophic risks (things that might kill lots of us) or existential risks (things that might kill all of us). Since 2022, I have spent what might well be an unhealthy amount of time studying these perils. A striking characteristic of catastrophic and existential risks alike is that they often seem to be the flipside of phenomena that have brought us great benefits. Asteroids, it is theorised, seeded the ancient Earth with some of the materials required for life; the planet’s magmatic interior brings us heat and minerals as well as eruptions.

A similar principle is true of the perils invented by humans. Our ability to split the atom has brought with it the world’s best source of reliable, clean energy; it is only our foolishness that has restricted its use. Likewise, genetically altered pathogens are the price of our burgeoning ability to rewrite DNA – with all the attendant benefits to medicine and human wellbeing.

The same, of course, is true of AI. It is intelligence, more than any other quality, that has enabled our species to so aggressively improve its lot over the past few centuries. It took billions of years for evolution, constrained by all manner of biological bottlenecks, to produce the human brain; we might now be on the brink of developing intelligence that surpasses human performance across multiple domains for long periods at a time. 

The researchers most concerned about the risks posed by AI are often those most excited by its potential payoffs. GPT-5 appears not to be the final step before runaway superintelligence; perhaps the investors who have piled into the existing AI paradigm will lose their shirts. But the world has now awoken to the possibilities of cognition superior to our own.

Elsewhere in my work, I explore the means by which Britain might reify Anglofuturism: a version of our country in which our decline is reversed. I’ve come to see work on existential risk as a prerequisite of realising any ambitious futuristic vision. We need technology if we’re going to raise ourselves further from the mud. But we also need to ensure that this technology doesn’t backfire. We can think of these endeavours as a form of civilisational insurance. We don’t drive without car insurance; nor should we work on dangerous viruses without doing what we can to prevent their spread.

According to some proposals, we could insure ourselves even in the narrow accounting sense: not by enacting a dead man’s payout in the event we extinguish ourselves – glad though the surviving cockroaches might be of a windfall – but by forcing developers to pay out if their AI models cause near-term harm. Developers objected to the idea, and it narrowly failed to make it into Californian law last year, but it is suggestive of the kind of canny legislative ingenuity that must supplement technical innovation. Those mortal perils are coming fast, and there’s no guarantee that we’ve discovered them all.

‘The Anti-Catastrophe League’ by Tom Ough is published by HarperCollins.


Tom Ough is senior editor at UnHerd, the author of ‘The Anti-Catastrophe League’ and the co-host of the Anglofuturism podcast.

Columns are the author’s own opinion and do not necessarily reflect the views of CapX.


