05 June 2021
A future of escalating danger from extreme risks demands a longer-term approach to handling these threats.
The big picture: The world was caught off guard by COVID-19, and millions of people have paid the price. But the pandemic provides an opportunity to rethink the approach to the growing threat from low-probability but high-consequence risks — including the ones we may be inadvertently causing ourselves.
Driving the news: Earlier this week, a nonprofit in the U.K. called the Centre for Long-Term Resilience put out a report that should be required reading for leaders around the world.
- Spearheaded by Toby Ord — an existential risk scholar at the University of Oxford — "Future Proof" makes the case that "we are currently living with an unsustainably high level of extreme risk."
- "With the continued acceleration of technology, and without serious efforts to boost our resilience to these risks, there is strong reason to believe the risks will only continue to grow," as the authors write.
Between the lines: "Future Proof" focuses on two chief areas of concern: artificial intelligence and biosecurity.
- While the longer-term threat of artificial intelligence reaching a level of superintelligence beyond humans is an existential risk in itself, there are nearer-term dangers too: the ransomware and other cyberattacks plaguing the world could be supercharged by AI tools, and the development of lethal autonomous weapons threatens to make war far more chaotic and destructive.
- Natural pandemics are bad enough, but we're headed toward a world in which thousands of people will have access to technologies that can enhance existing viruses or synthesize entirely new ones. That's far more dangerous.
It's far from clear how the world can control these human-made extreme risks.
- Nuclear weapons are easy by comparison — bombs are difficult to make and even harder for a nation to use without guaranteeing its own destruction, which is largely why, 75 years after Hiroshima, fewer than 10 countries have developed a nuclear arsenal.
- But both biotech and AI are dual-use technologies, meaning they can be wielded for both beneficial and malign purposes. That makes them far more difficult to control than nuclear weapons, especially since some of the most extreme risks — like, say, a dangerous virus leaking out of a lab — could be accidental, not purposeful.
- Even though the risks from biotech and AI are growing, there is little in the way of international agreements to manage them. The UN office charged with implementing the treaty banning bioweapons is staffed by all of three people, while efforts to establish global norms around AI research — much of which, unlike the nuclear sphere, is carried out by private firms — have been mostly unsuccessful.
What to watch: The "Future Proof" report recommends a range of actions, from focusing on the development of technologies like metagenomic sequencing that can rapidly identify new pathogens to having nations set aside a percentage of GDP for extreme risk preparation, just as NATO members are required to spend on defense.
- A global treaty on risks to the future of humanity, modeled on earlier efforts around nuclear weapons and climate change, could at least raise the international profile of extreme risks.
- Most importantly, the report calls for the creation of "chief risk officers" — officials empowered to examine government policy with an eye toward what could go very wrong.
The bottom line: We are entering a frightening time for humanity. Ord estimates the chance that we will experience an existential catastrophe over the next 100 years is 1 in 6, the equivalent of playing Russian roulette with our future.
- But if our actions have put the bullet in that gun, it's also in our power to take it out.
Transcripts show George Floyd told police "I can't breathe" over 20 times
Newly released transcripts of bodycam footage from the Minneapolis Police Department show that George Floyd told officers he could not breathe more than 20 times in the moments leading up to his death.
Why it matters: Floyd's killing sparked a national wave of Black Lives Matter protests and an ongoing reckoning over systemic racism in the United States. The transcripts "offer one of the most thorough and dramatic accounts" of the moments before Floyd's death, The New York Times writes.
The state of play: The transcripts were released as former officer Thomas Lane seeks to have the charges that he aided in Floyd's death thrown out in court, per the Times. He is one of four officers who have been charged.
- The filings also include a 60-page transcript of an interview with Lane. He said he "felt maybe that something was going on" when asked if he believed that Floyd was having a medical emergency at the time.
What the transcripts say:
- Floyd told the officers he was claustrophobic as they tried to get him into the squad car.
- The transcripts also show Floyd saying, "Momma, I love you. Tell my kids I love them. I'm dead."
- Former officer Derek Chauvin, who had his knee on Floyd's neck for over eight minutes, told Floyd, "Then stop talking, stop yelling, it takes a heck of a lot of oxygen to talk."
Read the transcripts via DocumentCloud.