05 August 2020
75 years after Hiroshima and Nagasaki, some experts believe the risk that a nuclear weapon will be used is as high now as at any point since the Cuban missile crisis.
The big picture: Nuclear war remains the single greatest present threat to humanity — and one that is poised to grow as emerging technologies, like much faster missiles, cyber warfare and artificial intelligence, upset an already precarious nuclear balance.
What's happening: A mix of shifting geopolitical tensions and technological change is upsetting a decades-long state of strategic stability around nuclear weapons.
- Strategic stability exists when no country has an incentive to launch a first nuclear strike, knowing that doing so would inevitably lead to a catastrophic response. It's the "mutual" in "mutually assured destruction."
- Arms control deals like the Intermediate-Range Nuclear Forces Treaty are collapsing, while faster hypersonic missiles are shrinking the already brief window of minutes available to decide how and whether to respond to a potential nuclear attack, meaning "the possibilities of a miscalculation are unfortunately higher than they have been in a long, long time," says former Energy Secretary Ernest Moniz.
- As concerning as rising tensions are between the U.S. and Russia, or between the U.S. and a more assertive China, experts worry even more about the destabilizing effect of emerging technologies like cyber warfare and AI.
- "The black box of AI in the future of war makes it almost inherently unpredictable," says P.W. Singer, a strategist at New America and author of "Burn-In" — and unpredictability is anathema to a nuclear balance held in place by predictability.
Cyber warfare can directly increase the risk of nuclear conflict if it is used to disrupt command and control systems.
- But a greater danger may come from cyber conflict playing out in a space occupied by both military and civilian users, which risks eroding the bright line between nuclear and conventional war.
- The U.S. and Israel are widely understood to have targeted Iran's nuclear program with the Stuxnet computer worm, hackers in Russia have probed U.S. nuclear plants, and Chinese hackers have conducted cyber espionage on the U.S. military.
- U.S. policymakers have discussed whether to threaten a nuclear response to a wide-scale cyberattack on power infrastructure, which may serve as a deterrent, but also opens up a new and unpredictable escalation pathway to nuclear conflict.
AI is only in its infancy, but depending on how it develops, it could utterly disrupt the nuclear balance.
- Even in its nascent stages, AI is likely to make offensive cyberhacking of all kinds more effective, increasing the risk that a cyberconflict could turn nuclear.
- AI may eventually help war planners more effectively target an enemy's nuclear weapons. That would make an opponent more vulnerable — and potentially more willing to use nuclear weapons first out of a fear they might lose them.
- As Singer notes, "just like any human, AI can suffer from various biases" — especially since there is no real-world nuclear war data to train it on.
- But unlike a human, the smarter AI gets, the harder it is for humans to understand how it works, and whether it's making a mistake in a realm where there is no room for mistakes.
Be smart: As analysts from RAND wrote in a 2018 report, "AI may be strategically destabilizing not because it works too well but because it works just well enough to feed uncertainty." Whether or not an AI system could provide a decisive advantage in a nuclear standoff, if either the system's user or that country's opponent believes it can do so, the result could be catastrophic.
- Yes, but: Singer also offers a more hopeful scenario where effective AI could reduce the risk of human miscalculation by "offering far greater information in scale and detail than was possible in the past."
The bottom line: The riskiest period of the Cold War was its earliest stages, when military and political leaders didn't yet fully understand the nature of what Hiroshima had demonstrated. Emerging technologies like AI threaten to plunge us back into that uncertainty.
Transcripts show George Floyd told police "I can't breathe" over 20 times
Newly released transcripts of bodycam footage from the Minneapolis Police Department show that George Floyd told officers he could not breathe more than 20 times in the moments leading up to his death.
Why it matters: Floyd's killing sparked a national wave of Black Lives Matter protests and an ongoing reckoning over systemic racism in the United States. The transcripts "offer one of the most thorough and dramatic accounts" of the moments before Floyd's death, The New York Times writes.
The state of play: The transcripts were released as former officer Thomas Lane seeks to have the charges that he aided in Floyd's death thrown out in court, per the Times. He is one of four officers who have been charged.
- The filings also include a 60-page transcript of an interview with Lane. He said he "felt maybe that something was going on" when asked if he believed that Floyd was having a medical emergency at the time.
What the transcripts say:
- Floyd told the officers he was claustrophobic as they tried to get him into the squad car.
- The transcripts also show Floyd saying, "Momma, I love you. Tell my kids I love them. I'm dead."
- Former officer Derek Chauvin, who had his knee on Floyd's neck for over eight minutes, told Floyd, "Then stop talking, stop yelling, it takes a heck of a lot of oxygen to talk."
Read the transcripts via DocumentCloud.