14 November 2020
For all our fears about Terminator-style killer robots, the focus of AI in the U.S. military is likely to be on augmenting humans, not replacing them.
Why it matters: AI has been described as the "third revolution" in warfare, after gunpowder and nuclear weapons. But every revolution carries risks, and even an AI strategy that focuses on assisting human warfighters will carry enormous operational and ethical challenges.
Driving the news: On Tuesday, Armenia accepted a cease-fire with neighboring Azerbaijan, bringing a hoped-for end to their brief war over the disputed enclave of Nagorno-Karabakh.
- Azerbaijan dominated the conflict in part thanks to the ability of its fleets of cheap, armed drones to destroy Armenia's tanks, in what military analyst Malcolm Davis called a "potential game-changer for land warfare."
An even bigger game-changer would be fully autonomous armed drones, but for the foreseeable future, fears of "slaughterbots" that could be used to kill with impunity appear overstated, says Michael Horowitz, a political scientist at the University of Pennsylvania.
- "The overwhelming majority of military investments in AI will not be about lethal autonomous weapons, and indeed none of them may be," says Horowitz.
- A report released last month by Georgetown's Center for Security and Emerging Technology found defense research into AI is focused "not on displacing humans but assisting them in ways that adapt to how humans think and process information," said Margarita Konaev, the report's co-author, at an event earlier this week.
Details: A version of that future was on display at an event held in September by the Air Force to demonstrate its Advanced Battle Management System (ABMS), which can rapidly process data in battle and use it to guide warfighters in the field.
- Even though they have extremely expensive hardware at their fingertips, servicemen and -women in a firefight mostly transmit information manually, often through chains of radio transmissions. But ABMS aims to use cloud computing and machine learning to speed up that process, augmenting the abilities of each warfighter.
- At the September demo, Anduril — a young Silicon Valley startup backed by Peter Thiel and co-founded by Palmer Luckey that focuses on defense — showed off its Lattice software system, which processes sensor data through machine-learning algorithms to automatically identify and track targets like an incoming cruise missile.
- Using the company's virtual reality interface, an airman in the demo only had to designate the target as hostile and pair it with a weapons system to destroy it, closing what the military calls a "kill chain."
What they're saying: "At the core, our view is that the military has struggled with the question of, how do I know what’s happening in the world and how do we process it," says Brian Schimpf, Anduril's CEO.
- What Anduril and other companies in the sector are aiming to do is make AI work for defense in much the same way it currently works for other industries: speeding up information processing and creating what amounts to a more effective human-machine hybrid workforce.
Yes, but: Even though people still decide whether or not to pull the trigger, experts worry about the accuracy of the algorithms that are advising that decision.
- "If like Clausewitz you believe in the fog of war, how could you ever have all the data that would actually allow you to simulate what the battlefield environment looks like in a way that would give you confidence to use the algorithm?" says Horowitz.
- Just as it's not fully clear who would be responsible for an accident involving a mostly self-driving car — the human inside or the technology — "who owns the consequences if something goes wrong on the battlefield?" asks P.W. Singer, a senior fellow at New America.
Be smart: The strength of AI is also its vulnerability: speed.
- It's bad enough when malfunctioning trading algorithms cause a stock market flash crash. But if faulty AI systems encourage the military to move too quickly on the battlefield, the result could be civilian casualties, an international incident — or even a war.
- At the same time, the Armenia-Azerbaijan war underscores the fact that warfare never stands still, and rivals like China and Russia are moving ahead with their own AI-enabled defense systems.
The bottom line: Two questions should always be asked whenever AI spreads to a new industry: Does it work and should it work? In war, the stakes of those questions can't get any higher.
Transcripts show George Floyd told police "I can't breathe" over 20 times
Newly released transcripts of bodycam footage from the Minneapolis Police Department show that George Floyd told officers he could not breathe more than 20 times in the moments leading up to his death.
Why it matters: Floyd's killing sparked a national wave of Black Lives Matter protests and an ongoing reckoning over systemic racism in the United States. The transcripts "offer one of the most thorough and dramatic accounts" before Floyd's death, The New York Times writes.
The state of play: The transcripts were released as former officer Thomas Lane seeks to have the charges that he aided in Floyd's death thrown out in court, per the Times. He is one of four officers who have been charged.
- The filings also include a 60-page transcript of an interview with Lane. He said he "felt maybe that something was going on" when asked if he believed that Floyd was having a medical emergency at the time.
What the transcripts say:
- Floyd told the officers he was claustrophobic as they tried to get him into the squad car.
- The transcripts also show Floyd saying, "Momma, I love you. Tell my kids I love them. I'm dead."
- Former officer Derek Chauvin, who had his knee on Floyd's neck for over eight minutes, told Floyd, "Then stop talking, stop yelling, it takes a heck of a lot of oxygen to talk."
Read the transcripts via DocumentCloud.