The debate over extremely rare side effects from the Johnson & Johnson COVID-19 vaccine highlights questions about how we should account for the unintended consequences of new technology.
Why it matters: AI algorithms, facial recognition, self-driving cars — the future will be full of technologies that will bring new benefits shadowed by new harms. But unlike in medicine, there isn't a clear framework for how to strike a balance between the two.
- EU officials framed their newly proposed AI rules as part of a "risk-based approach" to regulating new technology, one that seeks to balance civil rights against the need to promote innovation.
- The EU's draft rules came two days after the Federal Trade Commission warned it might intervene against companies using what it identifies as biased AI.
The big picture: The moves come amid what the New York Times called on Monday a "global tipping point for reining in tech," as governments around the world move to "limit the power of tech companies with an urgency and breadth that no single industry had experienced before."
Yes, but: The proposed regulations come only after the tech industry and many of its most controversial products have already ensconced themselves in the global economy and daily life — a marked contrast to how we regulate many medical interventions like the COVID-19 vaccine.
- It took just a few days to design the first mRNA vaccines against COVID-19, but nearly a full year of clinical trials proving their safety and effectiveness was required before the first shots could be deployed.
- As the pause of the J&J vaccine demonstrates, even just a handful of reported cases of possible side effects is enough to temporarily halt the distribution of a vaccine out of what the FDA's Peter Marks called an "abundance of caution."
Be smart: For better or for worse, an "abundance of caution" is not the attitude regulators or the public have generally taken toward the innovations of the broader tech industry.
- Federal regulators are investigating the fatal crash of a Tesla vehicle over the weekend in Texas that had no one behind the wheel, one of numerous recent accidents in which drivers were or may have been using the company's quasi-self-driving Autopilot feature.
- Tesla CEO Elon Musk claimed on Twitter that data logs showed that Autopilot hadn't been enabled in the Texas accident.
- Last week Twitter announced it would examine the machine learning algorithms that help determine its feed for "harmful side effects" like gender or racial bias — a move that comes 15 years after the social network was founded.
- Even in medicine, with its Hippocratic Oath of "do no harm," doctors are raising concerns that AI tools are being rolled out to hospitals with too little proof of their effectiveness and too little oversight of their potential harms.
The big question: What would the world look like if we moved as carefully with all innovations as we do with something like vaccines?
- Safer, perhaps, as we'd examine the potential harms that could come with new inventions like social media before they wired the entire world.
- But it would also likely be far less innovative. As the debate over the J&J vaccine demonstrates, an abundance of caution can expose us to indirect harms, as we lose out on new advances that might promote growth or protect us from unanticipated threats.
- A world where social media had to go through the equivalent of an institutional review board might be one in which Darnella Frazier would never have been able to post the video of George Floyd's killing — and Derek Chauvin might never have been convicted.
What's next: The stakes of how we handle the side effects of new technologies will only grow in the future.
- Martin Rees, the U.K.'s Astronomer Royal and an existential risk expert, warned this week that "our globally-linked society is vulnerable to the unintended consequences of powerful new technologies — not only nuclear, but (even more) biotech, cyber, advanced AI, space technology."
- At the same time, Rees acknowledged to me, "We depend hugely on the benefits of these technologies as well," which puts us in the uncomfortable position of trying to thread a needle of innovation where both too little and too much regulation could be disastrous.
The bottom line: This dilemma will define our future.