A broad new report from the World Health Organization (WHO) lays out ethical principles for the use of artificial intelligence in medicine.
Why it matters: Health is one of the most promising areas of expansion for AI, and the pandemic only accelerated the adoption of machine learning tools. But adding algorithms to health care will require AI to follow the most basic rule of human medicine: "Do no harm" — and that won't be simple.
Driving the news: After nearly two years of consultations by international experts, the WHO report makes the case that the use of AI in medicine offers great promise for both rich and poorer countries, but "only if ethics and human rights are put at the heart of its design, deployment and use," the authors write.
- AI is already being used in medicine to detect tumors in radiological scans, predict how outbreaks will unfold and analyze doctors' case notes and patient conversations.
- In the future, it could help speed the process of drug discovery, give real-time diagnosis from better health wearables and even act as "virtual nurses" to remote patients.
Between the lines: The power of AI in health care is also its peril — the ability to rapidly process vast quantities of data and identify meaningful and actionable patterns far faster than human experts could.
- When it works, AI holds the promise of helping human clinicians provide better and cheaper care — as in a project that uses AI to rapidly scan for cervical cancer in under-resourced parts of Africa and India.
- But if something goes wrong, a mistake in a single algorithm risks doing far more widespread harm than any single doctor might do. In a recent study, an algorithm used to identify cases of sepsis was found to miss two-thirds of cases while frequently issuing false alarms.
The big picture: To get the most out of AI in medicine while minimizing harm, the WHO report lays out a kind of "Hippocratic Oath" for artificial practitioners of the medical arts.
- The principles include that humans — both clinicians and patients — remain the ultimate decision-makers in medicine; that AI in health first "does no harm"; and that any recommendations or actions by AI remain transparent and explainable.
- AI technologies should be clearly accountable for patient outcomes, engineered to be usable by the widest possible population and designed to ensure they actually work in real-world conditions — not just in trials.
The catch: As with the modern Hippocratic Oath — all of 340 words — outlining the principles of responsible AI use in health is a lot easier than putting them into practice.
- The more complex algorithms become, the harder it is to make their actions explainable to laypeople. Meanwhile, AI models remain dogged by deeply embedded bias, too often rooted in flawed data.
The bottom line: Few professional relationships require more trust than that between a clinician and their patient, and medical AI still needs to earn that trust.