
With impeccable timing in the aftermath of the Facebook and Cambridge Analytica data debacle, the EU's General Data Protection Regulation (GDPR) came into force last month, with profound international repercussions since it applies to any data belonging to an EU citizen.
In addition, Google employees signed a petition (and some resigned) over Project Maven, which carried possible implications for the use of AI for lethal purposes; as a result, Google chose not to renew its contract with the Pentagon.
As these AI-centric ethical issues intertwine, Theresa May, the United Kingdom Prime Minister, announced the country's focus on applying AI to fight cancer, estimating that over 22,000 lives could be saved by 2033.
Issue 03 of AIMed Magazine focuses on the white-hot domain of medical imaging in the context of deep learning convolutional neural networks (CNN) and artificial intelligence.
The advent of computer vision and its use in medical imaging has created a juggernaut in medicine, and in radiology in particular. Much has already been accomplished in chest X-ray and MRI/CT image interpretation, and some radiologists understandably feel threatened, or at least distracted, by this technological overlord (perhaps reminiscent of the human Jeopardy! champions facing the IBM Watson supercomputer in 2011).
A Stanford-based paper on an algorithm called CheXNet demonstrated that a 121-layer CNN, trained on the largest publicly available dataset of over 100,000 chest X-rays, exceeded the average radiologist's performance at detecting pneumonia [1].
There were, however, criticisms of this study that would apply to any study directly comparing deep learning with human interpretation: the accuracy and context of the labels; suboptimal or incomplete reconciliation of the medical images with their clinical significance; and the presence of structured (versus random) noise and its effect on performance [2].
Both radiologists and data scientists should read these excellent criticisms posted by Dr. Luke Oakden-Rayner, a radiologist who is also a data scientist and who therefore examined both the X-rays and the deep learning aspects with a dual perspective that is "convolutional".
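For readers curious about what the "121-layer CNN" in such a study looks like in practice, the sketch below is a minimal, purely illustrative PyTorch version, not the CheXNet authors' code: a DenseNet-121 backbone with a 14-way sigmoid head for the multi-label findings of a ChestX-ray14-style dataset. The class name, input size, and omission of pretraining and training details are assumptions made for brevity.

```python
# Minimal sketch (not the authors' code) of a CheXNet-style classifier:
# a DenseNet-121 backbone with a 14-way sigmoid head for the multi-label
# pathologies defined in the ChestX-ray14 dataset.
import torch
import torch.nn as nn
from torchvision import models

NUM_FINDINGS = 14  # ChestX-ray14 defines 14 pathology labels


class ChestXrayClassifier(nn.Module):
    def __init__(self, num_findings: int = NUM_FINDINGS):
        super().__init__()
        # DenseNet-121 is the "121-layer CNN" referred to in the text.
        # No pretrained weights are loaded in this sketch (torchvision >= 0.13 API).
        self.backbone = models.densenet121(weights=None)
        in_features = self.backbone.classifier.in_features
        # Replace the ImageNet head with a multi-label head.
        self.backbone.classifier = nn.Linear(in_features, num_findings)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sigmoid per finding: each pathology is an independent yes/no label.
        return torch.sigmoid(self.backbone(x))


model = ChestXrayClassifier()
dummy_batch = torch.randn(2, 3, 224, 224)  # two RGB chest films at 224x224
probabilities = model(dummy_batch)         # shape: (2, 14)
```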
Other subspecialties with a medical image component can benefit similarly from the application of AI and deep learning; these include pathology, dermatology, cardiology, and ophthalmology. A recent report in Annals of Oncology on an AI-enabled dermoscopic image interpretation strategy (using Google's Inception v4 CNN architecture, trained on over 100,000 dermoscopic images) showed the CNN outperforming a group of 58 human dermatologists, detecting 95% of melanomas versus 85-90% [3].
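As a rough sketch of this kind of setup, and not the study's actual code, the example below fine-tunes an Inception v4 backbone, here taken from the timm library, for a binary melanoma-versus-benign classification of dermoscopic images; the labels, batch, and hyperparameters are placeholders.

```python
# Rough illustration (not the study's code): fine-tuning an Inception v4
# backbone for binary melanoma vs. benign classification of dermoscopic images.
# The timm library is assumed; labels and hyperparameters are placeholders.
import torch
import torch.nn as nn
import timm

# Inception v4 with a fresh 2-class head (benign vs. melanoma).
model = timm.create_model("inception_v4", pretrained=False, num_classes=2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of dermoscopic images.
images = torch.randn(4, 3, 299, 299)   # Inception-style 299x299 crops
labels = torch.tensor([0, 1, 0, 1])    # 0 = benign, 1 = melanoma
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```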
What remains utterly counterproductive and even hubristic is the “man vs machine” mantra promoted by a few AI-advocates who aim to have AI replace clinicians.
A real danger of medical image interpretation without input from clinicians is overdiagnosis. If every machine-derived diagnosis were treated without clinical context, the therapy could easily create more harm than good, thus violating the Hippocratic Oath (which none of the AI zealots who aim to displace physicians has had to take).
In short, AI methodologies (such as deep learning CNNs for medical image interpretation) demand close collaboration, not competition, between the physician and the data scientist. It behooves us to start thinking about how we can design studies in which machine and man, in a new state of convolution, can be even more accurate than either alone.
As a pediatric cardiologist, I welcome the day that AI can be my intelligent partner to render me even better at interpreting electrocardiograms and even echocardiograms. Computer vision and AI can literally provide the physician a new lens to look differently at medical images and perhaps even medicine itself.
REFERENCES:
1. Rajpurkar P, et al. CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning (https://arxiv.org/abs/1705.02315).
2. Oakden-Rayner L. (https://lukeoakdenrayner.wordpress.com/2017/12/18/the-chestxray14-dataset-problems).
3. Haenssle HA, et al. Man Against Machine: Diagnostic Performance of a Deep Learning Convolutional Neural Network for Dermoscopic Melanoma Recognition in Comparison to 58 Dermatologists. Annals of Oncology, 2018.
Dr Anthony Chang, MD, MBA, MPH, MS
Chief Intelligence and Innovation Officer and Medical Director,
The Sharon Disney Lund Medical Intelligence and Innovation Institute (MI3),
Children's Hospital of Orange County