For years, physicians have had to rely on visual inspection to identify suspicious pigmented lesions (SPLs), which can be an indication of skin cancer. But quickly finding and prioritizing SPLs is difficult, due to the high volume of pigmented lesions that often need to be evaluated for potential biopsies.

However, researchers from MIT and elsewhere have now devised a new artificial intelligence pipeline that uses deep convolutional neural networks (DCNNs) to analyze SPLs in wide-field photographs, the kind produced by most smartphones and personal cameras.

The program applies DCNNs to wide-field photographs of large areas of patients’ bodies to quickly and effectively identify and screen for early-stage melanoma, according to Luis R. Soenksen, a postdoc and medical device expert currently serving as MIT’s first Venture Builder in Artificial Intelligence and Healthcare.

A wide-field image, acquired with a smartphone camera, shows large skin sections from a patient in a primary-care setting. An automated system detects, extracts, and analyzes all pigmented skin lesions observable in the wide-field image. A pre-trained deep convolutional neural network (DCNN) determines the suspiciousness of individual pigmented lesions and marks them (yellow = consider further inspection, red = requires further inspection or referral to dermatologist). Extracted features are used to further assess pigmented lesions and to display results in a heatmap format.
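The detect-extract-score-triage flow described above can be sketched in a few lines. Everything here is a simplified stand-in: the sliding-window detector, the hand-written `suspiciousness` score, and the thresholds are illustrative placeholders for the trained DCNN the researchers actually use, not their implementation.

```python
import numpy as np

PATCH = 8  # size of candidate crops (placeholder value)

def detect_candidates(image, dark_thresh=0.35):
    """Placeholder detector: slide a PATCH x PATCH window over a
    grayscale wide-field image and keep windows whose mean intensity
    falls below dark_thresh (pigmented lesions are darker than skin)."""
    h, w = image.shape
    boxes = []
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            if image[y:y + PATCH, x:x + PATCH].mean() < dark_thresh:
                boxes.append((y, x))
    return boxes

def suspiciousness(crop):
    """Stand-in for the pre-trained DCNN classifier: darker, more
    variable crops score higher. A real system would run the trained
    network on each extracted crop instead."""
    darkness = 1.0 - crop.mean()
    irregularity = crop.std()
    return float(np.clip(0.7 * darkness + 0.3 * irregularity, 0.0, 1.0))

def triage(image, yellow=0.5, red=0.8):
    """Score every detected lesion and map it to the article's colour
    scheme (yellow = consider further inspection, red = requires
    further inspection or referral)."""
    results = []
    for (y, x) in detect_candidates(image):
        score = suspiciousness(image[y:y + PATCH, x:x + PATCH])
        label = "red" if score >= red else "yellow" if score >= yellow else "clear"
        results.append({"box": (y, x), "score": score, "label": label})
    return results

# Toy wide-field "image": light skin with one dark square lesion.
img = np.full((32, 32), 0.9)
img[8:16, 8:16] = 0.1
flagged = triage(img)
```

On this toy input the single dark patch is detected and labeled "yellow"; the per-lesion scores are also what a heatmap display would be built from.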

Soenksen, first author of the recent paper “Using Deep Learning for Dermatologist-level Detection of Suspicious Pigmented Skin Lesions from Wide-field Images,” published in Science Translational Medicine, explains: “Early detection of SPLs can save lives; however, the current capacity of medical systems to provide comprehensive skin screenings at scale is still lacking.”

The paper describes the development of an SPL analysis system that uses DCNNs to identify and classify skin lesions requiring further investigation more quickly and efficiently, enabling screenings that can be done during routine primary care visits, or even by patients themselves.

The researchers trained the system using 20,388 wide-field images from 133 patients at the Hospital Gregorio Marañón in Madrid, as well as publicly available images. The images were taken with a variety of ordinary cameras that are readily available to consumers. Dermatologists working with the researchers visually classified the lesions in the images for comparison. They found that the system achieved more than 90.3 percent sensitivity in distinguishing SPLs from non-suspicious lesions, skin, and complex backgrounds, while avoiding the need for cumbersome and time-consuming individual lesion imaging.
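For readers unfamiliar with the metric: sensitivity (recall) is the fraction of truly suspicious lesions the system flags. A minimal sketch, using made-up labels rather than the study's data:

```python
def sensitivity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN): of the lesions dermatologists
    labeled suspicious (1), what fraction did the system also flag?"""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)

# Illustrative labels only (1 = suspicious, 0 = non-suspicious):
truth = [1, 1, 1, 0, 0, 1]
preds = [1, 1, 0, 0, 1, 1]
score = sensitivity(truth, preds)  # 3 of 4 suspicious lesions caught
```

Note that sensitivity says nothing about false positives; that is why the triage colors distinguish "consider further inspection" from "requires referral."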

Additionally, the paper presents a new method to extract intra-patient lesion saliency (ugly duckling criteria, or the comparison of the lesions on the skin of one individual that stand out from the rest) on the basis of DCNN features from detected lesions.
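The ugly-duckling idea, scoring each lesion by how far it sits from the rest of the same patient's lesions in feature space, can be sketched as below. The toy 2-D vectors stand in for the DCNN features the paper extracts; the distance-to-centroid scoring is one simple way to realize the idea, not necessarily the authors' exact formulation.

```python
import numpy as np

def ugly_duckling_scores(features):
    """Intra-patient saliency sketch: score each lesion by the
    Euclidean distance of its feature vector from the patient's mean
    lesion feature, normalized so the most atypical lesion scores 1.0.
    (The paper derives these features from the DCNN; here they are
    toy 2-D vectors.)"""
    feats = np.asarray(features, dtype=float)
    centroid = feats.mean(axis=0)            # the patient's "typical" lesion
    dists = np.linalg.norm(feats - centroid, axis=1)
    return dists / dists.max() if dists.max() > 0 else dists

# Three similar lesions and one outlier for a hypothetical patient:
scores = ugly_duckling_scores([[1.0, 1.0], [1.1, 1.0], [0.9, 1.0], [5.0, 5.0]])
```

Here the fourth lesion (the "ugly duckling") receives the highest score, which is exactly the lesion a dermatologist's intra-patient comparison would single out.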

“Our research suggests that systems leveraging computer vision and deep neural networks, quantifying such common signs, can achieve comparable accuracy to expert dermatologists,” Soenksen explains. “We hope our research revitalizes the desire to deliver more efficient dermatological screenings in primary care settings to drive adequate referrals.” Doing so would allow for more rapid and accurate assessments of SPLs and could lead to earlier treatment of melanoma, according to the researchers.

Senior author of the paper, Martha J. Gray, explains how the project developed: “This work originated as a new project developed by fellows (five of the co-authors) in the MIT Catalyst program, a program designed to nucleate projects that solve pressing clinical needs. This work exemplifies the vision of HST/IMES (in which tradition Catalyst was founded) of leveraging science to advance human health.”

The work was supported by the Abdul Latif Jameel Clinic for Machine Learning in Health and by the Consejería de Educación, Juventud y Deportes de la Comunidad de Madrid through the Madrid-MIT M+Visión Consortium.