The diagnostic accuracy of an artificial intelligence algorithm in recognizing melanomas in camera-based dermoscopic images of suspicious pigmented skin lesions was similar to that of clinician specialists, according to study results published in JAMA Network Open.1
In 2016, the United States Preventive Services Task Force (USPSTF) recommended against widespread clinician screening for skin cancer by visual inspection due to insufficient evidence regarding the benefits and risks of such an approach.2 More recently, a series of Cochrane reviews of diagnostic methods used in the evaluation of skin lesions, including visual assessment with or without skin surface microscopy (ie, dermoscopy), reflectance confocal microscopy, teledermatology, and computer-based or smartphone applications, found only limited data for the latter 3 approaches. Other potential barriers to widespread melanoma screening include the expense of reflectance confocal microscopic equipment and varying clinician experience.1
In this prospective, multicenter, single-arm, masked study, an artificial intelligence algorithm called Deep Ensemble for Recognition of Malignancy was used to categorize photographs of suspicious pigmented skin lesions and control lesions taken using 2 types of smartphones and a digital camera, all of which had a dermoscopic attachment. These results were compared with the assessments by clinician specialists.
Of the 514 patients enrolled in the study, nearly all were White. The image set comprised 551 images of biopsied lesions and 999 images of control lesions, for a total of 1550 images. Pathologic analyses were performed on specimens of the biopsied lesions; 22.7% of these lesions were classified as melanoma.
The artificial intelligence algorithm, previously trained using published dermatoscopic images, was further refined for each of the 3 cameras using a subset of the images collected from each camera as part of this study, although no images from the same patients were used in both the training and testing sets. All lesions included in the image data set were photographed by all 3 digital cameras.
The algorithm rated each image on a scale of 0 (certainly benign) to 1 (certainly malignant), whereas clinicians used a scale of 1 (unlikely to be melanoma) to 4 (highly likely to be melanoma). Area under the receiver operating characteristic curve (AUROC) was used as 1 of the indices of accuracy for melanoma assessment based on a comparison with results for biopsied lesions and all lesions.
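For readers unfamiliar with the metric, AUROC can be understood as the probability that a randomly chosen malignant lesion receives a higher algorithm score than a randomly chosen benign one. The sketch below illustrates this with a small set of hypothetical scores on the study's 0 (certainly benign) to 1 (certainly malignant) scale; the scores and labels are invented for illustration and do not come from the study data.

```python
def auroc(scores, labels):
    """Compute AUROC as the probability that a randomly chosen
    positive (melanoma) case is scored above a randomly chosen
    negative (benign) case, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical algorithm outputs on the 0 (benign) to 1 (malignant) scale
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]   # 1 = melanoma on biopsy
print(round(auroc(scores, labels), 3))  # → 0.889
```

An AUROC of 1.0 would mean the algorithm ranks every melanoma above every benign lesion; 0.5 is no better than chance.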
This article originally appeared on Oncology Nurse Advisor