This study compared the diagnostic accuracy of state-of-the-art machine-learning algorithms with that of human readers across all clinically relevant types of benign and malignant pigmented skin lesions. In diagnosing pigmented skin lesions, artificial intelligence proved superior to humans.
Human readers diagnosed dermatoscopic images in randomly selected batches of 30 drawn from a test set of 1511 images. Their diagnoses were compared with those of 139 algorithms created by 77 machine-learning labs that participated in the International Skin Imaging Collaboration (ISIC) 2018 challenge and had received a training set of 10 015 images in advance.
The test database included benign lesions (moles, sun spots, senile warts, angiomas, and dermatofibromas) as well as malignant pigmented lesions, grouped into predefined disease categories: intraepithelial carcinoma, including actinic keratoses and Bowen's disease; basal cell carcinoma; benign keratinocytic lesions, including solar lentigo, seborrheic keratosis, and lichen planus-like keratosis; dermatofibroma; melanoma; melanocytic nevus; and vascular lesions.
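The seven predefined categories above can be written out as a simple lookup table. The short codes below follow the labels commonly used with the ISIC 2018 (HAM10000) data, but the exact identifiers and the benign/malignant split are assumptions drawn from the text, not from the study itself:

```python
# Seven predefined diagnostic categories from the text, keyed by short codes
# as commonly used for ISIC 2018 / HAM10000 (codes are an assumption here).
CATEGORIES = {
    "akiec": "intraepithelial carcinoma (actinic keratoses, Bowen's disease)",
    "bcc": "basal cell carcinoma",
    "bkl": "benign keratinocytic lesions (solar lentigo, seborrheic "
           "keratosis, lichen planus-like keratosis)",
    "df": "dermatofibroma",
    "mel": "melanoma",
    "nv": "melanocytic nevus",
    "vasc": "vascular lesions",
}

# Malignant vs benign split implied by the text: the carcinomas and melanoma
# are malignant, the remaining categories benign.
MALIGNANT = {"akiec", "bcc", "mel"}

def is_malignant(code: str) -> bool:
    """Return True if the category code denotes a malignant lesion."""
    return code in MALIGNANT
```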
Each participant diagnosed 30 images randomly selected from the test set of 1511 images. The result was unequivocal: while the best human readers diagnosed 18.8 of 30 cases correctly, the best machines achieved 25.4 correct diagnoses.
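The batch-based evaluation described above can be sketched as follows. The dataset layout and field names are hypothetical; the scoring simply counts exact category matches within a random batch of 30, mirroring the "correct of 30" figures reported:

```python
import random

def draw_batch(test_set, batch_size=30, seed=None):
    """Randomly select a batch of cases from the test set
    (1511 images in the study)."""
    rng = random.Random(seed)
    return rng.sample(test_set, batch_size)

def score(reader, batch):
    """Count correct diagnoses; `reader` maps an image name to a
    predicted category code."""
    return sum(1 for case in batch if reader(case["image"]) == case["label"])

# Toy usage with a synthetic test set and a naive always-"nv" reader
# (both hypothetical, for illustration only).
test_set = [
    {"image": f"img_{i}.jpg", "label": "nv" if i % 2 else "mel"}
    for i in range(1511)
]
batch = draw_batch(test_set, seed=0)
correct = score(lambda img: "nv", batch)
print(f"{correct} of {len(batch)} correct")
```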
In clinical practice, however, physicians usually examine the entire patient and not just single lesions. When humans make a diagnosis, they also take additional information into account, such as the duration of the disease, whether the patient is at high or low risk, and the patient's age.
However, the gap between human experts and the top three algorithms was significantly smaller for test-set images collected from sources not included in the training set (human underperformance of 11·4%, 95% CI 9·9–12·9, vs 3·6%, 0·8–6·3; p<0·0001). The machines were significantly less accurate in diagnosing lesions that came from centres that did not provide training images.
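One simple way to attach an uncertainty interval to such an accuracy gap is a percentile bootstrap over cases. This is an illustrative sketch on synthetic data, not the statistical procedure used in the study:

```python
import random

def bootstrap_ci(human_correct, algo_correct, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the per-case accuracy difference
    (algorithm minus human).

    `human_correct` and `algo_correct` are parallel lists of 0/1 outcomes
    for the same cases (synthetic here, not the study's data).
    """
    rng = random.Random(seed)
    n = len(human_correct)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample cases with replacement
        diffs.append(sum(algo_correct[i] - human_correct[i] for i in idx) / n)
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Synthetic example: algorithms correct ~85% of the time, humans ~63%,
# over 1511 cases (proportions chosen arbitrarily for illustration).
rng = random.Random(1)
human = [1 if rng.random() < 0.63 else 0 for _ in range(1511)]
algo = [1 if rng.random() < 0.85 else 0 for _ in range(1511)]
print(bootstrap_ci(human, algo))
```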