Source:, November 2019

Response: In this technological age, with the explosion of interest in and applications of Artificial Intelligence, it is easy to accept the output of a technology-based test – such as a smartphone app designed to identify skin cancer – without thinking too much about it. In reality, technology is only as good as the way it has been developed, tested and validated. In particular, AI algorithms are prone to poor “generalisation”: their performance drops when they are presented with data they have not seen before. This is especially problematic in the medical field, above all where AI is being developed to direct a patient’s diagnosis or care. Inappropriate diagnoses or advice can lead to false reassurance, heightened concern and pressure on NHS services, or worse. It is concerning, therefore, that a large number of smartphone apps that assess skin lesions – including some that estimate the probability of malignancy – are available without ever having been assessed for diagnostic accuracy.
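To make the generalisation problem concrete, the toy sketch below (entirely hypothetical; it has nothing to do with DERM itself) trains a simple nearest-centroid classifier on one synthetic data distribution and then tests it both on data drawn the same way and on "shifted" data, standing in for images from a different camera, clinic or patient population. The accuracy drop on the shifted set is the failure mode described above.

```python
import random

random.seed(0)

# Hypothetical illustration: a classifier that performs well on data like
# its training set can degrade sharply when the test distribution shifts.

def make_data(n, benign_mean, malignant_mean, spread):
    """Synthetic 1-D 'lesion feature' values with class labels (0/1)."""
    data = []
    for _ in range(n):
        data.append((random.gauss(benign_mean, spread), 0))     # benign
        data.append((random.gauss(malignant_mean, spread), 1))  # malignant
    return data

def train_centroids(data):
    """Learn a mean feature value (centroid) per class from training data."""
    by_class = {0: [], 1: []}
    for x, y in data:
        by_class[y].append(x)
    return {y: sum(xs) / len(xs) for y, xs in by_class.items()}

def accuracy(centroids, data):
    """Classify each point by its nearest centroid; return fraction correct."""
    correct = 0
    for x, y in data:
        pred = min(centroids, key=lambda c: abs(x - centroids[c]))
        correct += (pred == y)
    return correct / len(data)

train = make_data(500, benign_mean=0.0, malignant_mean=2.0, spread=0.5)
centroids = train_centroids(train)

# In-distribution test set: drawn exactly like the training data.
test_same = make_data(500, 0.0, 2.0, 0.5)
# Shifted test set: different means and more noise, mimicking unseen conditions.
test_shifted = make_data(500, 1.0, 2.2, 0.8)

acc_same = accuracy(centroids, test_same)
acc_shifted = accuracy(centroids, test_shifted)
print(f"in-distribution accuracy: {acc_same:.2f}")
print(f"shifted-data accuracy:    {acc_shifted:.2f}")
```

The exact numbers depend on the random seed, but the pattern is robust: accuracy on the shifted set falls well below accuracy on data resembling the training set, which is why clinical validation on real, prospectively collected cases matters so much.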

Skin Analytics has developed an AI-based algorithm, Deep Ensemble for Recognition of Malignancy (DERM), for use as a decision-support tool for healthcare providers. DERM estimates the likelihood of skin cancer from dermoscopic images of skin lesions. It was developed using deep learning techniques that identify and assess features of lesions associated with melanoma, trained on over 7,000 archived dermoscopic images. On these archived images, it identified melanoma with accuracy similar to that of specialist physicians. However, to prove that the algorithm could be used in a real-life clinical setting, Skin Analytics set out to conduct a clinical validation study. What are the main findings?
