Stanford University’s researchers have announced that they are training deep learning algorithms to diagnose skin cancer.
The use of technology in the medical industry has started to fascinate me, and this recent case study from the United States shows how MedTech can revolutionise not only the industry itself but also save lives.
I don’t mean to scare you, but imagine making a doctor’s appointment just to find out whether a mole you have discovered could be cancerous. Now imagine you live miles from the nearest doctor’s surgery and can’t take any time off work to see a specialist. A truly frightening scenario. Thanks to smartphone technology, getting a diagnosis via a scan could be lifesaving.
Researchers at Stanford University in California created an artificially intelligent diagnosis algorithm for skin cancer. They built a database of nearly 130,000 skin disease images and trained their algorithm to visually diagnose potential cancer. From the very first test phase, it performed with great accuracy.
Stanford graduate student Andre Esteva noted on the university website that the team made a very powerful machine-learning algorithm that learns from data. “Instead of writing into computer code exactly what to look for, you let the algorithm do the work,” he explained.
So how was the algorithm trained, exactly? In a nutshell, it was fed each image as raw pixels with an associated disease label. Instead of building an entire algorithm from scratch, the researchers started with an algorithm developed by Google that had already been trained on 1.28 million images.
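This approach is known as transfer learning: reuse a network already trained on a huge general image set (in the study, Google’s Inception architecture) and retrain it to map raw pixels to disease labels. Here is a minimal pure-Python sketch of the pattern, with a toy stand-in for the frozen pretrained backbone and made-up lesion data; none of the names or numbers below come from the team’s actual code.

```python
import math
import random

random.seed(0)

def frozen_features(pixels):
    """Stand-in for the pretrained backbone. In the real system this role
    is played by Google's network, trained on 1.28 million images; here we
    just reduce the raw pixels to a few simple statistics."""
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return [mean, var, max(pixels), 1.0]  # last entry acts as a bias term

# Only this new classification head is trained on the disease labels.
weights = [0.0] * 4

def predict(pixels):
    z = sum(w * f for w, f in zip(weights, frozen_features(pixels)))
    return 1 / (1 + math.exp(-z))  # probability the lesion is malignant

# Fake "raw pixels with an associated disease label" training data:
# malignant lesions drawn darker and more varied than benign ones.
data = [([random.gauss(0.7, 0.2) for _ in range(16)], 1) for _ in range(50)]
data += [([random.gauss(0.3, 0.1) for _ in range(16)], 0) for _ in range(50)]

# Plain stochastic gradient descent on the logistic loss; the backbone's
# "weights" never change, only the head's do.
lr = 0.5
for _ in range(200):
    for pixels, label in data:
        err = predict(pixels) - label
        feats = frozen_features(pixels)
        for i in range(4):
            weights[i] -= lr * err * feats[i]
```

The point of the pattern is the split: the expensive general-purpose feature learning is done once on a large dataset, and only a small head needs to learn the medical labels.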
However, no large dataset of skin cancer images existed, so the researchers had to create their own. Collecting the images with the help of the university’s medical school proved a challenging and arduous task, as many of the labels were written in numerous languages.
After the long process of accumulating around 130,000 images of skin lesions representing over 2,000 different diseases, the algorithm was tested.
The algorithm’s performance was measured through the creation of a sensitivity-specificity curve, where sensitivity represented its ability to correctly identify malignant lesions and specificity represented its ability to correctly identify benign lesions. It was assessed through three key diagnostic tasks: keratinocyte carcinoma classification, melanoma classification, and melanoma classification when viewed using dermoscopy.
In all of the tasks, the algorithm matched the performance of the dermatologists it was tested against, with the area under the sensitivity-specificity curve amounting to at least 91 percent of the total area of the graph. (Stanford University, January 2017)
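A sensitivity-specificity curve is what statisticians call an ROC curve, and the 91 percent figure is the area under it. A small self-contained sketch of how those numbers fall out of a model’s scores, using toy data rather than anything from the study:

```python
def sensitivity_specificity(scores, labels, threshold):
    """Sensitivity: fraction of malignant (label 1) lesions correctly
    flagged; specificity: fraction of benign (label 0) lesions correctly
    cleared, at a given decision threshold on the model's score."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    """Area under the sensitivity-specificity (ROC) curve, computed in its
    rank-statistic form: the probability that a random malignant case
    scores higher than a random benign one (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: model scores for six lesions (1 = malignant, 0 = benign).
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]
sens, spec = sensitivity_specificity(scores, labels, threshold=0.5)  # both 2/3 here
auc_val = auc(scores, labels)  # 8/9 on this toy data
```

Sweeping the threshold trades sensitivity against specificity, which is why the area under the whole curve, rather than any single operating point, is the headline number.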
In the future, the Stanford researchers are seeking to launch the algorithm via smartphones; currently, it runs only on a computer. Having it available on a smartphone would give users a reliable skin cancer diagnosis from a simple swipe and scan.
The Stanford team believes it will be relatively easy to transition the algorithm to mobile, but further tweaking and testing are still needed.
Despite the challenges ahead, the researchers at Stanford University are hopeful that deep learning could one day play a very important role when it comes to visual diagnosis in a number of medical fields.