The use of machine learning and AI is growing rapidly in the healthcare sector, tackling tasks that many thought would not be possible for years to come.
Google researchers are doing just that by teaching an AI algorithm to detect common forms of blindness, specifically diabetic retinopathy (which affects almost a third of diabetes patients), as effectively as a trained ophthalmologist.
What is diabetic retinopathy?
Diabetic retinopathy is a diabetes-related complication that affects the eyes, caused by damage to the blood vessels of the light-sensitive tissue at the back of the eye (the retina).
At first, diabetic retinopathy may cause no symptoms, or only mild vision problems. However, if left undiagnosed and untreated, it can lead to blindness, which is why early screenings are essential in preventing disease progression.
The condition can develop in anyone who has type 1 or type 2 diabetes, but the longer you have diabetes and the less controlled your blood sugar is, the more likely you are to develop this eye complication. However, it usually takes several years for diabetic retinopathy to reach a stage where it could become a significant threat to your sight.
You might not have symptoms in the early stages of diabetic retinopathy. However, as the condition progresses, some of the symptoms may include:
- Spots or dark strings floating in your vision (floaters)
- Blurred vision
- Fluctuating vision
- Impaired colour vision
- Dark or empty areas in your vision
- Vision loss
AI and ophthalmology
The use of AI and machine learning will, in the future, enable faster and more accurate diagnosis than has ever been the case before. As such, it will become a vital tool in areas where such expertise is lacking or is not easily accessible.
Machine learning has already made its way into healthcare, with dermatologists being able to diagnose skin cancer using AI. Moreover, a team at Google's DeepMind, which is entirely dedicated to AI, has been training machines to identify symptoms of macular degeneration and other eye diseases by processing optical coherence tomography scans.
This study has been carried out in partnership with top researchers at London’s world-class Moorfields Eye Hospital, where over a million eye scan images will be studied to establish what occurs during the early stages of eye diseases.
Outside the UK, retinal image research has also made its way to India, where a lack of human ophthalmologists means that many diabetics are not able to access their recommended annual screening for diabetic retinopathy. To help combat this problem, Google has been working with the Aravind Eye Care System, a network of eye hospitals that specialise in helping to reduce the occurrence of blindness caused by cataracts in the country.
Google’s researchers were able to train an eye-scanning algorithm to recognise common eye diseases at the same level as an expert ophthalmologist. The algorithm uses the same machine learning method that Google uses to label the millions of images in search results.
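To make the idea concrete, the method described above is image classification with a convolutional neural network: filters slide over the scan to extract visual features, which are then turned into a probability-like score. Below is a minimal, purely illustrative sketch of that pipeline in NumPy (one convolution, a ReLU, global average pooling, and a sigmoid); the filter weights and image here are hypothetical toy values, not Google's actual trained model.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def classify(image, kernel, weight, bias):
    """One conv layer -> ReLU -> global average pool -> sigmoid score."""
    feature_map = np.maximum(conv2d(image, kernel), 0.0)      # ReLU activation
    pooled = feature_map.mean()                               # global average pooling
    return 1.0 / (1.0 + np.exp(-(weight * pooled + bias)))    # sigmoid score

# Toy grayscale "fundus image" and hand-picked parameters (all hypothetical).
rng = np.random.default_rng(0)
image = rng.random((16, 16))
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)  # simple vertical-edge detector
p = classify(image, kernel, weight=2.0, bias=-0.5)
print(p)  # a score between 0 and 1
```

In the real system, thousands of such filters are learned automatically from graded retinal photographs rather than hand-picked, and the final score is read as the likelihood of disease.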
Automated detection methods can make diagnosis much more efficient and reliable, and this is especially true in developing countries where the required human skills are few and far between. Google’s DeepMind algorithm has proven to be highly accurate and is already achieving better diagnosis for diabetic retinopathy than many ophthalmologists themselves. We may also see similar technology being used to diagnose glaucoma, according to Dr Robert Chang, professor of ophthalmology at Stanford University.
Such breakthroughs in the medical field therefore demonstrate the potential for AI to transform the healthcare sector in the not-so-distant future, with experts claiming that deep learning could soon be applied in other areas of medicine which rely mostly on image analysis for diagnosing problems, such as in cardiology and radiology.
However, many argue that the greatest hurdle will be making a convincing case for the reliability of such systems, particularly in healthcare, where correct diagnosis is vital; to be effective, these systems will need to be able to explain how a diagnosis was reached.
There is, therefore, much more work needed before these types of algorithms are ready for clinical use, but the goal will be to reduce screening costs and increase access to treatment, particularly in regions where such resources are not widely accessible. Fear not, though: your doctor is not going to be replaced by a robot; AI is simply being used to help narrow down the patients who really need treatment.
This article was written by Brawn Medical, one of the UK’s leading suppliers of ophthalmic equipment.