Artificial intelligence (AI) is a rapidly advancing area, particularly in the field of medical technology. The current uses of AI in medicine are vast, ranging from automated online consultations that aid GPs to robotic arms used during surgical procedures, and it is likely to become even more commonplace in our working lives.

What is AI?

The term ‘artificial intelligence’ is thrown around a lot and can make things sound a little sci-fi. So what does AI actually mean? AI refers to the creation of algorithms that can be followed by a computer system, allowing that computer to draw its own conclusions to a question without further human input. Using algorithms saves time because every possible answer to the question does not have to be programmed into the computer in advance. In simple terms, AI allows a computer to follow a set of instructions to come to an answer, and then learn from the outcome1.

When we think of AI, it is easy to believe that this is still a new technology in its early days of development. However, one of the earliest types of AI trialled was CAD, or ‘computer-assisted diagnosis’, which was used to aid the detection of breast cancers on mammography in the 1990s1. It is also easy to assume that AI has to be complicated and not something that we would use daily in the hospital. A simple example of AI already being used in medicine is speech-recognition software that turns dictated speech into the written word, employed widely by doctors to write letters. This type of AI is known as natural language processing1. There are many different types of AI, the most basic of which is called rule-based programming: a human defines a statement and assigns it a rule or code to formulate an output. An example is a tool into which the width of an abdominal aortic aneurysm is entered, giving an output of what management is advised for the patient2.
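To make the idea concrete, a rule-based tool like the aneurysm example above can be sketched in a few lines of Python. Note that the width thresholds below are illustrative assumptions chosen for this sketch, not clinical guidance.

```python
def aaa_advice(width_cm: float) -> str:
    """Rule-based sketch: map abdominal aortic aneurysm width to advice.

    The thresholds are illustrative assumptions only, not clinical guidance.
    """
    if width_cm < 3.0:
        return "No aneurysm: no surveillance needed"
    elif width_cm < 4.5:
        return "Small aneurysm: yearly surveillance scan"
    elif width_cm < 5.5:
        return "Medium aneurysm: three-monthly surveillance scan"
    else:
        return "Large aneurysm: refer to vascular surgery"

print(aaa_advice(4.1))  # a 4.1 cm aneurysm falls in the 3.0-4.5 band
```

Every possible output here was written by a human in advance; the program applies the rules but never learns from them, which is what separates this most basic form of AI from machine learning.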

When reading the literature surrounding AI, two common themes frequently appear: ‘machine learning’ and ‘deep learning’. Machine learning is a slightly simpler style of AI in which algorithms are used to formulate an output3. A computer is given a set of instructions, rather like a flow chart to follow: data are entered, and the machine creates an output from the given information. These algorithms are written by people with experience in the relevant area, so that the systems start with a good basis of knowledge4.
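As a minimal sketch of the "data in, learned output out" idea, the snippet below implements a one-nearest-neighbour classifier in plain Python. The training examples (nodule size and an edge-irregularity score) are invented purely for illustration and carry no clinical meaning.

```python
# Minimal machine-learning sketch: a one-nearest-neighbour classifier.
# The training data are invented for illustration only.

def nearest_neighbour(train, new_point):
    """Label a new point with the label of its closest training example."""
    best_label, best_dist = None, float("inf")
    for features, label in train:
        # squared Euclidean distance between the two feature vectors
        dist = sum((a - b) ** 2 for a, b in zip(features, new_point))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy examples: (nodule size in mm, edge irregularity 0-1) -> label
training_data = [
    ((4.0, 0.1), "likely benign"),
    ((5.0, 0.2), "likely benign"),
    ((18.0, 0.8), "suspicious"),
    ((22.0, 0.9), "suspicious"),
]

print(nearest_neighbour(training_data, (20.0, 0.7)))  # prints "suspicious"
```

The "knowledge" here lives entirely in the labelled examples supplied by a person with domain experience; the algorithm itself is a fixed, flow-chart-like procedure, which is what the paragraph above describes.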

Deep learning is a more complex area of AI. It takes many different algorithms and puts them together to create a network. As the programme collects information, it can add further layers to the algorithms, creating a more ‘knowledgeable’ tool. Deep learning aims to act more like the human brain, and as it ‘learns’ it becomes more accurate and experienced3. Because these systems are able to ‘learn’, they do not need a fixed set of instructions to follow, giving them potential for a wider variety of uses and the ability to problem-solve4. This saves time, as deep learning does not need humans to continually add data or new algorithms to its databases; it is able to build evidence and data itself4.

How can AI be used in radiology?

As the range of possible tasks that AI can carry out is so vast, there are many areas of radiology where this technology could be, or already is being, applied.

Machine learning can be used to create statistical processes that analyse an image and suggest possible genetic markers associated with its findings. This type of AI can be useful for simple, straightforward tasks, saving radiologists time so that they can prioritise other jobs3.

Deep learning can be used to match information from imaging with other sources, such as blood results, biopsy results and demographic information. Combining these sources to answer medical questions can aid radiologists when they are unable to come to a definitive diagnosis, almost like having a colleague to discuss a case with3. Deep learning can also break down the layers of a radiological image of a mass and assign its appearance a number or code, giving it a quantitative label rather than purely an image to be reviewed visually4. In essence, it can take an image, review specific characteristics of that image and decide which are most valuable or clinically significant4.

Examples of AI in radiology

As AI's capabilities are so diverse, there are many possibilities for how they could be used. Some examples of how AI is already being trialled and used in practice are given below.

CT and MRI

AI can be used to analyse nodules on CT or MRI scans, particularly in the lung and liver, and characterise from their appearance whether they are likely to be benign or malignant. This means that patients with more suspicious-looking nodules can be placed higher up the list for investigation, so that they receive treatment sooner, potentially reducing the risk of malignant spread. These nodules may not even be the primary finding the scan was requested to investigate, so flagging them helps prioritise what should be looked at first4. As mentioned previously regarding breast cancer, CAD can be used to label areas of a CT that appear abnormal, making these areas more evident for a radiologist to detect5. The texture of masses on CT can also be analysed by AI programmes that have learnt which textures are worrying and which are not. This has already been used in gliomas, combining phenotypic information with pathology results; this area of AI is known as radiomics6.

An example of the success of using AI with CT was a study by Coroller et al. (2015), which investigated the use of AI to determine the likelihood that patients' lung cancer had metastasised. The team identified 635 features on CT scans that could help answer this question, and found that using AI to predict metastases on CT significantly improved the accuracy of the prediction, not just saving time but potentially saving lives too7.


Mammography

As mentioned, the early use of AI in medicine was to aid the diagnosis of breast cancer via mammography, using a system called computer-assisted diagnosis (CAD), in the 1990s. It initially showed promising results for a technology-enhanced future in radiology; however, departments that used this software were found to have higher levels of incorrect results than those that did not8.

AI has come on leaps and bounds since the 1990s and now has many success stories in breast cancer detection. A system similar to CAD was used to review over 30,000 mammograms and produce a deep learning programme. Data were entered into this programme to tell the system which of these women went on to develop breast cancer in the five years after the initial mammogram. The resulting deep learning machine was then used to predict which women having mammograms would develop cancer in the next five years, and was shown to be more accurate after learning from the information taken from the initial 30,000 women. This is just one example of how a machine can learn and become more useful as further layers of algorithms, based on real patients, are added to its repertoire8.


Radiotherapy planning

A common use of AI in medicine is planning radiotherapy locations and regimes for cancer treatment4.


Chest x-rays

An extensive study used complex algorithms trained on over 85,000 chest x-rays to predict individuals' risk of death over the following 12 years, placing each patient in a high-risk or low-risk group based on their chest x-ray alone. Of those placed in the high-risk group, over 50% had died by the end of the 12-year study, compared with a very small number in the low-risk group8. This study showed the diversity of AI: it can be used very broadly to determine factors related to general health, as well as to analyse small, specific pathologies. It can combine data from many sources in a shorter timeframe than any one human, saving time and potentially better identifying factors that may shorten an individual's life.

Retinal photography

A company called IDx Technology has been employed in the USA to screen for retinopathy in patients with diabetes. The programme does not require a medical professional to analyse the results and has been shown to be accurate in over 80% of cases8. Tools like these can be great time savers, especially in outpatient settings such as yearly screening, meaning that medical professionals can devote more of their time to urgent queries.

What do radiologists think of AI?

The idea of a machine making diagnoses and coming to conclusions can feel like something that should not be relied on, as it seems it could never be as accurate as a human doctor. However, we forget that no two doctors will have had the same experiences in their education and medical training throughout their careers. This means that even between radiologists, results may be interpreted differently, which is not so dissimilar to the variation seen when AI is used4.

One journal article calculated that, for a single radiologist in 2019 to review the number of images requested by their hospital in one day, they would need to review a scan every 3.5 seconds, which is unquestionably neither achievable nor safe4. It is well known that a large part of doctors' day-to-day work is computer and paperwork based, especially for radiologists. Introducing AI for simple computer tasks would increase the amount of time that doctors can spend with patients9.

A big worry for radiologists, with the rise of AI, is that their jobs will become redundant and they will be ‘replaced’ by machines8. A radiologist at Stanford University was quoted as saying, “AI won’t replace radiologists, but radiologists who use AI will replace radiologists who don’t”8. Technology such as AI retinopathy screening suggests that radiologists are unlikely to be out of a job, but will instead have more time freed up for more urgent work.

One of the most challenging conflicts with AI in medicine concerns liability. When computer-generated information is relied on for the investigation or treatment of a patient's condition and something goes wrong, or a diagnosis is missed, who is to blame? Is it the radiologist's fault for not spotting the missed diagnosis, or do the makers of the AI system hold responsibility? Some AI companies have already taken responsibility for results generated through their software, meaning that radiologists need not worry about legal battles in this scenario; however, responsibility for error needs to be made clear when hospitals invest in such systems8.

A survey of over 600 radiologists in Europe asked for their opinions on how AI will affect their work. Over 60% felt that it was likely to change the way radiologists interact with patients, and a third of those with an opinion thought it would be detrimental to that relationship. Half of the radiologists felt that patients would not have confidence in results generated purely by AI, without human input, so there is further work to do to build confidence in AI in medicine10.

Does AI have a future in radiology?

Although there is some resistance, from both patients and radiologists, to integrating AI into radiology, when we examine what AI actually is, we find it is already a large part of our hospital systems. We use it to write letters; we use robots in surgery (the da Vinci robot, for example); and, especially during recent times with Covid-19, GP surgeries use AI to work out who should be seen quickly, through online programmes such as eConsult. Some hospitals are already using AI, such as Germwatcher, to monitor levels of infection in inpatients9. Given how seamlessly AI is already integrated into our healthcare system, it will likely continue to be used to improve the services we provide as medical professionals, rather than to replace the role of doctors. Broken down simply, the diagnosis and management of medical conditions are made using a wide range of sources, from history to scans to blood tests, and adding AI into the mix will hopefully add a further knowledge base to our decision-making tools.


  1. Adams, M. and Murphy, A. Artificial intelligence [Internet]. 2020 [accessed 29/12/2020].
  2. Gaillard, F. and Adams, M. Rule-based expert systems [Internet]. 2020 [accessed 27/11/2020].
  3. Six, O. The ultimate guide to AI in radiology [Internet]. 2020 [accessed 27/12/2020].
  4. Hosny, A., Parmar, C., Quackenbush, J., Schwartz, L. and Aerts, H. Artificial intelligence in radiology. Nature Reviews Cancer. 2018; 18(8):500-10.
  5. Wang, D. and Deng, F. Computer-aided diagnosis [Internet]. 2020 [accessed 29/12/2020].
  6. Knipe, H. and Idris, M. Radiomics [Internet]. 2020 [accessed 27/11/2020].
  7. Coroller, T., Grossmann, P., Hou, Y., Rios-Velazquez, E., Leijenaar, R., Hermann, G., Lambin, P., Haibe-Kains, B., Mak, R. and Aerts, H. CT-based radiomic signature predicts distant metastasis in lung adenocarcinoma. Radiotherapy and Oncology. 2015; 114(3).
  8. Reardon, S. Rise of robot radiologists. Nature: Innovations in AI and Digital Health. 2019; 576(1):54-8.
  9. Anon. Artificial intelligence in medicine [Internet]. 2018 [accessed 28/12/20].
  10. European Society of Radiology (ESR). Impact of artificial intelligence on radiology: a EuroAIM survey among members of the European Society of Radiology. Insights into Imaging. 2019; 10(105).
