
Revisiting Radiology: AI’s impact in practice

Updated: Mar 11



Remember five or six years ago when computer vision was the ‘hot’ focus area for AI investment? Radiology was seen as one of the fields ripe for transformation. Numerous articles speculated that AI would replace radiologists, or at least that radiologists using AI would replace radiologists who were not using it (to quote that horrible trope that seems to get applied everywhere).


Looking back at what has actually happened in the field of radiology provides some clues as to what we might see happen in other areas. While lessons are never fully transferable because they are domain specific, there are still interesting insights to be gleaned. This article in European Radiology, The emperor has few clothes: A realistic appraisal of current AI in radiology, sparked a few ideas that have (magnetic) resonance.


Complexity Overhead

Complexity overhead is “a phenomenon where technical advancements inadvertently add complexity, increasing costs, safety concerns, and ecological impact.” (Huisman et al, 2024). This was a new term for me, but it aligns with an idea I have been thinking about, which I call the law of diminishing technological returns. My thinking is premised on the theory of diminishing returns, where we see an upside or gain up to a certain point, after which the gain diminishes or even turns into a liability. Think of the classic example of ice-cream cones - eating one or two might be great, but eating eight is a recipe for getting sick. I speculate that technology has a similar effect. This is an insightful but long post that goes into greater detail on the idea. Complexity is one way a diminishing return can happen, because the overhead costs of operating within a more complex socio-technical system are rarely accounted for in the business case to adopt a new technology.


Actual impacts and unintended consequences

AI was supposed to reduce work in radiology - maybe even eliminate the need for radiologists. Instead, it has contributed to complexity overhead in radiology by increasing workloads. One way this has happened is that diagnostic AI tools have flagged more false positive cases during screening, thereby increasing the need for human oversight from actual radiologists.


As the paper states “by adding narrow AI tools on top of current workflows, no upfront costs are saved since radiologists still must conduct full reads and maintain oversight.” (Huisman et al, 2024)

In addition, the tools themselves are not always better at detecting abnormalities. They tend to benefit generalists more than specialists. One study found that “radiologists at all experience levels performed worse in terms of speed and accuracy when misled by incorrect AI mammogram results” and that for specialists there are often no gains to be had by using AI for diagnostics. This finding echoes studies of generative AI showing similar results, in which more highly skilled workers have seen negative impacts rather than gains.


Tools we need vs tools we build

Radiologists are seeing some benefits from AI that is not focused on diagnostics. What is actually beneficial are AI systems that help complete administrative tasks such as patient summaries, automated protocolling and pre-clinical tasks. What is less helpful is the actual diagnostics, and yet that has been the focus of AI vendor tools in radiology.


So, why focus on diagnostics? I would argue it’s because we could, and because market incentives steered development in that direction.

Computer vision was a driving force of AI investment from the mid 2010s to the early 2020s. It was AlexNet that spawned the focus on convolutional neural networks and deep learning. This drove VC interest in AI, which at that time was fixated largely on visually oriented tasks, including a big fixation on autonomous vehicles. This is classic hammer-in-search-of-a-nail techno-solutioning. We are now in another phase where generative AI, an extension of natural language processing, has captured our collective attention (because Attention is all you need, right? - 😀).


Automation bias is also an issue noted in the paper. This refers to our propensity to over-rely on or over-trust machines, even when there is evidence to suggest the machine might be wrong. It has a deep history in aviation, but it’s a phenomenon that shows up wherever there are human-machine interactions. It’s another area that I’ve been very interested in better understanding and have spent some time trying to unpack.


Business lessons from Radiology

Radiology is a useful domain to look at because we have some history to dissect in terms of comparing projected vs actual impacts to date. We see how AI research advancements combined with business imperatives to create solutions to problems that were technically solvable, but that missed the mark in driving actual business value. In fact, they contributed to increased costs. As the paper calls out:


“We should be aware that focusing on supportive imaging interpretation alone will only lengthen the implementation gap, since the business case for this particular group of applications is rather weak. Ultimately, the costs of these software packages must be borne either directly by the department or indirectly by the patient through insurers, who are hesitant to fund AI without strong evidence of downstream cost savings.” (Huisman et al, 2024)

Companies that are evaluating the use of AI (generative or otherwise) in their business should start with their business problem, NOT the technical solution. Don’t go looking for ways in which AI might improve your business. Instead, evaluate your business for ways in which it can improve, and then see if, where and how it makes sense to use AI tools. From there you can decide which particular tools might be the best fit. The tools you need might not be the tools the market has to offer. In doing this evaluative work, talk to frontline stakeholders. Find out how their actual work might intersect with AI, rather than relying on reports of productivity gains or just assuming there is always upside.


By Katrina Ingram, CEO, Ethically Aligned AI

 

Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com     

© 2024 Ethically Aligned AI Inc. All rights reserved.

