
Is AI replacing human radiologists?


A report from the Osler Lecture: Artificial Intelligence in Imaging, given by Professor Fergus Gleeson on the use of AI in medical imaging. September 20, 2024, Mathematical Institute, Oxford University

Q: Is AI replacing human radiologists?

A: No. We are not even close.

But why ever not? When I saw the title of this lecture – Artificial Intelligence in Imaging – I guessed the subject matter would be that the interpretation of imaging was increasingly being performed, more accurately and efficiently, by AI. After all, I thought, it must be fairly straightforward. I am a little nervous about AI taking over from clinicians, but imaging? Well, that would be a great use of the technology. Images are just pictures, after all; they can easily be scanned and any abnormalities easily identified. A slightly more sophisticated version of Where’s Wally.

It seems I was not alone in thinking this. Geoff Hinton, the “Godfather of AI”, said in 2016 that radiologists would soon be superfluous to requirements. It was “completely obvious”, he said, that within five years deep learning would be surpassing radiologists. The situation, he said, resembled the cartoon in which the coyote has run off the edge of the cliff but hasn’t quite stopped running. It was over for radiologists. AI would be taking all their jobs soon. His advice: stop training them now! (NB – that clip is from 2016 and can now be watched with a strangely satisfying sense of dramatic irony, in more ways than one. Now, in 2024, Geoff Hinton has resigned from Google and is dedicating his time to warning about the dangers of AI and speaking of his regrets about the part he played in its development.)

Professor Fergus Gleeson – a self-confessed workaholic with kindly eyes and an open and engaging demeanour – revealing that AI is a bit more tricky than we’d like it to be

Professor Fergus Gleeson, a highly esteemed radiologist specializing in the thorax, also anticipated the rise of the computerized radiologist. He knows there is a desperate need for something to speed up the reporting of scans. When he arrived in Oxford in 1992, there was one MRI scanner and ten consultants. This year, 2024, there are nine scanners and 75 consultants. As the population ages, the number of scans keeps rising. Up to £390 million is spent by the NHS in England on screening for lung cancer every year. (It’s not just us: in the US, a quarter of the 350-million-strong population had a CT scan last year.) And as a result, there are delays. Scans may take up to three months to be reported. A quarter of a million go unreported each month. In fact, the NHS spends over £100 million outsourcing scans to get them reported. Politicians are pinning their hopes on AI to clear the growing backlog, and money is being thrown in its direction to try to find out where to use it in the NHS. £3.5 billion is invested in AI imaging, which is the lion’s share (76%) of the budget for AI in healthcare in the UK.

Yes, this image is AI-generated

Professor Gleeson was not particularly distressed by the prospective demise of his profession. Instead, he set about trying to help develop the AI which would improve the diagnosis of lung cancer, in particular. He started out full of optimistic enthusiasm. In the current Wild West that is the AI business environment, he licensed an algorithm, he got funding, all systems were go! Happy days! He got down to the serious work of researching beachfront properties in the Bahamas. Then, he began to hit snags.

At this point, he quoted Mencken: “for every complex problem there is an answer that is clear, simple and wrong”. So, it was not working out quite as well as he had hoped. What were the difficulties? Well, some are too complex and technical to describe in detail. But there are many. So many. They just kept coming, and getting more and more complicated the more he explained them. But here is a selection:

  • AI is task-specific, not multimodal. Currently it can (mostly) only be programmed to spot one thing. Patients often have more than one thing.
  • Each time you use the licensed algorithm it costs approximately £4.70. Each time you add another algorithm, to screen for a second thing, there is an additional cost (a rough sketch of how this scales follows this list).
  • AI can make mistakes, such as missing a tumour behind the heart on a chest X-ray.
  • Lung cancers don’t all look the same.
  • New scanners keep being introduced, producing new kinds of images that the algorithm isn’t programmed to recognize.
  • There is no universal normal scan: e.g. a normal 8-year-old spine is very different from a normal 25-year-old spine, which is very different from a normal 90-year-old spine.
  • Some people have had previous surgery, fractures or congenital anomalies which confuse the results.
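To get a feel for why the per-use fee matters, here is a very rough back-of-envelope sketch in Python. Only the £4.70 figure comes from the talk; the scan volume and the number of algorithms are my own illustrative assumptions.

    # Back-of-envelope sketch: per-use licensing cost scales with both
    # scan volume and the number of algorithms applied to each scan.
    # Only the £4.70 fee comes from the talk; everything else is assumed.
    COST_PER_USE_GBP = 4.70         # quoted per-use licence fee
    scans_per_year = 1_000_000      # assumed annual scan volume, for illustration

    for n_algorithms in (1, 2, 3):
        annual_cost = scans_per_year * n_algorithms * COST_PER_USE_GBP
        print(f"{n_algorithms} algorithm(s): roughly £{annual_cost:,.0f} per year")

Even at these made-up volumes, screening every scan for a second or third finding does not nudge the licensing bill up; it multiplies it.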

And there were more…

  • In one example he showed, cancer would have been diagnosed when there was actually an enlarged pulmonary artery due to an embolism. This means that to make the correct diagnosis they would need two algorithms, and if they did that they would pick up lots of incidental pulmonary emboli – possibly twice as many as currently. For example, they would detect the very tiny ones. Even so, if they found one, they might be obliged to treat it with anticoagulants, even if it was clinically silent and of no importance to the patient. Anticoagulant therapy carries risks too (we were reminded of Donald Dewar’s tragic death). Does using the AI then have the unintended consequence of leading to inappropriate treatment? And if so, how would they spot it?
  • Doctors start to place too much confidence in the algorithm. When individual radiologists’ success rates are compared with the algorithm’s, yes – the algorithm helps the less competent – but when highly competent radiologists use the algorithm, they may appear to get worse. They start to think the algorithm knows better, and to trust it more than themselves, when actually the top radiologists outperform the AI. (NOTE – this is a key issue with AI in general: there might be a general raising of standards, meaning fewer incompetent or mediocre clinicians or drivers or whatever, but the genius level might disappear as well.)
  • Findings change – e.g. the appearance of Covid in the lungs changed with new variants, which would need new algorithms.
  • Algorithms can be trained to predict the results of treatments, but new treatments come along all the time and render the algorithms obsolete.
  • Algorithms can’t diagnose anything they haven’t been trained on or seen before, but humans can.
  • There is some general resistance to the use of AI: a lack of trust, a fear of new technology. In San Francisco, people have discovered that they can disable driverless robotaxis by placing a traffic cone on the bonnet. It’s a bit of a cliché these days, but there is still “a conversation to be had” about the use of AI.

He continued…

  • There is a complex problem around setting thresholds, e.g. what size of lesion do you want to programme the algorithm to pick up? (A rough sketch of this trade-off follows this list.)
  • Managing data is a big problem – the algorithm might interpret the image in a nanosecond, but do all the scans have to be sent to the cloud to be interpreted? Who manages the security and volume of the data? Who pays for the cloud storage? And so on.
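To illustrate the threshold point, here is a purely toy simulation (nothing from the lecture, and all the numbers are invented) of why the choice of minimum lesion size is a trade-off: a lower threshold catches more small true lesions but also flags far more false alarms.

    # Toy illustration of the threshold trade-off. All numbers are made up.
    import random

    random.seed(0)

    # Simulated candidate findings across 1,000 scans:
    # a handful of true lesions plus many small benign findings (sizes in mm).
    true_lesions = [abs(random.gauss(12, 6)) for _ in range(50)]
    benign_findings = [abs(random.gauss(3, 3)) for _ in range(5000)]

    for threshold_mm in (4, 6, 8, 10):
        caught = sum(1 for size in true_lesions if size >= threshold_mm)
        false_alarms = sum(1 for size in benign_findings if size >= threshold_mm)
        print(f"threshold {threshold_mm:>2} mm: "
              f"catches {caught}/{len(true_lesions)} true lesions, "
              f"{false_alarms} false alarms per 1,000 scans")

In this toy model, a higher threshold starts to miss genuine lesions, while a lower one multiplies the benign findings that someone then has to review; that is the trade-off hiding inside the innocent-sounding question of what size lesion to pick up.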

So, Professor Gleeson is still not living in the Bahamas and AI is still a work in progress. A bit more complicated than we’d like. Here’s hoping they iron out the snags soon.

____________________________________________

Many thanks to Professor Gleeson for this extremely enlightening and fascinating talk, and also for kindly checking my text and making any necessary corrections.

If you find this interesting and would like to hear an entire talk from him, with rather more detail, click here for a recording of a talk given to the Oxford Personalised Medicine Society in November 2021.

Click here for a scary but interesting talk on the dangers of AI by Geoff Hinton. In this interview, at around the 15-minute mark, he now admits that the use of AI in scans is slower than he expected, but he still thinks it’s on its way.

I discovered a great feature in this software – the ability to generate AI images – but I didn’t realise that after three of them I would have to pay! At least I have fittingly blown them all on this post about AI.

Thanks for reading, and I hope you feel better informed about this very cutting-edge topic.
