>No negativity towards AI here. It’s amazing and it’ll change the future. But we need to be careful on the way.
Yeah, I suspect a lot of fields will have a similar trajectory to how AI has impacted radiology.
It might catch the tumor in 99.9999% of cases, better than any human doctor. But missing a malignant tumor 0.0001% of the time is unacceptable, because it spikes the hospital's malpractice costs. So every single scan still has to be reviewed manually by a doctor first, then by the AI as a fallback.
In theory there's some insurance scheme that could overcome this, but in practice when you have software reviewing millions of scans a day you're opening yourself up to class action lawsuits in a way no competent human doctor would.
>It might catch the tumor in 99.9999% of cases, better than any human doctor. But missing a malignant tumor 0.0001% of the time is unacceptable, because it spikes the hospital's malpractice costs. So every single scan still has to be reviewed manually by a doctor first, then by the AI as a fallback.
I find it hard to believe human doctors miss malignant tumors in fewer than 1 out of every million cases, which is what a 0.0001% miss rate works out to.
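To put actual numbers on the figures being thrown around upthread (the scan volume is an invented round number, purely for illustration):

```python
# Back-of-envelope: what a "0.0001% miss rate" means at scale.
# The daily scan volume below is made up, not real hospital data.

miss_rate = 0.0001 / 100        # 0.0001% as a probability: 1e-6
scans_per_day = 1_000_000       # hypothetical aggregate scan volume

expected_misses_per_day = scans_per_day * miss_rate
print(expected_misses_per_day)  # 1.0 -> about one missed tumor per day

# Note: 0.0001% is 1 in a million, not 1 in 10 million.
print(round(1 / miss_rate))     # 1000000
```

So even a miss rate that sounds absurdly small produces a steady stream of missed tumors once software is reviewing scans in bulk, which is exactly the class-action exposure the parent describes.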
"So sorry the AI missed your malignant tumor! On average, it actually performs better than a human doctor. I mean, a human doctor definitely would have caught this one, and yeah, you're going to die, but hopefully the whole average thing makes you feel better!"
Does the opposite work too? What if a human doctor mis-diagnoses me but I can prove in court that an available medical grade AI would have given the correct diagnosis. Could I sue for that?
We acknowledge that both humans and "medical grade AI" are flawed, but they're flawed in very different ways and until we can understand how and why an AI model fails, it should be supplemental.
The standards for medical malpractice are super nuanced and variable but the general idea is the "man on the street" concept, or in this case "the average doctor" concept.
As the parent poster put it, it's only a problem if the average doc would have detected it. If it's truly a 1-in-a-million thing, an extreme edge or corner case, malpractice courts may not have a problem with you missing it. As they say, "if you hear hooves, do you think of horses or zebras?" 99% of the time a different diagnosis is the right one, and even at five nines you're letting someone through eventually.
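The "letting someone through eventually" point is just 1 - (1 - p)^n. A quick sketch with a five-nines per-scan miss probability:

```python
# Even at "five nines" per-scan accuracy, a miss becomes near-certain
# once enough scans go through. p = per-scan miss probability.

def p_at_least_one_miss(p: float, n: int) -> float:
    """Probability of at least one miss across n independent scans."""
    return 1 - (1 - p) ** n

p = 1e-5  # five nines: 99.999% per-scan accuracy
print(p_at_least_one_miss(p, 10_000))     # ~0.095: ~10% after 10k scans
print(p_at_least_one_miss(p, 1_000_000))  # ~0.99995: essentially certain
```

That's the asymmetry between judging one doctor's single miss and judging a system that reviews scans by the million.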
I always think of comparisons to aviation. There are a million and one things that can go wrong when flying a plane, but it's still one of the safest ways to travel. That's because regulations and safety standards are held to such a high bar that we simply don't consider injury or death an acceptable outcome.
Whenever someone says "as long as it's better than a human", that's where my mind goes. We shouldn't be satisfied with just being better than a human. We shouldn't be satisfied with five nines! I don't really care about what courts have a problem with — my point is just that our goal should be zero preventable deaths, not just moving from humans to AI once the latter can be better on average than the former.
If also having a human review the scan is feasible (and clearly it is, because that's what we were doing before), and it reduces the error rate even further, what's the argument against doing it?
I'm pretty sure koboll's point was that by having a doctor in the loop, the hospital can wash their hands of that one person's malpractice suit easily enough. Just fire the doctor, let their individual insurance deal with it, and move on. When the hospital cuts out the middleman, it takes on a new level of direct accountability it doesn't currently have.
I suspect AI went that way in radiology not because of the chances of false negatives, but because radiologists are entrenched in the system and will not yield an insanely lucrative revenue stream.
Medical scans are already reviewed abroad. This practice started in dentistry in the 90s/early 2000s but expanded to radiological scans as well. At this point most CT, MRI, and X-ray scans in the US have a first-pass analysis done by doctors in India and Pakistan.
Medical billing has also been offshored to India+Pakistan btw
In general, a lot of back office Dental+Medical functions were outsourced in the 2000s+2010s.
> It might catch the tumor in 99.9999% of cases, better than any human doctor. But missing a malignant tumor 0.0001% of the time is unacceptable
Those probabilities are way off given biology, but anyway ...
The interesting cases of AI in radiology would be being able to catch stuff that a human has no hope of catching.
For example, a woman with lobular (instead of ductal) breast cancer generally doesn't present until mid-to-late Stage 3 (which limits treatment options) because those cancers don't form lumps.
You can stare at mammograms and ultrasounds all day and won't see anything because the "lumps" are unresolvable. You're trying to find a sleet particle in a blizzard. Sure, it's totally obvious on an MRI scan, but you don't want to do those without reason (picking up totally benign growths, gadolinium bioaccumulation, infections from IVs, etc.)
An AI, however, could correlate subtle but broad changes that humans are really bad at catching. "Your last 5 mammograms looked like this, but there's just something a little off about this one; go get an MRI this time."
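A toy sketch of that "off versus your own history" idea. Everything here is hypothetical: a real system would compare learned image embeddings, while this just flags a new scan whose summary feature drifts from the patient's own baseline.

```python
# Illustrative only: flag a scan whose (made-up) summary feature drifts
# from the same patient's prior scans, using a simple z-score check.
from statistics import mean, stdev

def drifted(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    """True if the latest value deviates from the patient's own baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

prior_density_scores = [0.41, 0.43, 0.40, 0.42, 0.41]  # hypothetical feature
print(drifted(prior_density_scores, 0.42))  # False: within normal variation
print(drifted(prior_density_scores, 0.55))  # True: flag for follow-up
```

The point is that the signal is the patient-relative change over time, not anything a human could resolve in a single image.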
This seems an oversimplification of radiology. Things are not black and white; we are talking about years of training on specific subjects to be able to "see" an image. I believe AI will help, but it will need supervision, and at the same time the doctors are going to get trained on the difficult border cases. Also, de-anonymizing data for training is a big deal. This is not happening any time soon.
ChatGPT has already found an issue with my relative in the ICU that a literal team of doctors and nurses missed. This just happened last week. Unfortunately we checked ChatGPT retroactively after we went through the screw up.
I think people probably overestimate (maybe vastly) how good at differential diagnosis most doctors are.
It will absolutely, 100%, be in place as a fallback very soon, and be ubiquitous in that role. Just as AI is now for radiology. That's different from replacing the team of doctors and nurses, though.
Even with a 1-in-10,000 false negative rate, I bet someone is doing the cost calculation of that risk vs. how many hours it would take for a doctor to check 10,000 scans. Doctors themselves are not perfect, so they may even have a higher error rate.
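The calculation would look something like this. Every dollar figure and time estimate below is invented for illustration; only the 1-in-10,000 rate comes from the comment above:

```python
# Hypothetical cost comparison: expected liability from AI-only review
# vs. the cost of a doctor double-checking every scan. All figures are
# made up; only the miss rate matches the 1-in-10,000 figure above.

scans = 10_000
ai_miss_rate = 1 / 10_000          # false negative rate under discussion
cost_per_missed_tumor = 2_000_000  # hypothetical settlement + legal cost
doctor_minutes_per_scan = 2        # hypothetical review time
doctor_cost_per_hour = 300         # hypothetical radiologist rate

expected_liability = scans * ai_miss_rate * cost_per_missed_tumor
review_cost = scans * doctor_minutes_per_scan * doctor_cost_per_hour / 60

print(expected_liability)  # 2000000.0: expected cost of ~1 missed tumor
print(review_cost)         # 100000.0: human review of all 10k scans
```

With these invented numbers the human review pays for itself, but flip the assumptions (cheaper settlements, slower doctors) and the spreadsheet flips too, which is presumably the calculation someone is running.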
Give the doctor an AI tool which is fast and 99.999% accurate. Since they have automation now, give them a massive workload, so they can’t reasonably check everything. Now the machine does the work and the doctor is just the fall-guy if it messes up.
I have no idea how the legal responsibility works out if a doctor misses a malignant mole. But I’d be very willing to believe that the inability to be at all legally responsible for something will be a problem with AI uptake.
This does seem like an odd outcome though, right? I guess fundamentally humans will remain legally/economically advantageous in some sense, because the amount of insurance an individual doctor can be expected to hold is much less than a hospital's. Is the fate of humans to exist not as a unit of competence, but as a unit of ablative legal armor?
> catch the tumor in 99.9999% of cases, better than any human doctor
I don't think results are anywhere close to that in the field. If hospitals could do without radiologists, they would do so immediately. Currently, applied statistics is producing very little practical progress in the field, and the real cause is that tech people don't understand what we do and why we're still very much needed. The problem lies more in information retrieval capabilities than in acting on the data itself.
Image recognition and statistics is already being used as a first pass for pathologists in full force today. It’s weird to pretend like this is some new uncharted frontier for medicine and/or that insurance doesn't know how to handle it…