That's the reason rads never train to determine race from a chest x-ray.
BTW, models don't need to be trained for that either, because if it's important, it's recorded, along with a photo, in the patient's medical record.
I'd just like to gently suggest that determining someone's race from an x-ray instead of, say, their photograph, is maybe not how we should be burning training cycles if we want to push medical imaging forward. Human radiologists had that figured out ages ago.
You're snidely suggesting the Harvard/MIT researchers are idiots doing useless research because they don't realize radiologists can just look at the patient's face, but that's obviously not what happened. They were trying to see if AI could introduce racial bias. They're not releasing an AI to tell radiologists the race of their patients.
According to the article, human experts could not tell race from chest x-rays, while the AI could do so reliably. Further, it could still determine race when given an x-ray image passed through a low-pass filter or a high-pass filter, showing that it's not relying solely on fine detail or solely on large features to do so.
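For anyone unclear on what "passed through a low-pass or high-pass filter" means here, a minimal sketch in Python, assuming a grayscale x-ray loaded as a 2D array (Gaussian blur as the low-pass stage is my assumption; the paper may use a different filter):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def low_pass(img, sigma=5.0):
    # Blur: keeps coarse anatomy, removes fine texture.
    return gaussian_filter(img.astype(np.float64), sigma=sigma)

def high_pass(img, sigma=5.0):
    # Original minus blur: keeps edges/texture, removes coarse anatomy.
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma=sigma)

# Toy stand-in for a grayscale chest x-ray (real work would load DICOM/PNG).
xray = np.random.rand(512, 512)
lp, hp = low_pass(xray), high_pass(xray)
```

The study's point is that a model trained on either version still predicts race, so the signal isn't confined to one spatial scale.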
Firstly, that doesn't tell us whether there is bias in data in general. It tells us whether or not there is bias in their data.
Secondly, it tells us it can train to spot things that human rads do not train to spot. It tells us nothing at all about whether or not an AI can train to spot things a human rad also trains to spot, but can't.
Human rads don't train to spot race. Why? Because they don't need to do so. Human rads do train to spot pathologies at as early a stage as possible. I've never seen an AI spot a pathology at an earlier stage than the best human rads can. But I have seen several AIs fail to spot pathologies that even human rads at the median could spot.
That's the state of play today, and it's likely to remain that way for a long, long time. Human rads will be needed to review this work not because they are human, but because, right now, human rads are just better. At the top end, they aren't just better; they're manifestly superior.
They weren't studying bias in data; they were studying bias in AI. The data used was 850,000 chest x-rays from 6 publicly available datasets. They weren't studying whether this dataset differs from the general public or has some kind of racial bias; that's irrelevant to the study.
> it tells us it can train to spot things that human rads do not train to spot
You're kidding yourself if you think you could determine someone's race with 97%+ accuracy from a chest x-ray if only you trained at it. The study authors (who are themselves a mix of radiologists and computer scientists) claim that radiologists widely believe it to be nearly impossible to determine race from a chest x-ray. No one is ever going to try to train radiologists to distinguish race from chest x-rays, so you'll always be able to hold out hope that maybe humans could do it with enough training. But your hope is based on nothing; you don't have a shred of evidence that radiologists could ever do this.
> I've never seen an AI spot one at an earlier stage than the best human rads can.
According to the article, AIs aren't trained to do this, because we don't have the datasets to train for it. You need a dataset where the disease is correctly labeled despite the best radiologists not being able to see it in the x-ray. Trained on a good enough dataset, they'd be able to see things we miss.
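To make that concrete, here's one hedged sketch of how such labels could be built: pair each early scan with a diagnosis confirmed later (e.g., by biopsy or follow-up imaging), rather than with what a radiologist could see at scan time. All class and function names below are hypothetical, not from the article:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Scan:
    patient_id: str
    scan_date: date
    image_path: str

@dataclass
class ConfirmedCase:
    patient_id: str
    confirmed_date: date  # e.g., biopsy- or follow-up-confirmed diagnosis

def retrospective_labels(scans, cases, horizon=timedelta(days=365)):
    """Label a scan positive if the disease was confirmed within `horizon`
    after it was taken -- regardless of what a radiologist saw at the time."""
    by_patient = {}
    for c in cases:
        by_patient.setdefault(c.patient_id, []).append(c.confirmed_date)
    return [
        (s.image_path,
         int(any(s.scan_date <= d <= s.scan_date + horizon
                 for d in by_patient.get(s.patient_id, []))))
        for s in scans
    ]
```

A model trained on labels like these is being supervised by the eventual ground truth, not by what was visible to a human reader on the day of the scan.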
Rads can see a person's race. We look at them.