Maybe I'm totally naive, but if this feature is bad, why do all of these models even include race in the training set? Wouldn't removing it also remove the potential bias? How is self-identified race relevant for disease identification? Also, I'd guess humans perform poorly at this task because nobody studies x-rays for the purpose of identifying race.
The point is that if a neural network can predict race with such accuracy when it is labelled, that information must be unavoidably encoded in the image itself. The consequence is that even when race is not labelled, the model might still end up racially biased by automatically (and unknowingly) learning it.
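To make "automatically learning it" concrete: when an attribute is encoded in the input and correlates with the label in the training data, a model trained only on disease labels will pick up weight on that attribute, with no race label ever supplied. A minimal sketch on synthetic data, where the hypothetical `group` feature stands in for race information encoded in the image (all names and numbers here are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Toy training set with sampling bias: group membership (a stand-in for
# race encoded in the image) correlates with the disease label.
group = rng.integers(0, 2, n)                        # 0 or 1
disease = rng.random(n) < np.where(group == 1, 0.7, 0.3)

# Weak genuine disease signal, buried in noise.
signal = disease + rng.normal(0.0, 1.0, n)
X = np.column_stack([signal, group]).astype(float)
y = disease.astype(float)

# Plain logistic regression, trained ONLY on disease labels.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * (p - y).mean()

print(w)  # the weight on the group feature ends up clearly nonzero
```

No one told the model about groups; minimising the disease-classification error alone is enough for it to lean on the group feature, because doing so reduces the training loss. That is exactly the worry when the "group" is race encoded in an x-ray.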
If one trains the model only to correctly identify a disease (i.e. maximising correct identifications and minimising the error), then there is no room for the model to be "racist". I don't question that there is a signal inside the images, but someone would have to change the model and training to use that information maliciously.