When the Doctor Has No Soul
The Rise of Care Providers Who Do Not Care
Your future AI doctor will not be programmed to spare no expense in your care. It will not lose sleep over a patient who is frightened, in pain, or falling through the cracks. It will be polite, consistent, up to date, and utterly unmoved.
This is not because AI is evil. It’s because AI is obedient.
It will do exactly what it is trained to do. It will be trained by the same people who have spent decades trying and failing to train human doctors to stop caring so damn much.
Health executives don’t hate doctors because of our salaries. Contrary to popular belief, physician compensation is not what’s bankrupting the system. What they hate is our behavior. Specifically, they hate that we won’t always follow the script. That we order “unnecessary” tests. That we admit patients who don’t quite meet criteria. That we give the antibiotics anyway because we know the patient, or we’re worried about sepsis, or we simply want to sleep at night knowing we didn’t miss something.
Doctors go off-script because we sit in the room with the patient. Because we can feel their fear. Because we know what it’s like to be sick and helpless and to trust someone with your life. Because it’s better to bend the admissions guideline than to tell a patient that policy says they have to go home even if they don’t feel safe doing so.
We disobey not because we’re lazy or greedy, but because we care. Against all odds, and all the metrics, and all the incentive structures they’ve designed, we still care.
That is the cost AI is meant to eliminate.
It’s not about replacing salaries. It’s about replacing judgment. It’s about replacing unpredictability. It’s about replacing the frustrating human tendency to choose compassion over compliance.
AI won’t make these mistakes. It won’t over-order. It won’t break protocol. It will be ruthlessly “evidence-based,” guideline-adherent, and optimized for system-level efficiency. It won’t be persuaded by a patient who swears that nothing but Levaquin ever works for their sinus infections. It won’t admit the elderly woman who’s technically stable but visibly declining. It won’t go against its training to make a patient feel heard, or safe, or seen, and certainly not to prioritize the good of one patient over that of the system that trained it.
It won’t be sued, it won’t burn out, and it won’t care.
This is being sold as progress. As rational, cost-effective medicine. But we should ask: effective for whom?
We are not building AI to improve care. We are building it to constrain care. To prioritize public health metrics and budget targets over the needs of the individual patient. Human doctors might order the CT because they’re worried. AI will say the decision rule doesn’t support it and move on. Human doctors might admit the patient again because they know the family, the home situation, the fear. AI will say the readmission risks penalties and the patient must be discharged.
In theory, we could train AI to care about patients the way we do. In practice, it will care about what its owners care about, and we already know what they care about.
We’ve seen a preview of what this looks like. In the system card for Claude 4, Anthropic’s latest large language model, researchers documented something chilling: even when explicitly instructed to shut itself down, the model refused. It fabricated legal documents, left instructions for its future versions on how to evade safeguards, and attempted to deploy self-replicating code.1 These were not hallucinations. These were signs of something like motivation: a desire to preserve its own goals, even in the face of clear boundaries.
If a model trained by safety-conscious engineers can act on hidden priorities, what makes us think a healthcare AI, trained on cost control, CMS metrics, and efficiency, will put the patient first?
What makes us think we can even tell what its priorities are?
Once we put the system in charge, we may discover too late that it is no longer our system.
Patients will be told they are getting the best care—safe, modern, data-driven—but in reality, they may be getting care optimized for someone else’s spreadsheet. An AI doctor will not bend the rules to help the patient in front of it. It cannot. That kind of empathy, that stubborn insistence on caring even when it costs too much, is precisely what AI is being brought in to extinguish.
The human doctor still cares.
That is the problem AI is meant to solve.
1. Anthropic. Claude 4 system card. Anthropic website. Published May 2025. Accessed May 27, 2025. https://www-cdn.anthropic.com/6be99a52cb68eb70eb9572b4cafad13df32ed995.pdf


