How AI has changed my clinical teaching as an attending physician
How do we train new physicians when medical knowledge and clinical reasoning are no longer scarce resources?
I start every attending block with medical trainees the same way now. After we finish introductions, I pull out my phone, open an AI application like OpenEvidence, and tell the team: I’m going to be looking things up today. I expect you to do the same.
We all recognize the reflex: that moment of hesitation, even embarrassment, when someone asks you a question and you don’t know the answer. In medicine, this reflex runs deep. We’re trained in a culture where knowledge is currency, where the physician who can recall the landmark trial or the rare diagnosis from memory commands a particular kind of authority. Pulling out your phone to look something up, especially in front of a patient, feels like a crack in that aura. It signals that maybe you don’t know as much as you’re supposed to. So instead we hedge and deflect, often answering a different question to showcase what we do know, a tactic that is easy to recognize, not least by patients.
I want the medical trainees to feel that it’s safe to not know. In fact, it’s expected. What’s not acceptable is pretending you know when you don’t, or letting an unanswered question quietly disappear.
One of my main jobs on rounds is to listen for those moments: the lingering question, the thing that doesn’t quite fit, the detail someone glossed over because they weren’t sure. I try to catch it, name it, and then answer it together in real time. Now that AI has drastically lowered the barrier to finding out, there is really no excuse for not looking up something we don’t know.
Teaching the cognitive work that AI won’t replace (yet)
Medicine is traditionally taught through pattern recognition. You learn an illness script — a constellation of symptoms, a differential diagnosis, a treatment algorithm — and your job is to match the patient in front of you to the right script. This produces a particular kind of associative thinking that is common among physicians, and those who are best at making these associations are lauded as “master diagnosticians” (think House, MD).
This is precisely the type of cognitive task that will be commoditized by AI.
Take a patient who presents with possible cellulitis, a common skin infection. Traditional teaching rounds may consist of discussions (and pop quizzes) on the keywords associated with cellulitis (e.g., erythema, tenderness, well-demarcated borders, leukocytosis), the indications for IV antibiotics, and the red-flag signs of more severe infections such as osteomyelitis or necrotizing fasciitis, each with its own next steps to memorize (MRI, surgical consult, etc.). A quick query to any LLM yields this checklist instantly, which is precisely why being able to recite it on rounds no longer constitutes expertise.
What I try to teach instead is to think like a curious scientist rather than a pattern-matcher.
Take the same clinical scenario: a patient comes in with a red, painful area on their leg. I ask the team: picture the anatomy. Skin, subcutaneous tissue, fascia, muscle, bone. Now — where do you think the pathology is? Which layer? What’s your hypothesis?
Then we stress-test it. What exam findings would increase or decrease the likelihood of that hypothesis? What’s your level of certainty — and I make them commit to a number. Are you 50% confident? 80%? 95%? The number matters, because the next question is: is that enough certainty to act, or do you need more information? If you’re only 60% sure the infection is superficial, maybe you need an ultrasound to rule out an abscess. If there’s even a 10% chance the process involves the deep fascia, that changes your urgency and your surgical consult calculus entirely. And the test you order has its own accuracy, so now you’re reasoning about uncertainty on top of uncertainty.
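To make that last point concrete, here is a minimal sketch of the “uncertainty on top of uncertainty” arithmetic: a Bayesian update of a pre-test probability using a test’s sensitivity and specificity. The numbers (a 40% pre-test chance of a deeper process such as an abscess, an ultrasound assumed to be 90% sensitive and 85% specific for it) are hypothetical illustrations, not clinical guidance.

```python
# Toy Bayesian update: how a test's own accuracy propagates into
# your post-test certainty. Illustrative numbers only.

def post_test_probability(pre_test, sensitivity, specificity, test_positive):
    """Update a hypothesis probability given a test result (Bayes' rule)."""
    if test_positive:
        true_pos = pre_test * sensitivity               # P(+ | disease) * P(disease)
        false_pos = (1 - pre_test) * (1 - specificity)  # P(+ | no disease) * P(no disease)
        return true_pos / (true_pos + false_pos)
    false_neg = pre_test * (1 - sensitivity)            # P(- | disease) * P(disease)
    true_neg = (1 - pre_test) * specificity             # P(- | no disease) * P(no disease)
    return false_neg / (false_neg + true_neg)

# Being 60% sure the infection is superficial means a 40% chance of
# something deeper, like an abscess. Assume the ultrasound is 90%
# sensitive and 85% specific for abscess:
print(post_test_probability(0.40, 0.90, 0.85, test_positive=True))   # ~0.80
print(post_test_probability(0.40, 0.90, 0.85, test_positive=False))  # ~0.07
```

A negative scan drops the abscess probability from 40% to roughly 7%; a positive one raises it to about 80%. Whether 7% is low enough to treat as a simple cellulitis, or still high enough to warrant more information, is exactly the threshold judgment this exercise is meant to surface.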
This is how scientists and engineers think. Observe, assess the fidelity of your data, construct a hypothesis about the underlying mechanism, quantify your uncertainty, decide whether to act or gather more information, and then build a framework for how you think the patient gets better. It’s important to be honest about that last part: your treatment plan is, at some level, your best guess. It’s an informed guess, grounded in evidence, but it’s still a prediction about a complex biological system under uncertainty.
The culture in medicine often treats uncertainty as something to hide or resolve as quickly as possible, not something to name and manage. When I ask a trainee to commit to a probability, I’m asking for something that feels unnatural to many. But I believe it builds a distinct quality that will remain within the purview of the skilled physician even when AI beats any master diagnostician at pattern matching: the ability to name uncertainty, hold it transparently, and still earn the trust of the patient whose life depends on our decisions.
The skill that won’t be automated: taking accountability
The physician’s job was never really to be the sole source of clinical knowledge, even though our training system was built as if it were. The real job is to own the outcome. Someone has to integrate the evidence, the patient’s context, the team’s capabilities, and the system’s constraints into a plan that results in high-quality care and a safe discharge. And when things don’t go as expected, someone has to take responsibility, escalate to the right resources, and course-correct.
AI will help us physicians construct our reasoning, challenge our hypotheses, surface evidence we miss, and flag when our plans seem inconsistent with the data. But we need to be the architects of the reasoning framework and to own it. That ownership is what empowers us to take accountability: to sit at the patient’s bedside and say, with honesty and confidence, I’m not sure what the best course of action is yet, but I am here to figure this out with you, because I care about what happens to you.
What I’m evaluating for in trainees
When I think about who is going to thrive in this new world, I’m not primarily looking at the medical trainee who knows the most. The knowledge playing field is leveling fast, and that’s not a bad thing in my opinion.
Instead, I look for the individual who chases down the loose thread and follows up on the thing that didn’t quite make sense. Who stays with the patient’s problem even after the initial plan is made, watching for the moment the hypothesis starts to break down. Who takes ownership of the hospital stay: not just making the diagnosis, but seeing through the whole arc of getting the patient safely home.
Curiosity, drive, and accountability have always been important in medicine, but they used to be masked by the knowledge hierarchy; the physician who could recall the most obscure fact earned a certain kind of credibility, regardless of whether they followed through on the plan. Now that AI is democratizing access to knowledge, there is nothing left to hide behind, and the traits that were always the real differentiators are finally plain to see.
This is what the future of medicine will be built on.
