Thank you for this post, Dr. Li! I am a fourth-year medical student going into Family Medicine.
My experience on rotations was that for many - often including myself - the incentive to achieve "Honors" or impress your preceptor with immediate multiple-choice answers, buzzwords, etc. created a need to "feign invulnerability" with polished, AI-style answers. A metric of how good you were at this game was being able to look things up, or predict what you should look up, and commit the right words to memory.
Your approach to probabilities and uncertainty is fascinating and totally turns that structure upside down. In an era when students compete for any possible edge over one another to match into highly competitive programs, how can medical schools reframe the goals of rotations? How do preceptors evaluate students appropriately, knowing that anyone can look up the "right" answer? How do we convince students that they will be evaluated on the strength of their logic, the evident drive to TRY to come up with probabilities based on their understanding of disease and tests, rather than their accuracy?
I fear that despite the great points you make here, it will be difficult for students to really believe that they should use their minds to estimate uncertainty and be okay with being wrong. Rather, there will be a new AI tool to give them the "right" answer about probabilities :')
What do you think? If you were a clerkship or program director, how would you articulate the evaluation methods, as you describe, to students? How would you convince them that they will be graded based on their effort, rather than their consistent accuracy?
Really appreciate this - especially naming uncertainty as a feature, not a gap.
What struck me is how much it mirrors what's happening in healthcare analytics right now. As AI gets cheaper at producing answers, the actual leverage shifts to how well you frame the question: what's your hypothesis, what would change your mind, how much certainty is enough to move.
Clinical training and system-level analytics are landing in the same place. Not knowing more - reasoning better, and staying accountable to the outcome.
great take. this is essentially about asking better and broader questions :)