Empowering individual health systems to validate AI models
Few would have expected Epic to be a first mover in the decentralization and democratization of AI in healthcare.
Epic recently announced an open source “AI trust and assurance software suite” that health systems can use to validate and monitor ML models integrated into the EHR. This tool automates the most time-consuming and repeatable steps of model validation, such as collecting, aggregating, and mapping data and metrics. By streamlining these processes, it enables health systems without robust data science teams to perform the necessary validations and ensure they can use ML models safely and effectively for their patients.
What does model validation entail, and why is this software significant?
Consider the following example: a hospital interested in implementing a new readmission risk model must first ensure that the model accurately predicts readmissions for its own patient population without significant biases. This process requires building a clean dataset covering a sufficiently large cohort of patients and identifying the "true positives"—the patients who were actually readmitted—using a standard definition (e.g., inpatient readmission within 30 days). The ML model is then used to generate a prediction for each patient in that cohort (typically a numerical risk score classified as "likely or unlikely to be readmitted" based on some pre-specified cutoff). The hospital must compare these predictions against the actual outcomes to assess the model's recall and precision (measures of accuracy), as well as potential biases across different patient subgroups.
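The comparison step above can be sketched in a few lines of code. This is a minimal illustration of the underlying arithmetic, not Epic's tooling; the function names, the tuple layout, and the 0.5 cutoff are all assumptions chosen for the example.

```python
def classify(score, cutoff=0.5):
    """Map a numerical risk score to a binary 'likely readmitted' label."""
    return score >= cutoff

def validate(records, cutoff=0.5):
    """Compare model predictions against actual outcomes.

    records: list of (risk_score, actually_readmitted, subgroup) tuples,
    where subgroup is any label used for bias checks (illustrative).
    Returns (precision, recall) over the cohort.
    """
    tp = fp = fn = 0
    for score, readmitted, _subgroup in records:
        predicted = classify(score, cutoff)
        if predicted and readmitted:
            tp += 1          # true positive: flagged and readmitted
        elif predicted and not readmitted:
            fp += 1          # false positive: flagged but not readmitted
        elif not predicted and readmitted:
            fn += 1          # false negative: missed readmission
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def validate_by_subgroup(records, cutoff=0.5):
    """Repeat the comparison per subgroup to surface performance gaps."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[2], []).append(rec)
    return {g: validate(rs, cutoff) for g, rs in groups.items()}
```

Running `validate_by_subgroup` on the same cohort split by, say, payer type or ZIP code is the basic mechanics of the bias check: the overall numbers can look acceptable while one subgroup's recall is materially worse.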
This process is resource-intensive, requiring significant data management and analytical expertise. Epic's software automates key aspects of it, making validation accessible to health systems that lack extensive data science resources. Those systems can now leverage their knowledge of local patient populations, clinical workflows, and organizational needs to better assess a model's safety and effectiveness. For instance, local clinicians might know that certain socioeconomic factors or comorbidities prevalent in their community significantly influence readmission rates, or that workflow constraints demand a certain level of precision for the model to be used effectively. Real-time monitoring of model performance also enables models and workflows to be adjusted quickly based on local needs.
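To make the monitoring idea concrete, here is a minimal sketch of a rolling-window performance monitor that flags when precision drops below a locally chosen floor. The class name, window size, and threshold are illustrative assumptions a health system would tune to its own workflow, not anything specified by Epic.

```python
from collections import deque

class PerformanceMonitor:
    """Track precision over the most recent resolved predictions and
    flag when it falls below a locally chosen minimum (illustrative)."""

    def __init__(self, window=100, min_precision=0.3):
        # Keep only the most recent `window` resolved predictions.
        self.window = deque(maxlen=window)
        self.min_precision = min_precision

    def record(self, predicted_positive, readmitted):
        """Log one resolved prediction, e.g. once the 30-day window closes."""
        self.window.append((predicted_positive, readmitted))

    def precision(self):
        """Fraction of flagged patients who were actually readmitted."""
        flagged = [readmitted for predicted, readmitted in self.window if predicted]
        if not flagged:
            return None  # no positive predictions resolved yet
        return sum(flagged) / len(flagged)

    def needs_review(self):
        """True when recent precision has drifted below the local floor."""
        p = self.precision()
        return p is not None and p < self.min_precision
```

A monitor like this is what turns local knowledge into action: if clinicians know their workflow can only absorb a given false-positive burden, the threshold encodes that constraint, and a breach prompts recalibrating the cutoff or retraining the model.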
The ability to assess the model locally empowers the users of these models — the clinicians and staff caring for the patients the AI is intended to benefit — to have meaningful input into how AI is applied to their work. Their input also gives the industry the real-world experience needed to inform standards and frameworks for governing the use of AI in healthcare.
What’s in it for Epic?
It would be naive to assume that Epic does not have strategic motives behind this release. It has been interesting to watch Epic's evolution from an EHR company into a platform company, and I believe this move further solidifies its positioning as the dominant healthcare technology platform. Epic acknowledges that it may not be the sole producer of AI for healthcare, but it would like all producers and users of AI in healthcare to depend on Epic (or a standard that Epic establishes). I am interested to see the extent to which this validation tool will be adopted; given Epic's scale, depth of workflow integration, and customer stickiness among health systems, I would bet in its favor.