Dashboarding for Machine Learning-based Clinical Decision Support Implementation and Monitoring Toolkit


Background

Machine learning-based clinical decision support (ML-CDS) tools hold enormous promise to simultaneously improve clinical care and reduce clinician burden. ML-CDS tools can distill the abundance of data in modern electronic health records (EHRs) into an actionable recommendation that informs medical decision making, without the provider needing to perform dozens of time-consuming screenings for myriad patient conditions. In this way, ML-CDS can streamline the process of identifying the right care for the right patient and presenting that information to providers at the right time. However, many challenges with ML-CDS still exist, and these tools require monitoring during and after implementation to ensure that they are functioning correctly. Threats to correct functioning include both endogenous and exogenous factors, such as seasonal changes in health conditions, regulatory demands, and workforce pressures, all of which can cause upstream and downstream changes to the work system that alter the efficacy of ML-CDS interventions. Recently, the need for robust monitoring of ML-CDS interventions has been a topic of discussion (Bedoya et al., 2022).

This toolkit provides one possible response to this need. It contains a rough template for a dashboard to monitor ML-CDS tools during and after implementation. The dashboard was developed in the context of an ML-CDS intervention in the Emergency Department (ED) that identifies patients at high risk for an outpatient fall and prompts providers to refer qualifying patients to a Mobility and Falls Clinic in the health system to ameliorate their fall risk. The dashboard takes a “zoom-in/zoom-out” approach: we zoom out to track overall patient flow through the referral process, and we zoom in at specific decision-making points. We use an alluvial plot to get a bird’s-eye view of how many patients are completing the referral visit, as well as when and why patients are not completing it. At the individual decision points, we use statistical process control charts to distinguish signal from statistical noise and identify data drift or other endogenous threats to our process.
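
To make the zoom-out view concrete, below is a minimal sketch of an alluvial plot built with the ggalluvial package. The step names, counts, and data frame layout are hypothetical stand-ins; the toolkit’s graphs file contains the production versions of these plots.

```r
# A minimal sketch, assuming a hypothetical summary data frame with one row
# per pathway through the referral process and a count of encounters.
library(ggplot2)
library(ggalluvial)

flow <- data.frame(
  alerted   = "Alerted",  # recycled across all rows
  referred  = c("Referred", "Referred", "Not referred"),
  completed = c("Visit completed", "Visit not completed", "Not referred"),
  n         = c(120, 45, 80)
)

ggplot(flow, aes(axis1 = alerted, axis2 = referred, axis3 = completed, y = n)) +
  geom_alluvium(aes(fill = completed)) +
  geom_stratum() +
  geom_text(stat = "stratum", aes(label = after_stat(stratum)), size = 3) +
  scale_x_discrete(limits = c("Alerted", "Referred", "Completed")) +
  labs(y = "Encounters", fill = "Final status",
       title = "Patient flow through the referral pathway")
```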

Data Model and Code Flow

The data needed for similar dashboards will vary by the ML-CDS being monitored, but generally, data should be at the patient-encounter level of cardinality and follow the CDS recommendation from the time the patient was identified through to either clinical completion (e.g., a completed referral) or abandonment (e.g., a patient being determined to be contraindicated for the recommended clinical therapy after further clinician review). In this sample dashboard, we used data from the EHR, supplemented with manual review data from REDCap. Data were extracted from the EHR, and a combination of free-text and discrete data was parsed using regular expressions to guess the most likely patient outcome (e.g., screened out due to being in hospice care). That guess was then imported into REDCap, where implementation team members reviewed the patient’s chart to verify or correct the patient outcome. EHR and REDCap data were merged again as part of the dashboard code.
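
As an illustration of this guess-then-verify pattern, the sketch below guesses outcomes with regular expressions and then merges verified REDCap outcomes back onto the EHR extract. The column names, patterns, and outcome categories are illustrative assumptions, not the toolkit’s actual parsing rules (those live in its data_io file).

```r
library(dplyr)

# Hypothetical outcome-guessing step: regular expressions over free text
# produce a best-guess outcome for each encounter. Order matters: "declin"
# is checked before "refer" so declined referrals are not misclassified.
guess_outcome <- function(note_text) {
  case_when(
    grepl("hospice", note_text, ignore.case = TRUE) ~ "Screened out: hospice",
    grepl("declin",  note_text, ignore.case = TRUE) ~ "Patient declined referral",
    grepl("refer",   note_text, ignore.case = TRUE) ~ "Referral placed",
    TRUE                                            ~ "Needs manual review"
  )
}

ehr <- data.frame(
  encounter_id = c(101, 102, 103),
  note = c("Patient enrolled in hospice care",
           "Referral to falls clinic placed",
           "Patient declined the referral")
)
ehr$guessed_outcome <- guess_outcome(ehr$note)

# Guesses are sent to REDCap for chart review; verified outcomes come back
# and are merged onto the EHR extract by encounter ID.
redcap <- data.frame(
  encounter_id = c(101, 102, 103),
  verified_outcome = c("Screened out: hospice",
                       "Referral placed",
                       "Patient declined referral")
)
analysis <- merge(ehr, redcap, by = "encounter_id")
```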

Code for the dashboard is modularized into four primary files. First, the sqlgen file contains R functions to compile dynamic SQL to pull the necessary data from the EHR; these functions are not used in this example but are used in our production dashboard. The data_io file contains R functions for retrieving and cleaning data. For the sample dashboard, data are contained in CSV files, but we retained as much of the code for data parsing as possible to demonstrate the blending of EHR and REDCap data. The graphs file contains functions for several of the graphs used throughout the dashboard. Finally, the RMD (R Markdown) file contains the code to generate an HTML dashboard using the R knitr package. For multi-site implementations, it can be useful to script out multiple sites at once using a file similar to the Script Dashboards file; in our production version, this file also handles uploading new data to REDCap.
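
For the multi-site pattern, a loop over rmarkdown::render is typically sufficient. The sketch below is a guess at the shape of such a driver script; the file name, output naming scheme, and site parameter are assumptions, and the real Script Dashboards file additionally uploads new data to REDCap.

```r
# Hypothetical driver script: render one HTML dashboard per site from a
# single parameterized RMarkdown file (assumes the Rmd declares a `site`
# entry under `params:` in its YAML header).
library(rmarkdown)

sites <- c("site_a", "site_b", "site_c")
for (s in sites) {
  render(
    input       = "dashboard.Rmd",
    output_file = paste0("dashboard_", s, ".html"),
    params      = list(site = s),
    envir       = new.env()  # isolate each site's knitting environment
  )
}
```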

Who should use this toolkit?

This toolkit is intended for clinical informaticists and clinical decision support (CDS) managers who need to monitor new CDS tools, especially those based on machine learning algorithms.

What does the toolkit contain?

This toolkit contains:

  1. R code using RMarkdown to make a reproducible dashboard
  2. CSV files of fake data to demonstrate the necessary data input for the dashboard
  3. A data dictionary

How should these tools be used?

The materials in this toolkit can be used to:

  • Start your own ML-CDS monitoring dashboard for specific ML-CDS tools
  • Incorporate alluvial plots or control charts into existing R-based dashboards (see the control chart sketch below)
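
For the second use, the sketch below draws a p-chart of a weekly referral-acceptance rate with the qcc package. The counts are fabricated for illustration; any decision point with a numerator and denominator per time period can be charted the same way.

```r
# Hypothetical p-chart for one decision point: of the encounters where the
# alert fired each week, how many led to an accepted referral? Points outside
# the control limits suggest a real process shift rather than random noise.
library(qcc)

accepted <- c(18, 22, 15, 25, 19, 9, 21, 23)   # accepted referrals per week
eligible <- c(40, 45, 38, 50, 41, 39, 44, 47)  # alert-eligible encounters per week

qcc(accepted, sizes = eligible, type = "p",
    labels = paste0("W", seq_along(eligible)),
    title  = "Weekly referral acceptance rate (p-chart)")
```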

Customizing this toolkit

This toolkit is only a template, and you will need to use data scientist resources within your organization to customize this code to identify and quantify outcomes germane to your specific ML-CDS interventions. The code is modularized in such a way as to make it relatively easy to adapt the content for use in your organization.

We hope that this configuration and toolkit inspires useful ideas for how to monitor your ML-CDS interventions, and we are happy to collaborate or answer questions.

Please send questions, comments and suggestions to HIPxChange@hip.wisc.edu.

Development of this toolkit

The Dashboarding for Machine Learning-based Clinical Decision Support Implementation and Monitoring Toolkit was developed by researchers and clinicians (Principal Investigator: Brian W Patterson, MD, MPH) from the BerbeeWalsh Department of Emergency Medicine at the University of Wisconsin-Madison School of Medicine and Public Health.

This project was supported by the Agency for Healthcare Research and Quality grant number R18HS027735 (Patterson, PI). Additional support was provided by the University of Wisconsin School of Medicine and Public Health’s Health Innovation Program (HIP). The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality or other funders.

References

Bedoya AD, Economou-Zavlanos NJ, Goldstein BA, et al. A framework for the oversight and local deployment of safe and highly scalable machine learning-based clinical decision support. J Am Med Inform Assoc. 2022;29(9):1631-1636.

Toolkit Citation

Hekman DJ, Maru AP, Patterson BW. Dashboarding for Machine Learning-based Clinical Decision Support Implementation and Monitoring. BerbeeWalsh Department of Emergency Medicine at the University of Wisconsin-Madison School of Medicine and Public Health. Madison, WI; 2023. Available at: http://www.hipxchange.org/CDSDashboard

About the Authors

Dann Hekman is a data scientist with the BerbeeWalsh Department of Emergency Medicine and is involved in many projects related to emergency department operations and quality measurement.

Apoorva Maru joined the BerbeeWalsh Department of Emergency Medicine in September 2017 as an Associate Research Specialist. He graduated from California State University, Fullerton with a bachelor’s degree in biological sciences and sociology. During his undergraduate study, Apoorva was a supervisor at the Social Science Research Center, working on multiple policy-impacting projects ranging from transportation issues to childcare. Outside of work, Apoorva enjoys reading and spending time in nature.

Brian Patterson, MD, MPH, is an Assistant Professor with the University of Wisconsin-Madison BerbeeWalsh Department of Emergency Medicine and Physician Director of Predictive Analytics for the BerbeeWalsh Emergency Department at UW Health. Dr. Patterson’s research aims to use informatics approaches, including machine learning for risk stratification and computerized decision support, to improve older adults’ transitions to outpatient care following ED visits. His current research, funded by an AHRQ R18, aims to use these methodologies to identify older adults at high risk of falls and improve their care both in the ED and after discharge.