Diagnostic Error, Results Management, and Software Design

by Jerome Carter on February 29, 2016 · 4 comments

As I have spent more time thinking about clinical software design, my ruminations have become more problem-focused. I have begun to look at specific care delivery problems and how changes in software design might help or hinder clinical work. Lately, the problem of diagnostic error as it relates to results management has captured my attention. Having dealt with results management headaches in practice, I find this is an issue that resonates with me.

When approaching a software design challenge, I like to start with high-level concepts before jumping into detailed design issues. Here, the question is: What is the optimal way to manage test results? When designing software for clinical work, it is helpful to pose two further questions. The first: What is it about managing results that is common (or should be) across all clinicians within a specific domain? The second: What necessarily varies among clinicians within a domain? Creating software that helps and doesn’t hinder requires answers to all three questions.

Fortunately, there is an ever-growing body of research on EHR systems and their impact (or lack thereof) on clinical work. Given my goal of creating a specification for the ultimate results management system, the recent IOM report, Improving Diagnosis in Health Care, turned out to be a great place to look for clues (1).

The report, released in 2015, offers a detailed look at the range of causes for diagnostic errors and includes an entire chapter devoted to information technology. One helpful discovery was a list of ways that electronic documentation might help with diagnostic errors (2). Two suggestions seemed particularly applicable to results management.

Tracking tests
Integrate management of diagnostic test results into note workflow to facilitate review, assessment, and responsive action as well as documentation of these steps.

Providing access to information sources
Provide instant access to knowledge resources through context-specific “infobuttons” triggered by keywords in notes that link user to relevant textbooks and guidelines.

These are good suggestions. The problem, of course, is how to implement such features within a software system in a way that clinicians would find helpful. What is the best way to “integrate management of test results into note workflow”? What does that mean in terms of software design?

A useful approach when looking for design principles within feature requests is analyzing the words used. Here, the key term is “management.” Many clinical systems provide feedback to clinicians via simple mechanisms such as color or font size (e.g., abnormals appear in red or in a larger font). While these design choices may be helpful in the moment for letting the clinician know something is wrong, they do nothing to prevent forgetting. Alerts that require some type of acknowledgment suffer from the same problem: nothing prevents the “out of sight, out of mind” problem. Both of these approaches are event-based in that they occur at a specific point in time and have no follow-up mechanism other than clinicians’ memories. Event-based notification schemes offer little management capability or support.
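To make the distinction concrete, here is a minimal sketch of the event-based pattern (all names here are hypothetical, not drawn from any actual EHR): the alert fires once, may demand an acknowledgment, and then the system forgets the result entirely.

```python
from dataclasses import dataclass

@dataclass
class Result:
    patient_id: str
    test_name: str
    abnormal: bool
    acknowledged: bool = False

def notify(result: Result) -> None:
    """Event-based handling: display the abnormal, record the acknowledgment."""
    if result.abnormal:
        print(f"ALERT: abnormal {result.test_name} for patient {result.patient_id}")
        result.acknowledged = True  # provider opens the message; the alert is "read"
    # And that is the end of it: no pending state, no timer, no check that
    # any follow-up was ever ordered. Clinician memory is the only safeguard.

notify(Result("p-001", "chest radiograph", abnormal=True))
```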

The shortcomings of event-based handling of results have been studied by Singh et al. (3), who examined how well clinicians responded to abnormal radiology results when using an EHR capable of alerting on abnormal results. They describe their methodology as follows:

We studied critical imaging alert notifications in the outpatient setting of a tertiary care VA facility from November 2007 to June 2008. Tracking software determined whether the alert was acknowledged (i.e. provider opened the message for viewing) within two weeks of transmission; acknowledged alerts were considered read. We reviewed medical records and contacted providers to determine timely follow-up actions (e.g. ordering a follow-up test or consultation) within 4 weeks of transmission.

Their findings demonstrate that even with alerting capability, clinically significant abnormal results fell through the cracks.

Of 123,638 studies (including radiographs, computed tomographic scans, ultrasonograms, magnetic resonance images, and mammograms), 1196 images (0.97%) generated alerts; 217 (18.1%) of these were unacknowledged. Alerts had a higher risk of being unacknowledged when the ordering HCPs were trainees (odds ratio [OR], 5.58; 95% confidence interval [CI], 2.86-10.89) and when dual-alert (>1 HCP alerted) as opposed to single-alert communication was used (OR, 2.02; 95% CI, 1.22-3.36). Timely follow-up was lacking in 92 (7.7% of all alerts) and was similar for acknowledged and unacknowledged alerts (7.3% vs 9.7%; P = .22). Risk for lack of timely follow-up was higher with dual-alert communication (OR, 1.99; 95% CI, 1.06-3.48) but lower when additional verbal communication was used by the radiologist (OR, 0.12; 95% CI, 0.04-0.38). Nearly all abnormal results lacking timely follow-up at 4 weeks were eventually found to have measurable clinical impact in terms of further diagnostic testing or treatment.

A better design choice for improving follow-up behavior is one in which results are actively managed, end-to-end, from initial order to an acceptable endpoint (i.e., a normal result or an appropriate follow-up on record). A clinical system with end-to-end management of results would offer greater safety than one that simply managed abnormals, since a test that is ordered but never performed is also a safety issue.

I once had a patient with a palpable breast mass for whom, because I was so sure it was malignant, I personally called and arranged the required studies and surgical consultation. After not hearing from the surgeon, who had promised to let me know what was planned, I discovered that the patient, out of fear, had done none of the studies and had not kept the appointment. Alerts for abnormals alone will not catch this type of safety issue.

A process-centric design would be the best approach for providing end-to-end results management. Such a design would make the workflow explicit: every order would be logged and tagged with an expiration date, which would fire an alarm should no result be recorded within the allotted time frame. Next, as results were returned, normal results would turn off expiration-date triggers, while abnormals would be tagged for alerting. However, alerts would be just the start of managing abnormals: each alert would require a specific acknowledgment in the form of an intended action. As part of the process, clinicians would be given to-do lists to manage required interventions and contextual access to information sources (e.g., infobuttons).
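As a sketch only, under assumed simplified types (Order, OrderState, and the daily sweep are my inventions for illustration, not a reference implementation), the design above might look something like this:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum, auto
from typing import List, Optional

class OrderState(Enum):
    PENDING = auto()        # ordered, no result recorded yet
    CLOSED_NORMAL = auto()  # normal result; expiration trigger disarmed
    NEEDS_ACTION = auto()   # abnormal result; awaiting an intended action
    EXPIRED = auto()        # no result within the allotted time frame

@dataclass
class Order:
    patient_id: str
    test_name: str
    ordered_on: date
    expires_on: date
    state: OrderState = OrderState.PENDING
    intended_action: Optional[str] = None

class ResultsManager:
    """End-to-end tracking: every order stays visible until an endpoint."""

    def __init__(self) -> None:
        self.orders: List[Order] = []

    def place_order(self, patient_id: str, test_name: str,
                    days_allowed: int = 28) -> Order:
        # Every order is logged and tagged with an expiration date.
        order = Order(patient_id, test_name, date.today(),
                      date.today() + timedelta(days=days_allowed))
        self.orders.append(order)
        return order

    def record_result(self, order: Order, abnormal: bool) -> None:
        # A normal result turns off the expiration-date trigger; an abnormal
        # opens a follow-up obligation rather than firing a one-time alert.
        order.state = (OrderState.NEEDS_ACTION if abnormal
                       else OrderState.CLOSED_NORMAL)

    def acknowledge(self, order: Order, intended_action: str) -> None:
        # Acknowledgment is accepted only as a concrete intended action,
        # which then joins the clinician's to-do list as an intervention.
        order.intended_action = intended_action

    def sweep(self, today: date) -> None:
        # A daily sweep replaces human memory: unresulted orders past their
        # expiration date raise an alarm instead of silently disappearing.
        for order in self.orders:
            if order.state is OrderState.PENDING and today > order.expires_on:
                order.state = OrderState.EXPIRED

    def todo_list(self) -> List[Order]:
        # Everything still demanding attention, end to end: tests never
        # done as well as abnormals lacking a documented intended action.
        return [o for o in self.orders
                if o.state is OrderState.EXPIRED
                or (o.state is OrderState.NEEDS_ACTION
                    and o.intended_action is None)]
```

Note that the EXPIRED state is what would have caught the breast-mass scenario above: a study ordered but never performed surfaces on the to-do list even though no abnormal result ever arrives.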

A few researchers have taken a stab at a process-based approach to managing results. Murphy et al. (4) studied results management within the context of delayed cancer diagnosis. Note the methods used.

We performed a cluster randomized controlled trial of primary care providers (PCPs) at two sites to test whether triggers that prospectively identify patients with potential delays in diagnostic evaluation for lung, colorectal, or prostate cancer can reduce time to follow-up diagnostic evaluation. Intervention steps included queries of the electronic health record repository for patients with abnormal findings and lack of associated follow-up actions, manual review of triggered records, and communication of this information to PCPs via secure e-mail and, if needed, phone calls to ensure message receipt. We compared times to diagnostic evaluation and proportions of patients followed up between intervention and control cohorts based on final review at 7 months.

The authors conclude:

Electronic trigger-based interventions seem to be effective in reducing time to diagnostic evaluation of colorectal and prostate cancer as well as improving the proportion of patients who receive follow-up. Similar interventions could improve timeliness of diagnosis of other serious conditions.

The authors describe a process that includes querying the EHR’s data repository, firing a trigger, and contacting the PCP if proper follow-up had not been recorded. This study was done in a setting where an EHR was already in use, and yet so much of the process was managed by people outside of the EHR! As the authors demonstrate, managing processes without workflow technology is doable, but far more labor-intensive than it has to be.
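As a rough illustration of that trigger logic (the flat record structure and field names here are assumptions of mine, not the study’s actual queries), the core selection step might look like:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Optional

@dataclass
class Record:
    patient_id: str
    finding: str                   # e.g., an abnormal PSA or positive FOBT
    finding_date: date
    followup_date: Optional[date]  # follow-up test or referral, if any

def trigger(records: List[Record], window_days: int = 60,
            today: Optional[date] = None) -> List[Record]:
    """Flag abnormal findings with no timely follow-up action on record."""
    today = today or date.today()
    flagged = []
    for r in records:
        deadline = r.finding_date + timedelta(days=window_days)
        missing = r.followup_date is None or r.followup_date > deadline
        if missing and today > deadline:
            flagged.append(r)
    return flagged  # candidates for manual review, then secure e-mail to the PCP
```

In the study, queries of this sort ran against the EHR repository, and the flagged records then went through manual review and provider notification, all steps that workflow technology could coordinate automatically.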

EHR systems are data-centric, and as such, they can be great for event-based support of clinical work, but if process support is needed, they have serious shortcomings. Current EHR systems lack workflow capability, so do not expect sophisticated process support to appear anytime soon.

Improving diagnostic decision support for test results requires moving beyond alerts. It requires process-centric software designs that make use of tools that fit into the workflows of busy clinicians. Results management is a process, not an event…

  1. National Academies of Sciences, Engineering, and Medicine. Improving Diagnosis in Health Care. Washington, DC: The National Academies Press; 2015.
  2. Schiff GD, Bates DW. Can electronic clinical documentation help prevent diagnostic errors? N Engl J Med. 2010;362(12):1066-1069.
  3. Singh H, Thomas EJ, Mani S, Sittig D, Arora H, Espadas D, Khan MM, Petersen LA. Timely follow-up of abnormal diagnostic imaging test results in an outpatient setting: are electronic medical records achieving their potential? Arch Intern Med. 2009 Sep 28;169(17):1578-86.
  4. Murphy DR, Wu L, Thomas EJ, Forjuoh SN, Meyer AN, Singh H. Electronic Trigger-Based Intervention to Reduce Delays in Diagnostic Evaluation for Cancer: A Cluster Randomized Controlled Trial. J Clin Oncol. 2015 Nov 1;33(31):3560-7.


@BobbyGvegas February 29, 2016 at 4:33 PM

Cited on my blog. I’m at HIMSS16 in Vegas this week, and will certainly pump your work.


Jerome Carter February 29, 2016 at 5:40 PM

Thanks! You may want to check out the new site http://www.clinicalswift.com


jim ryan February 29, 2016 at 1:28 PM

I agree with you. I have discovered this over the past couple of years using our problem/task-oriented system. Tasks are too fundamental an object type to be considered an add-on. The build we’re working on now follows Trello’s task-management model very heavily. We have collected over 6,000 tasks generated by our first-generation build and have begun to catalog them, both as part of care management and an encounter ontology and as FHIR resource types.


Jerome Carter February 29, 2016 at 5:41 PM

Hi Jim, have you written anything about your experiences?

