Bugs and EHR Systems: Engineering Matters…

by Jerome Carter on July 4, 2016

Testing is one of the most tedious and difficult aspects of software development, and the more complex the system, the more problematic the testing. Bugs always happen. If one is lucky, they are easily spotted and have few deleterious effects. Unfortunately, bugs occur in many forms that are hard to spot: miscoded algorithms, incorrect SQL queries, improperly ordered parameters, to name a few. Of course, software developers recognize this problem and do the best they can to address it. Unit testing is de rigueur these days, with every decent development environment providing testing support. Yet while all forms of testing are helpful, they cannot completely eliminate bugs, because testing has limits: it is impossible to exercise every possible pathway through a software system and every condition that users might encounter.
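To make that limit concrete, here is a minimal, hypothetical sketch (the function name and values are invented for illustration) of how a swapped-parameter bug can sail through a passing unit test:

```python
# Hypothetical sketch: a parameter-ordering bug that a shallow unit test misses.
# The function and values are invented for illustration.

def weight_based_dose(dose_mg_per_kg: float, weight_kg: float) -> float:
    """Total dose in mg: per-kg dose multiplied by patient weight."""
    return dose_mg_per_kg * weight_kg

def test_weight_based_dose():
    # The caller has swapped the arguments (weight first, dose second).
    # Because multiplication is commutative, the numeric result is the
    # same, so the assertion passes and the ordering bug survives --
    # until the formula changes (say, a per-kg dose cap is added) and
    # the hidden swap suddenly produces wrong results in production.
    total = weight_based_dose(70.0, 0.5)  # intended: dose=0.5, weight=70
    assert total == 35.0

test_weight_based_dose()
```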

Connections between major system components are another source of errors. For example, UI-algorithm or algorithm-database communication can create hard-to-find bugs. I remember one bug that returned incorrect query results seemingly at random because the generated SQL string occasionally contained an empty string. Data types in a programming language and a DBMS may differ, and translating between the two can introduce bugs even while everything looks fine on each end. Such problems can take hours to locate because they are so subtle.
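As a rough illustration (this is not the actual code from that system, just a sketch of the failure mode), consider how an occasionally empty filter value can silently change the meaning of a generated SQL string:

```python
# Sketch of a boundary bug between application code and the database:
# an occasionally empty value changes the meaning of the generated SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, drug TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "warfarin"), (2, "aspirin")])

def find_orders(drug_filter: str):
    # Buggy: when drug_filter is "", the clause becomes drug LIKE '%%',
    # which matches every row instead of none -- "seemingly random"
    # wrong results, depending entirely on what upstream code passes in.
    sql = f"SELECT id FROM orders WHERE drug LIKE '%{drug_filter}%'"
    return conn.execute(sql).fetchall()

print(find_orders("warf"))  # [(1,)] -- looks correct
print(find_orders(""))      # [(1,), (2,)] -- silently returns everything
```

Each side looks fine in isolation; the bug lives in the translation between them, which is exactly why it is so hard to find.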

Bugs came to mind while reading about CPOE vulnerabilities. Slight and colleagues (1) put together a series of test cases for CPOE systems based on their review of more than 60,000 error reports.

We identified a total of 338 error reports as potential candidates for test scenarios and narrowed these down by combining similar scenario types (i.e., orders for drug to which patient was allergic) and prioritized based on preselected criteria of (a) frequency, (b) seriousness, and (c) testability. We then attempted to determine the extent to which current CPOE systems were vulnerable to similar errors. These test scenarios described 13 categories of erroneous or problematic orders arising from realistic clinical encounters including: wrong drug, amount, dose, route, units, or frequency errors; omission errors; duplicate drug or therapy; adjacency errors; drug allergies; drug-drug interactions; and drug-disease contraindications.

Between August 2011 and March 2012, 13 CPOE systems were tested at 16 sites.  The results are telling.

We found an array of CPOE systems often failed to detect and prevent previously documented and potentially dangerous medication errors. The generation of electronic alert warnings varied widely between systems, and depended on how the order information was entered into the system (i.e., in a structured or unstructured way), whether a specific alert functionality (e.g., duplicate-drug checking) was operational in the system, and which drugs or drug combinations were included in the CDS algorithms. The wording of alert warnings was often found to be confusing, with unrelated warnings appearing on the same screen as those more relevant to the current erroneous entry that was made. The timing of alert warnings differed across CPOE systems, with many dangerous drug-drug interaction warnings displayed only after the order was placed. Alert warnings also varied in their level of severity in different systems and even within the same institution (outpatient vs inpatient system). Testers demonstrated a variety of workarounds which they had discovered (and used in their practice) to enter such erroneous orders such as (i) using the “other” option, (ii) making free text entries in the special instructions or comments field, (iii) changing the default settings, and (iv) selecting “off formulary” drugs. Thus, “free text” represented both a blessing (ability to overcome frustrations in entering desired orders, and communicating intent directly with pharmacy) and curse (circumvented CDS safety checks).

Testing revealed a range of CDS protections that were either switched off or non-existent in the different CPOE systems.

While these findings come from systems as they existed four years ago, they remain pertinent. All of the systems were in production at the time of testing and had passed vendor testing protocols, which makes the findings a serious patient safety issue. If you are wondering why results from four years ago matter today, the answer lies in what is being reported: many of these failures are bugs. But what kind? Are they the result of data type mismatches, miscoded terms, incorrect algorithms, incorrect SQL queries, UI-algorithm glitches, or something else? Unlike features and functionality, which one can rightly assume will change over time, bugs stay until someone discovers and removes them. Further, these bugs were discovered through a narrowly focused testing approach based on prior evidence of error types; they were unlikely to have been found through serendipity.

Clearly these bugs survived vendor in-house testing, so what is the answer? No vendor has the time or money to test for EVERY possible type of bug or error. Even if one did, it would be limited by ignorance; after all, how do you know what you don't know? Certification testing is not the answer because it tests only for features, not the quality or coherence of the underlying system. User-centered design, per se, is not the answer either. What does make sense as an initial approach is a set of vendor-independent test cases that can be used to exercise every CPOE system. Better post-market surveillance, in which every error condition or bug is immediately captured and reported, would also help. But there is a much larger question here: are there ways to test complex clinical systems such that errors can be trapped and fixed before they reach production? There is no mechanism to enforce quality requirements, yet quality issues clearly exist.
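As a sketch of what vendor-independent test cases might look like (all names, drugs, and the adapter signature here are hypothetical, loosely modeled on the scenario categories Slight and colleagues describe), the scenarios could be expressed as plain data and run against any system through a thin vendor-supplied adapter:

```python
# Hypothetical sketch of vendor-independent CPOE test cases: scenarios as
# plain data, run against any system through a vendor-supplied adapter.
# All names and scenarios are invented for illustration.
from dataclasses import dataclass

@dataclass
class OrderScenario:
    description: str
    allergies: list[str]
    current_meds: list[str]
    new_order: str
    expect_alert: bool  # should the system raise a warning?

SCENARIOS = [
    OrderScenario("order for a drug related to a documented allergy",
                  ["penicillin"], [], "amoxicillin", True),
    OrderScenario("duplicate therapy", [], ["warfarin"], "warfarin", True),
    OrderScenario("routine order, no conflicts", [], [], "acetaminophen", False),
]

def run_all(place_order) -> None:
    """place_order(allergies, current_meds, drug) returns the alerts raised.
    Each vendor implements this adapter for its own order-entry interface."""
    for s in SCENARIOS:
        raised = bool(place_order(s.allergies, s.current_meds, s.new_order))
        print(("PASS" if raised == s.expect_alert else "FAIL") + f": {s.description}")
```

The point is not this particular framework; it is that scenarios expressed as data, rather than as steps wired to one vendor's screens, are portable across every CPOE system.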

Imagine someone being able to start a company that builds commercial aircraft without ever having to conform to any external quality requirements. Standards for aircraft have developed over decades of testing and of studying accidents and mishaps. Aerospace engineers are professionals dedicated to aircraft design, and they help to develop those standards. Clinical software is complex and is becoming an essential component of the healthcare delivery system. I find it amazing that there is no group of professionals dedicated to assuring the quality of clinical software. We need engineering standards for clinical software systems. This is not a call for regulation; rather, it is a call for a more formal approach to clinical software design and development.

Currently, we lack any sense of the best design choices for any aspect of clinical systems. There is no manual to consult for the ideal UI for neonatal nurses, no frequency distribution of the errors likely to occur with EHR problem lists, no guide for drug interaction algorithms. Just about everything known about complex clinical systems is proprietary, but intellectual property claims are not antagonistic to engineering principles; aerospace engineering holds no threat to Boeing. There was a time when EHR systems were mostly talked about but not used. Those days are past. We now live in an era in which lives depend on clinical systems.

The issues at hand go beyond usability and convenience to quality and safety: we need clinical software engineering as a discipline and formal guidelines for clinical software quality. General guidelines for software quality do exist, but they are general by design and not tied to the peculiarities of clinical systems. We need research, guidelines, best practices, and some type of conformance process specifically for clinical software quality.

Look up the definition of "EHR" and one finds something that describes an electronic system used by clinicians in caring for patients; that is, the EHR is defined in terms of features and functionality. Usability testing and user-centered design can address feature and functionality concerns. But safe, trustworthy software raises engineering concerns (quality, security, maintainability, scalability, reliability) that lie below the level users see. Lives are involved. The stuff below the surface counts.

1. Slight SP, Eguale T, Amato MG, Seger AC, Whitney DL, Bates DW, Schiff GD. The vulnerabilities of computerized physician order entry systems: a qualitative study. J Am Med Inform Assoc. 2016 Mar;23(2):311-6.

