EHR Design and Malpractice Risk

by Jerome Carter on August 24, 2015

EHR adoption has increased significantly since the passage of HITECH. With increased EHR use, as with any technology, there are bound to be unintended consequences. The same system that eliminates bad handwriting as a safety issue may introduce easily misinterpreted screens or easily misused default values. EHR-related malpractice cases are rising along with adoption rates. The interplay between EHR design and malpractice risk is necessarily complex, as teasing out poor design choices from improper use can be difficult. From a design standpoint, malpractice data, along with safety, workflow, and human factors research, represent another rich source of design hints.

The Doctors Company, a physician-owned insurer, offers a front-line assessment of malpractice claims associated with EHR use through its newsletter, The Doctor’s Advocate.

David Troxel, MD, author of the feature, explains that the company used a 15-term coding system to categorize claims. Claim history and coding methods are described as follows:

Due to the three- to four-year lag time between an adverse event and a claim being filed, however, EHR-related claims have only recently begun to appear. In 2013, we began coding closed claims using 15 EHR contributing factor codes (eight for system factors and seven for user factors) developed by CRICO Strategies for its Comprehensive Risk Intelligence Tool (CRIT).

In 2013, The Doctors Company closed 28 claims in which the EHR was a contributing factor, and we closed another 26 claims in the first two quarters of 2014. During a pilot study to evaluate CRICO’s EHR codes, 43 additional claims closed by The Doctors Company were identified (22 from 2012, 19 from 2011, and 2 from 2007–2010). These 97 EHR-related claims closed from January 2007 through June 2014 are the subject of this analysis.

EHR-related factors contributed to 0.9 percent of all claims closed by The Doctors Company from January 2007 through June 2014. User factors contributed to 64 percent of these EHR-related claims, and system factors contributed to 42 percent.

While the total number of claims is small, the trend is clear. Only two claims were filed from 2007–2010, rising to 28 for all of 2013, with another 26 in just the first six months of 2014. Clearly, the number of claims is tied to increased EHR use, but I wonder how the legal community’s familiarity with EHR flaws affects the claim rate. Will more savvy lawyers lead to even more cases? It would be nice to know the product associated with each claim.

The coding system divided claims into two categories — system-related and user-related. System-related issues, with the percentage of claims in each category, are listed below.

10% Failure of system design.
9% Electronic systems/technology failure.
7% Lack of EHR alert/alarm/decision support.
6% System failure—electronic data routing.
4% Insufficient scope/area for documentation.
3% Fragmented EHR.
0% Lack of integration/incompatible systems.
0% Failure to ensure EHR security.

Thankfully, case examples were provided for some of the categories. Here are a few.

Claim: Lack of EHR Drug Alert
An elderly female saw an otolaryngologist for ear/nose complaints. The physician intended to order Flonase nasal spray. Patient filled the prescription and took it as directed. Ten days later, she went to the ER for dizziness. Two weeks later, the pharmacy sent a refill to the physician at his request. It was for Flomax (for enlarged prostate)—which has a side effect of hypotension. When ordering, the physician typed “FLO” in the medication order screen. The EHR automatched Flomax, and the physician selected it. Flomax is not FDA-approved for females. There was no EHR Drug Alert available for gender.

Claim: Insufficient Area for Documentation; Drop-Down Menu
A female had a bladder sling inserted for urinary incontinence. Her surgeon was assisted by a proctor surgeon who was representing the product manufacturer and training the patient’s surgeon on the procedure. The patient was informed that another physician would be assisting. In the recovery room, there was blood in the Foley catheter, so the patient was returned to surgery. The bladder had been punctured by the sling. The proctor had approved the sling’s placement. The circulating nurse did not document the proctor’s presence in the OR due to lack of an option in the EHR drop-down menu. There was no space for a free-text narrative to document that the patient was informed of the proctor’s presence.

Claim: Drop-Down Menu
A patient was seen by her physician for pain management with trigger point injections of opioids. The physician ordered morphine sulfate (MS) 15 mg every eight hours. In the EHR, the drop-down menu offered MS 15 mg followed by MS 200 mg. The physician inadvertently selected MS 200 mg and did not recheck before completing the order. The patient filled the prescription, took one MS along with Xanax, and developed slurred speech—resulting in an ER visit with overnight observation.

The first case is a validation algorithm issue. At a minimum, age and gender checking is important for safety. Such checks should be routinely applied for medication and test orders. Transgender patients may make the algorithms that do these checks more complex (transgender women may still have a prostate gland).
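The kind of demographic validation check described above can be sketched in a few lines. This is a hypothetical illustration only — the drug metadata (sex restrictions, minimum ages) is invented for the example and not drawn from any real formulary, and a production check would need to account for anatomical inventory rather than recorded gender alone.

```python
# Hypothetical demographic validation for medication orders.
# DRUG_RULES is illustrative metadata, not a real formulary.
DRUG_RULES = {
    "Flomax": {"sex": {"male"}, "min_age": 18},   # not FDA-approved for females
    "Flonase": {"sex": {"male", "female"}, "min_age": 4},
}

def check_order(drug, patient_sex, patient_age):
    """Return a list of warnings for a proposed medication order."""
    warnings = []
    rule = DRUG_RULES.get(drug)
    if rule is None:
        return warnings  # no metadata for this drug; nothing to check
    if patient_sex not in rule["sex"]:
        warnings.append(f"{drug} is not indicated for {patient_sex} patients")
    if patient_age < rule["min_age"]:
        warnings.append(f"{drug} is not indicated under age {rule['min_age']}")
    return warnings

# The order in the first case would have triggered a warning:
print(check_order("Flomax", "female", 78))
```

Even this trivial check would have caught the Flomax automatch error at order entry, before the prescription ever reached the pharmacy.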

The second case has both data and interface issues, while the third is more complex than a simple interface problem. In the second case, the absence of both a drop-down option and a narrative area points to a gap in the EHR’s data schema. Seemingly, the data elements required to support these options were overlooked during system design; thus, no matching interface elements were present.

It is nearly impossible to anticipate every single data element that will ever go into an EHR. Providing a general “comments” area on any screen (or by way of a pop-up) is one way to make missing elements less problematic while allowing for more graceful failures. This approach solves two problems. First, it allows users some flexibility in data capture and second, it provides a type of feedback to the developer. Fortunately, comment fields are relatively easy to add.
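The "general comments area" idea can be modeled as a free-text annotation attachable to any record, rather than a field added screen by screen. The sketch below is a minimal in-memory illustration under assumed names (`Comment`, `add_comment`, the entity types) — a real EHR would persist this to its database and audit it.

```python
# Minimal sketch: a free-text comment attachable to any EHR record,
# so data elements missed during schema design can still be captured.
# All names and entity types here are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Comment:
    entity_type: str      # e.g. "operative_note", "medication_order"
    entity_id: int        # key of the record being annotated
    author: str
    text: str
    created: datetime = field(default_factory=datetime.utcnow)

comments = []  # stand-in for a persistent store

def add_comment(entity_type, entity_id, author, text):
    """Attach a free-text comment to any record in the system."""
    c = Comment(entity_type, entity_id, author, text)
    comments.append(c)
    return c

# In the second case, the circulating nurse could have documented
# the proctor's presence despite the missing drop-down option:
add_comment("operative_note", 1042, "RN Smith",
            "Proctor surgeon present in OR; patient informed preoperatively.")
```

Because the comment is keyed by entity type and id rather than by screen, it degrades gracefully: anything the schema failed to anticipate still has a documented home, and recurring comment patterns signal to developers which structured elements to add next.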

The final case is more difficult than it seems. One could provide an alert for dosages greater than a certain amount. However, in many systems, alerts tied to drugs are turned off or filtered. In such cases, the alert would likely not be enabled. The greater question is how to trap true errors without causing alert fatigue. Here machine learning might be helpful. For example, rarely used doses would be noticed and alerts given only for those beyond the norm for that provider or site. Of course, now we are talking about a whole new level of drug alerting.
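The "alert only on doses outside the norm" idea does not require heavy machinery to prototype; even simple frequency statistics over local ordering history capture it. The sketch below is a speculative illustration — the threshold, minimum history size, and function names are all assumptions, not any vendor's actual alerting logic.

```python
# Speculative sketch of outlier-based dose alerting: rather than a
# fixed dose threshold, alert only when the ordered dose is rare in
# this site's own ordering history. Thresholds are illustrative.
from collections import Counter

history = Counter()  # (drug, dose_mg) -> number of past orders

def record_order(drug, dose_mg):
    history[(drug, dose_mg)] += 1

def dose_is_outlier(drug, dose_mg, min_share=0.05, min_history=20):
    """Alert if this dose accounts for <5% of past orders of the drug."""
    total = sum(n for (d, _), n in history.items() if d == drug)
    if total < min_history:
        return False  # too little history to judge rarity
    share = history[(drug, dose_mg)] / total
    return share < min_share

# Site history: MS 15 mg is ordered routinely, 200 mg almost never.
for _ in range(50):
    record_order("morphine sulfate", 15)
record_order("morphine sulfate", 200)

print(dose_is_outlier("morphine sulfate", 200))  # True: rare dose, alert
print(dose_is_outlier("morphine sulfate", 15))   # False: common dose, no alert
```

Because the 15 mg dose never fires an alert, the approach sidesteps the fatigue that gets blanket dose alerts disabled, while still trapping the rare MS 200 mg selection from the third case.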

Errors attributed primarily to users accounted for 64 percent of claims.

16% Incorrect information in the EHR.
15% Hybrid health records/EHR conversion.
13% Prepopulating/copy and paste.
7% EHR training/education.
7% EHR user error (other than data entry).
3% EHR alert issues/fatigue.
1% EHR/CPOE workarounds.

Of this set, copy-and-paste issues accounted for 13 percent. The case below illustrates the dangers of careless use of this standard feature.

Claim: Copy and Paste
A toddler was taken to a country where tuberculosis was prevalent. After the trip, he presented with fever, rash, and fussiness. The physician considered bug bite or flu and treated the child with fluids, antibiotics, and flu meds. His office EHR progress note indicated there was no tuberculosis exposure. The physician copied and pasted this information during subsequent office visits with no revision to note travel to a country with tuberculosis. Two weeks later, the child was diagnosed in the ER with tuberculous meningitis. He suffered permanent and severe cognitive deficits.

How to deal with C&P issues? Obviously, the feature is a convenience to users, but what is the underlying design flaw? The more time-consuming it is to generate a note, the more attractive C&P becomes. I have always thought that a clinical shorthand language, or some type of markup, would be worth investigating for note-writing. Looking back, my patient notes always followed a general format, and the majority of content in most notes consisted of normal findings. I used a paper template to record findings when interacting with patients and then dictated my notes from that template. An analysis of my notes would have turned up a fairly limited set of common terms. Likely, this is true for many others, especially in subspecialty areas. It’s worth considering. The other approach would be to try to make C&P smarter. Something tells me the long-term payoff would be higher with a clinical markup language.
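The clinical shorthand idea can be sketched as a simple expander: short codes map to full normal-finding text, so routine content is regenerated fresh at each visit instead of copied forward from a prior note. The vocabulary below is entirely invented for illustration — a real shorthand would be built from the limited set of common terms an analysis of actual notes turns up.

```python
# Speculative sketch of a clinical shorthand expander. The codes and
# their expansions are invented examples, not a real vocabulary.
SHORTHAND = {
    "heent-nl": "HEENT: normocephalic, atraumatic; oropharynx clear.",
    "lungs-cta": "Lungs: clear to auscultation bilaterally.",
    "cv-rrr": "CV: regular rate and rhythm, no murmurs or gallops.",
}

def expand_note(shorthand_line):
    """Expand a line of shorthand codes into full narrative sentences."""
    parts = []
    for token in shorthand_line.split():
        # Unknown codes are flagged rather than silently dropped.
        parts.append(SHORTHAND.get(token, f"[unknown code: {token}]"))
    return "\n".join(parts)

print(expand_note("heent-nl lungs-cta cv-rrr"))
```

Because each note is rebuilt from the codes entered today, stale assertions (like the copied "no tuberculosis exposure" in the case above) never carry forward by default — the physician must actively restate a finding for it to appear.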

EHR training issues account for 7% of the claims. Two cases are offered in the article.

Claim: EHR Training
A pregnant non-English-speaking female with gestational diabetes was referred for an ultrasound (US) to estimate fetal weight. Her physician had planned a C-section if the baby was >4500 grams. The US report was sent by the laboratory to the hospital’s EHR. The next day, the patient went to the hospital in labor. Her physician reviewed his six-week-prior prenatal written record but was not trained on the hospital’s EHR and had no password—so he did not see the US report. He performed a vaginal delivery, complicated by shoulder dystocia that resulted in brachial plexus injury. The baby’s weight was 4640 grams.

Claim: EHR Training
A female presented to the ER with complaints of abdominal pain, nausea, and vomiting. An ovarian cyst had been removed two years prior. The emergency physician ordered an abdominal CT scan and called a gynecologist to evaluate the patient. The gynecologist reviewed a CT scan in the EHR that was later found to be the old scan showing the ovarian cyst. The patient was taken to surgery. No cyst was found, and the patient developed a MRSA infection. The gynecologist had not been trained on the new system so did not find the new CT scan that was available.

EHR training times are directly related to the intuitiveness and readability of the system. Complex menus and the unexpected juxtaposition of information make any system hard to learn. User-centered design (UCD) is the answer here; however, UCD is not a panacea. Badly designed systems require an overhaul, not tinkering. When applied from the very beginning, UCD will likely result in the best system the designers are capable of building (design is very much a creative process). For renovating legacy systems, UCD is likely to be much less effective unless the goal is a substantial rewrite. In the claims above, either restricting practice privileges until training was complete or building systems with much shorter training times would likely have prevented the outcomes. Longer term, more intuitive, usable systems are the better approach.

As the number of EHR systems in the wild grows, the impact of design decisions will become more evident. Case studies that tie EHR designs to clinical outcomes are very much needed, both to guide future development and to formulate software engineering guidelines for clinical software. I wish The Doctors Company and other insurers would make all cases available, as doing so would benefit patients, clinicians, and developers.

1. Troxel DB. Analysis of EHR contributing factors in medical professional liability claims. The Doctor’s Advocate, First Quarter 2015.
