Fixing EHR Usability Requires More Than Doubling-Down on Usability Testing and UCD

by Jerome Carter on October 31, 2016 · 5 comments

The rise of scribes is but one sign that many EHR systems, as currently designed, make clinicians less productive and patient interactions more awkward. The remedies most touted by the Office of the National Coordinator for Health IT (ONC) and most observers focus on user-centered design and more comprehensive usability testing. However, can these methods alone actually address clinician complaints? I think not, and the reason is the complexity of the tasks that EHR systems must support.

Unlike simpler information systems such as e-commerce sites, music streaming services, or applications such as word processors, EHR systems are intended to handle a wide range of data types and support users performing varying sequences of complex tasks. As advanced by ONC and others, EHR systems, in addition to recording and presenting clinical data (i.e., basic paper chart functions), must also assist with clinical decision-making and quality improvement. The bottom line: it is far easier to create objective usability measures for an e-commerce site than to create similar metrics for a system that must serve doctors, nurses, dieticians, and respiratory therapists with equal aplomb. Every clinical professional has specific information needs and unique workflows, and complex tasks require sophisticated software systems.

It is reasonable for Amazon to offer the same interface to every user because all site visitors are doing the same things. EHR systems and clinical work present an entirely different set of circumstances. Creating a standard set of usability guidelines or UCD requirements that would capture ALL possible clinician workflows (or even the top 10 per clinician type) is a Herculean task, and one for which little requirements-quality guidance is available. While there have been many papers on clinical workflow, few offer anything resembling objective information that maps clinical work to formal representations suitable for software engineering requirements.

One need only look at the comments (laments?) of Ratwani and colleagues (1) in their discussion of the difficulties in comparing usability evaluations. The authors describe three barriers to usability comparisons. However, for those willing to admit it, they are also describing why extant UCD and usability approaches are difficult to apply to complex clinical systems intended for a range of users. Let’s look at each in turn.

Barriers to comparing the user-centered design process
Although many mistakenly believe that usability is simply determined by the design of the visual display, or user interface, a rigorous user-centered design process is based on a deep understanding of how front-line clinicians conduct their cognitive and task-oriented work, and leveraging this knowledge to guide design and development of the product.

While the authors correctly cite the need for a detailed understanding of clinical workflows, they ignore the fact that such knowledge does not exist. Further, human factors experts, clinicians, informaticists, and software engineers have no shared standards for deconstructing clinical workflows or for representing them visually (see Workflows with Friends…). Moreover, even if detailed clinical workflow information were available, it would first have to be broken down by clinician type, THEN turned into requirements for EHR design, and only after these two steps would it be appropriate for setting UCD standards or performing usability testing.

Mapping clinician workflows in a manner suitable for EHR design and UCD is a major research project, and the latest work funded by AHRQ shows there is still a very long way to go (see NIST and AHRQ Workflow Reports: A Few Observations). If, as the authors (who are experts in clinical usability/UCD) contend, deep understanding and mapping of clinical workflows are essential for better UCD/usability evaluations, then it only makes sense to prioritize workflow research and standardization of workflow concepts, terms, and methods before pouring more effort into UCD and usability evaluation, right?

Barriers to comparing vendor usability test results
Since vendors are required to conduct summative usability tests, one might hope that comparing metrics from these tests could serve the purpose of providing purchasers with greater insight on the usability of the products. The metrics from these tests include error rates along with measures of user efficiency, effectiveness, and satisfaction with the product being tested. These usability tests are intended to serve as a final safety check once the product has been designed and developed.

There are, however, several barriers to comparing the summative test results from vendors. There are recommended testing scenarios, but no standard testing scenarios are required for certification.

The features and functions of EHR systems are accessed via menus, which is a major source of clinician misery (along with checking too many boxes). EHR systems interfere with clinicians’ workflows by forcing clinicians to map their work habits onto EHR navigation trees. This required mapping on the part of clinicians is NOT simply a matter of poor usability; rather, it is a serious and fundamental EHR design flaw. Clinicians trying to perform common clinical work activities, such as finding a lab value, entering a note, or ordering a test, have to wander through a maze of EHR menu items to achieve their goals.

EHR navigation paths are hardcoded, so every clinician, regardless of what he/she does, must use the same pathways; there is no recognition of the unique needs of each type of clinical professional. Training hours are spent memorizing the navigation tree. As long as support for clinical work remains dependent on memorization of navigation paths, EHR systems will have poor usability. That is the elephant in the room that is inexplicably being ignored.
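To make the contrast concrete, here is a minimal, hypothetical sketch; every name in it is illustrative and does not reflect any actual EHR product. In the first half, every user must traverse the same fixed menu tree; in the second, the system proposes likely actions based on who the user is and where they are in the care process.

# Hypothetical sketch only; names are illustrative, no real EHR API is implied.

# Hardcoded navigation: every clinician memorizes the same fixed menu tree.
MENU_TREE = {
    "Chart Review": {"Results": ["Labs", "Imaging"], "Notes": ["Progress", "Consults"]},
    "Orders": {"New Order": ["Lab", "Medication", "Referral"]},
}

def path_to(feature, tree=MENU_TREE, trail=()):
    """Return the fixed click path a user must follow to reach a feature."""
    for label, child in tree.items():
        here = trail + (label,)
        if feature == label:
            return here
        if isinstance(child, list) and feature in child:
            return here + (feature,)
        if isinstance(child, dict):
            found = path_to(feature, child, here)
            if found:
                return found
    return None

# Context-driven alternative: suggest actions from role and current task state.
NEXT_ACTIONS = {
    ("physician", "abnormal_lab"): ["Trend lab", "Adjust order", "Message nurse"],
    ("nurse", "med_due"): ["Review MAR", "Document administration"],
}

def suggest_actions(role, task_state):
    """Propose likely next actions for this clinician at this point in the workflow."""
    return NEXT_ACTIONS.get((role, task_state), ["Open chart"])

print(path_to("Labs"))                               # ('Chart Review', 'Results', 'Labs')
print(suggest_actions("physician", "abnormal_lab"))  # ['Trend lab', 'Adjust order', 'Message nurse']

The point of the contrast is not the code itself but the organizing principle: in the first model the clinician adapts to the menu; in the second, the system adapts to the clinician.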

How can it be possible to standardize UCD processes and produce meaningful results across systems when the underlying systems offer different paths to the same features and functions? Comparisons are necessarily apples to oranges, to kumquats, to pears, to grapes.

Error rates and the time to complete tasks have built-in assumptions about the steps, data, and resources required for successful completion. How many steps should a task require? What data and resources should be used during those steps? In the absence of task-oriented workflow standards, there are no objective answers to these questions. Moreover, absent such standards, gross measures like time to order a lab will be misleading. If, for example, one system offers rich decision support and feedback and another does not, then time differences may hide the fact that the more helpful, safer system requires slightly more time. Accordingly, current usability requirements must be fairly non-specific, to the point of not being helpful, as the authors, who work at the leading edge of clinical systems usability evaluation, so clearly point out.
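A toy calculation (the numbers below are invented purely for illustration) shows how a raw time-to-task comparison can mislead when one system performs safety work the other skips:

# Invented numbers for illustration only.
time_with_check = 30.0       # seconds to order a lab when the system runs an interaction check
time_without_check = 25.0    # seconds in a system with no such check
errors_averted_per_order = 0.02    # assumed rate of errors the check prevents
rework_seconds_per_error = 600.0   # assumed downstream cost of one missed error

effective_without_check = time_without_check + errors_averted_per_order * rework_seconds_per_error
print(time_with_check, effective_without_check)   # 30.0 vs. 37.0: the "slower" system wins

Without agreed-upon task definitions, nothing in a summative test forces the comparison to account for that hidden cost.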

Barriers to comparing the usability of products post-implementation
Comparing the usability of EHR products as they are actually used by front-line clinicians would provide insight on which products are able to best support the needs of clinicians in context. Currently, there are survey-based comparisons that rely on clinician feedback or information technology leaders’ perceptions of the usability of implemented EHRs; however, there are no objective test-based assessments of implemented EHRs. While survey-based comparisons provide some insight, these methods are not formal assessments and often under-represent actual usability challenges.

Because EHRs often go through an intense phase of customization and configuration to integrate with other clinical systems during implementation, the same EHR product used at different provider sites is often dramatically different.

Since EHR systems are built to be one-size-fits-all (see A Usability Conundrum: Whether it is EHRs or Hospital Gowns, One Size Never Fits All…), improving their support for clinical work requires some level of customization. Of course, customization of the same product at different sites will yield usability differences, so we are back to square one regarding comparisons.

Let’s step back a moment and consider the actual problems we are trying to solve: enhancing care quality (while improving clinician productivity and efficiency) and supporting clinical decision-making. Stated another way, we are trying to support clinical work by having HIT that fits into clinicians’ workflows. Clinical workflow support is the fundamental issue, and EHR systems are designed to be clinical data repositories, not clinical work assistants (see Is the Electronic Health Record Defunct?).

EHR systems are designed to provide access to functionality based on a pre-determined menu structure, not according to what a specific clinician is doing at a particular moment in the care process. Clinician-EHR impedance is not a usability problem; it is a design mismatch. While UCD and usability testing have roles in improving any software system, they cannot gently massage EHR systems into doing what they were never designed to do.

Aside from the woes that arise from forcing clinicians to do real-time mapping between the clinical work they are doing and EHR navigation trees, there is the problem of the user interface. Just as clinicians vary in their workflow support needs, they also vary in their user interface needs. In 2016, there is no technical reason that EHR systems cannot support user preferences and allow each user to create a profile that stores preferred window layouts, fonts, colors, data fields, etc. Why go through endless rounds of UCD when users could simply set preferences for many UI elements?
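As a minimal sketch of what per-user configuration could look like, stored once and read by the UI layer at login, consider the profile below. It is entirely hypothetical; none of the field names comes from any vendor's product.

# Hypothetical per-user UI profile; every field name is illustrative only.
from dataclasses import dataclass, field

@dataclass
class UserUIProfile:
    user_id: str
    role: str                                   # e.g., "physician", "nurse", "dietician"
    window_layout: list = field(default_factory=lambda: ["problem_list", "results", "notes"])
    font_size_pt: int = 12
    color_scheme: str = "high_contrast"
    default_data_fields: dict = field(default_factory=lambda: {"vitals": ["BP", "HR", "SpO2"]})

# At login the interface reads the profile and renders accordingly,
# rather than forcing one fixed layout on every user.
profile = UserUIProfile(user_id="rn_042", role="nurse", font_size_pt=14)
print(profile.window_layout)

Nothing about such a profile is exotic; consumer software has supported this kind of per-user configuration for decades.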

As with navigation trees, current usability efforts recast a fundamental product design flaw (the lack of user configuration capability) as a usability issue. The user interfaces of current EHR products are simply too tightly coupled to the underlying application logic and data. As a result, user-specific adjustments are impossible, and the customizations that are possible are time-consuming and costly. Better UCD processes and more usability evaluations will not morph designs from the 1990s into modern, clinician-friendly systems.

EHR systems that were conceived as replacements for paper charts have been pushed into duties they were never designed to perform. Clinical workflows are composed of steps; they use and produce data and require interactions (people with people, and people with things). Any system that purports to help clinicians must be designed with clinical workflow as a central organizing principle. Further, the user interfaces of clinical care systems must be adjustable by their users. Addressing these design issues will go a long way toward solving what are now called usability problems. Failure to do so will result in…well, what we have now.
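As a rough illustration of that organizing principle (the structure below is a hypothetical sketch, not a proposed representation standard), a workflow-centered system would treat steps, the data they consume and produce, and the interactions they require as first-class objects:

# Hypothetical sketch of workflow steps as first-class objects; not a standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class WorkflowStep:
    name: str
    performed_by: str                                        # clinician role responsible
    inputs: List[str] = field(default_factory=list)          # data the step consumes
    outputs: List[str] = field(default_factory=list)         # data the step produces
    interactions: List[str] = field(default_factory=list)    # people-people / people-things

medication_reconciliation = [
    WorkflowStep("Collect home medication list", "nurse",
                 inputs=["patient interview"], outputs=["home_med_list"],
                 interactions=["nurse-patient"]),
    WorkflowStep("Reconcile against active orders", "physician",
                 inputs=["home_med_list", "active_orders"], outputs=["reconciled_med_list"],
                 interactions=["physician-pharmacist"]),
]

A system organized around objects like these can ask "what step is this user on, and what does it need?" instead of "which menu item was clicked?"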

Let’s fix the actual problems—antiquated designs and lack of formal standards (representation, terminology, concepts) for clinical workflows—then we can discuss standards for usability testing and UCD approaches.

  1. Ratwani RM, Hettinger AZ, Fairbanks RJ. Barriers to comparing the usability of electronic health records. J Am Med Inform Assoc. 2016 Aug 29. [Epub ahead of print]

Comments

Bobby Gladd November 2, 2016 at 4:45 PM

My current read, “The Distracted Mind.”

https://twitter.com/BobbyGvegas/status/793914629210574849

Implications for UX and WKFL


Jerome Carter November 2, 2016 at 6:27 PM

Sounds interesting, will add it to my list.


jim November 1, 2016 at 12:12 PM

it’s so good to read your thoughts, i’ve not been reading them regularly for a while. stay strong sir, at the very least the machine learning AI of the future will put your thoughts to good use. 🙂


Jerome Carter November 1, 2016 at 1:32 PM

Thanks for stopping by!


Chuck Webster, MD, MSIE, MSIS @wareflo October 31, 2016 at 8:38 AM

RE cognition, usability, workflow, and workflow technology, here is what I wrote in 1991:

“The choice of touch screen technology and large icons deserves some comment. A major motivation for using a structured data entry approach is not just to obtain structured data but also to increase the speed of data entry. Fitts’s Law [5] is a mathematical model of time to hit a target. It basically says that larger targets are easier, and faster, to hit. Fitts’s Law seems obvious, but it is often ignored when designing electronic patient record screens because the larger the average icon, button, scrollbar, etc., the fewer such objects can be placed on a single screen. A natural inclination is to display as much information as possible; EPR screens are thus often crowded with hard-to-hit targets, slowing the user rates of data entry and increasing associated error rates.

Fitts’s Law, in conjunction with constrained screen “real estate,” suggests use of a few, large user-selectable targets. Displaying fewer rather than many selectable items tends to increase the number of navigational steps, unless some approach is used–such as a workflow system–that automatically and intelligently presents only the right structured data entry screens.

In our opinion, the combination of structured data entry, workflow automation, and screens designed for touch screen interaction optimally reduces inherent tradeoffs between information utility and system usability on one hand, and speed and accuracy of data entry on the other. Successful application of touch screen technology requires that only a few, but necessary, selectable items be presented to the user in each screen. Moreover, workflow, by reducing cognitive work of navigating a complex system, makes such structured data entry more usable.”

http://chuckwebster.com/2009/07/ehr-workflow/cognitive-psychology-of-pediatric-emr-usability-and-workflow#1991
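(For reference, Fitts's Law is commonly written as $MT = a + b \log_2(2D/W)$, where $MT$ is the movement time to acquire a target, $D$ the distance to the target, $W$ its width, and $a$, $b$ empirically fitted constants; the larger the target, the shorter the acquisition time, which is the tradeoff described in the passage above.)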

