Usability is big these days. The growing number of publications, scholarly and otherwise, is both impressive and very welcome. Having read quite a few articles on the topic, I always wrestle with one issue—how to summarize all the information in a way that makes it directly useful for software design.
There is a huge gap between knowing user preferences or observing workflow disruptions and creating software that addresses such concerns. One problem I have with many usability articles is that they are too tightly bound to the systems analyzed. In other words, when system X is analyzed for usability issues, the issues uncovered arise from the specific architecture and coding choices of that system. However, usability studies rarely translate findings back into specific internal design choices. Obviously, once software exists, it can be improved, but I am more interested in how one makes good software from the start.
There are so many “big” clinical software design questions waiting for answers. For example: Is there an ideal architecture for clinical software? What are the best object models for clinical systems? How should user interfaces for primary care physicians differ from, say, those for cardiologists? What algorithms should exist in nursing systems? There is a growing consensus that role-based interfaces/interactions should be supported; the challenge is translating this precept into actual software. Answering these questions and designing next-generation clinical systems require more than evaluating current systems. Optimizing horses and buggies for riders is not the same as inventing an automobile. Evaluations yield design principles, so how do we move from principles to working code and, in the process, make clinical software design more an engineering exercise and less a creative exploration?
Making use of usability data
Good research is being done on EHR usability (the work at the National Center for Cognitive Informatics and Decision Making in Health Care, UT Houston is a prime example). However, since usability testing necessarily requires something to test, current usability testing focuses on current products. Such tests will tell us how to make the EHR systems tested better, but what if the EHR metaphor is itself the underlying problem? How can usability testing tell us whether or not a patient data repository is the ideal design choice for supporting clinical work? Please note that I am not implying that usability evaluations are not helpful, only that they provide design principles, and not the ultimate answers to the big questions about clinical software design.
Wading through the EHR usability literature can be daunting. Researchers use different terms, different questions/surveys, and select different focus areas. Comparing research outcomes is not usually a straightforward process. However, a recent article by Zahabi, Kaber, and Swangnetr (1) has made comprehending EHR usability literature somewhat easier for those interested in creating software. The authors reviewed research studies that targeted EHR user interface problems and safety issues with an intent to provide design guidance.
In conducting their review, the authors make a statement that I can surely identify with:
Usability is a general term concerning the effectiveness, efficiency, and satisfaction with which users achieve goals with an interface (International Organization for Standardization, 1998). There are many principles of usability identified in the literature, including interface learnability, flexibility, robustness in functionality, capability for error recovery, and so on.
The authors rely mainly on ISO and Molich and Nielsen (2) for their usability definition.
Our review of EMR and EHR usability studies revealed nine major types of problems. All of these problems except lack of customization represent violations of the usability principles Molich and Nielsen (1990) developed as a part of their heuristic analysis methodology for usability evaluation of interactive systems. These usability principles include simple and natural dialogue, speaking the user’s language, minimization of user’s memory load, consistency in design, providing feedback, providing clearly marked exits, providing shortcuts, providing good error messages, and error prevention.
The TURF group at UT Houston uses an amended version (3) of the ISO’s definition.
Under TURF, usability refers to how useful, usable and satisfying a system is for its intended users to accomplish goals in a work domain by performing certain sequences of tasks. Useful, usable, and satisfying are the three major dimensions of usability under TURF. TURF’s usability definition is based on the ISO definition (ISO 9241-11), but differs in significant ways. ISO defines usability as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.” Under ISO’s definition, effectiveness refers to the accuracy and completeness with which users achieve specified goals. Efficiency refers to the resources expended in relation to the accuracy and completeness with which users achieve goals, and satisfaction refers to comfort and acceptability of use. TURF and ISO definitions of usability differ with “effective” in ISO and “useful” in TURF, and “efficient” in ISO and “usable” in TURF.
Under TURF, “useful” refers to how well a system supports the work domain where users accomplish goals for their work independent of how the system is implemented. A system is fully useful if it includes domain, and only domain, functions essential for the work, independent of implementations. Full usefulness is an ideal situation; it is rarely achieved in real systems. Usefulness also changes with the change of the work domain, with development of new knowledge, and with availability of innovations in technology.
Terms like “satisfaction” and “useful” are subjective, and this subjectivity is evident in research reports.
The Zahabi paper is extensive (about 30 pages), and the authors make it digestible by providing separate sections for each problem area followed by a summary. The problem areas identified by the authors are listed in Table 1.
Table 1. Problem areas
For my design explorations, I grouped the authors’ problem areas according to how I would approach addressing the problems identified. For example, some problems are solvable at the user-interface level (e.g., efficient interaction, forgiveness and feedback, effective use of language), while others require deep incursions into architecture and code (naturalness, preventing errors, minimizing cognitive load, customizability/flexibility). Due to the depth of the paper, I will limit my comments to the latter group.
Naturalness
Summary. All of the preceding reviewed studies focusing on violations of naturalness included the recommendation that EMR interfaces be designed to follow the natural workflow of health care systems. The primary objective of these recommendations is to reduce task interruptions for physicians and other HCWs.
Delving further into the authors’ comments, one finds that naturalness is directly linked to workflow issues. “Intuitive” can be taken to mean “what was expected.” As opposed to issues such as colors, font sizes, and screen placements, fixing workflow issues requires technology that most EHR systems lack. Moreover, unless we expect every practice site to solve every workflow issue on its own, we need models of clinical work that can be used as starting points for customization.
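One way to picture such a model: represent the workflow as data rather than code, so a shared starting point can be customized per site without programming. The sketch below assumes this approach; all step and role names are hypothetical.

```python
# A clinical workflow modeled as data, not code. A shared base model
# serves as a starting point; each practice site customizes a copy.
# Step and role names are hypothetical.

BASE_VISIT_WORKFLOW = [
    {"step": "review_history", "role": "physician"},
    {"step": "document_exam", "role": "physician"},
    {"step": "order_tests", "role": "physician"},
]

def customize(base, insert_after, new_step):
    """Return a site-specific copy of a base workflow with one step added."""
    flow = [dict(s) for s in base]  # copy so the base model is untouched
    idx = next(i for i, s in enumerate(flow) if s["step"] == insert_after)
    flow.insert(idx + 1, dict(new_step))
    return flow

# A site adds nurse-led medication reconciliation without touching the
# code that interprets workflows.
site_flow = customize(
    BASE_VISIT_WORKFLOW,
    insert_after="review_history",
    new_step={"step": "medication_reconciliation", "role": "nurse"},
)
```

Because the model is data, the same interpreter code can run every site's variant—customization becomes an editing task rather than a programming task.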
Preventing errors
Summary. Our review showed that one of the main issues that can lead to an increase in the number of errors in EMR use is C/P functions supported through interfaces. Some of the recommendations to address this problem include using structured C/P functions, color-coding, or removing C/P functions from data entry and documentation processes. In addition, data entry errors can be prevented by automatic data entry, adding verification dialogs or confirmation windows, and adding patient identifiers as watermarks in displays.
Cut and paste functionality is noted as a major design problem; however, cut/paste exists for a reason. The design question is: What is the underlying problem that cut/paste is intended to solve that is best solved by a different means? This is a clear example of how knowing a problem exists does not tell one how to fix it.
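One plausible answer is that copy/paste stands in for legitimate reuse of prior documentation. A different means is to reuse by typed reference rather than by duplicated text, so reused content stays linked to its source, author, and date. The sketch below assumes that framing; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class NoteEntry:
    """A prior piece of documentation with provenance attached."""
    entry_id: str
    author: str
    recorded: date
    text: str

@dataclass
class Note:
    """A note that reuses prior content by reference, not by pasted copy."""
    text: str = ""
    reused_entries: list = field(default_factory=list)

    def reuse(self, entry: NoteEntry):
        # Store a typed reference; the display layer can render the
        # referenced text with its original author and date visible.
        self.reused_entries.append(entry.entry_id)

prior = NoteEntry("e1", "Dr. A", date(2015, 3, 1), "Stable angina; on aspirin.")
note = Note(text="Patient returns for follow-up.")
note.reuse(prior)
```

The point of the design choice: pasted text silently loses its provenance, while a reference preserves it, which is exactly the property the safety recommendations are reaching for.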
Minimizing cognitive load
Summary. User information overload is one of the most common usability problems identified in the literature. Some of the main recommendations for EMR design to address this problem include presenting only the most important information for concurrent tasks, reducing the number of screens with similar information, and using the proximity-compatibility principle for display layout in EMR screens.
Addressing cognitive load issues cannot be done without a very clear idea of what the user is or should be trying to accomplish. Data-centric designs obscure task-centric information needs. Lacking detailed task models, the only approach to fixing cognitive overload is trial and error—build a system, test, refine, test, etc. When workflows are embedded directly in programming code, these cycles can be very costly and may still never yield the desired software behaviors.
Customizability/flexibility
Summary. Our review of literature revealed a lack of customizability to be a critical issue in EHR/EMR/EPR design for HCWs. Study results reveal that changing information content based on a clinic’s needs, flexible templates, shortcuts, and multiple views based on a user’s role can all be effectively used to address lack-of-customization problems.
Lack of customizability/flexibility is a critical issue? I am shocked, shocked, I say!!! The ability to customize a system from a configuration panel instead of directly in code requires specific software architectural features, loose coupling/high cohesion being at the top of the list. Unfortunately, depending on the product, retrofitting it to allow for codeless configuration could be as costly as designing from scratch. The same could be said for any major system component. For example, it would take a significant rewrite to decouple a user interface that is tightly-coupled to a data store.
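To make the architectural point concrete, here is a minimal sketch of loose coupling in practice (all names are hypothetical): role behavior lives in configuration, and the interface layer depends on an abstract store interface rather than a concrete database, so either side can change without touching the other.

```python
import json

# Site-specific behavior lives in configuration, not code; changing a
# role's view means editing this data, not rewriting the interface.
SITE_CONFIG = json.loads("""
{
  "roles": {
    "nurse":     {"default_view": "vitals",  "shortcuts": ["med_admin"]},
    "physician": {"default_view": "summary", "shortcuts": ["orders", "notes"]}
  }
}
""")

class RecordStore:
    """Abstract data-store interface; screens depend only on this,
    never on a concrete database (loose coupling)."""
    def fetch(self, view):
        raise NotImplementedError

class InMemoryStore(RecordStore):
    def __init__(self, data):
        self._data = data
    def fetch(self, view):
        return self._data.get(view, {})

def render(role, store, config=SITE_CONFIG):
    """Assemble a role-specific screen entirely from configuration."""
    profile = config["roles"][role]
    view = profile["default_view"]
    return {"view": view,
            "data": store.fetch(view),
            "shortcuts": profile["shortcuts"]}

store = InMemoryStore({"vitals": {"bp": "120/80"}, "summary": {"problems": 3}})
screen = render("nurse", store)
```

In a system without this separation, the same change—giving nurses a different default view—would mean modifying and retesting interface code, which is why retrofitting can cost as much as starting over.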
Getting from here to there…
It is becoming increasingly obvious that data-centric designs are an important source of usability problems. Electronic charts, as currently designed, increase cognitive load because they are designed as clinical data stores that are accessed according to traditional chart divisions (labs, radiology, notes, medications, etc.) rather than by the tasks at hand. Think about it—it makes perfect sense that systems designed with data as “king” and user needs as secondary concerns should present usability problems. Information displays that provide all available data to users almost necessarily have crowded, hard-to-read screens. Presenting information that is contextual to the task at hand requires both control-flow and data usage representations of that task.
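The two representations just mentioned can be sketched together—control flow as an ordered step list, data usage as the fields each step reads—so the software, not the user, assembles the context for each step. Step and field names below are hypothetical.

```python
# A task carrying both control flow (ordered steps) and data usage
# (the fields each step reads). Names are hypothetical.

MED_RECONCILIATION = [
    ("collect_home_meds",     {"patient_reported_meds"}),
    ("compare_lists",         {"patient_reported_meds", "active_medications"}),
    ("resolve_discrepancies", {"active_medications", "allergies"}),
]

def context_for(task, step_name, chart):
    """Surface only the chart data that the current step actually uses."""
    for name, fields in task:
        if name == step_name:
            return {f: chart[f] for f in fields if f in chart}
    raise KeyError(step_name)

chart = {
    "patient_reported_meds": ["metformin"],
    "active_medications": ["metformin", "lisinopril"],
    "allergies": [],
    "labs": {"HbA1c": 7.0},  # in the chart, but irrelevant to this step
}

ctx = context_for(MED_RECONCILIATION, "compare_lists", chart)
```

Contrast this with chart-division access, where the user opens labs, medications, and notes in turn and holds the intersection in working memory—precisely the crowded-screen, high-load pattern the usability studies keep finding.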
Clinical care involves numerous mental tasks, and every time the train of thought changes, a disruption occurs. Context switching increases with poor navigation trees or when one has to stop and poke around for information. Eliminating usability problems has to start with task-centric designs.
Personally, I have settled on using the wealth of usability studies as Tycho Brahe-like data that, combined with workflow research, can be used to construct clinical work models. Regardless of the usability definition one prefers, one thing is invariant—the goal is helping users get things done. While access to data is absolutely essential for health care, tasks rule…
- Zahabi M, Kaber DB, Swangnetr M. Usability and Safety in Electronic Medical Records Interface Design: A Review of Recent Literature and Guideline Formulation. Hum Factors. 2015 Mar 23. [E].
- Molich R, Nielsen J. Improving a human-computer dialogue. Communications of the ACM. 1990;33(3):338-348.
- Zhang J, Walji M, eds. Better EHR: Usability, Workflow and Cognitive Support in Electronic Health Records. National Center for Cognitive Informatics & Decision Making in Healthcare, 2014.