User-Centered Design—It’s Complicated…

by Jerome Carter on June 20, 2016

Good user interfaces are hard to create, and unfortunately, the UI often receives less design attention than other aspects of a software system. Personally, I see this as a developer fatigue issue. After wrangling with objects, algorithms, APIs, security, and such, it is tempting to slap on a UI that simply provides access to the underlying functionality. If users have no choice, they will use systems with poor interfaces. Naturally, wise vendors who want successful products invest in UI design. However, UI design, even when done by professionals, has its pitfalls.

I belong to four computer-related professional organizations, and since this is 2016, each allows members to renew online. I wish they would let Amazon help with this. I have been writing software and using computers since mainframes and punch cards, so I have seen my share of computer screens. One of these groups has a simple renewal process: log in, go to the member section, select renewal, add any donations, and check out. The process is clear and obvious. The other three I stumble through every time. One of them presents my past renewal history. Why? I have no idea, but I spend time looking at the history just to make sure that I don’t need to look at it. Another reacts so slowly that I find myself checking to make sure the computer hasn’t hung. And, even worse, after I fill in basic information, it makes me go to a different site to complete the renewal (it was so bad that I actually lodged a complaint last year). These sites are attempting to automate a single process, membership renewal, and each could use a few usability improvements.

Typical UCD practices can optimize designs for simple processes fairly easily because at any step the options are constrained and the decisions few. However, when complex processes are involved, UCD may yield uneven results, which brings me to today’s article of interest. Hultman et al. applied UCD in an attempt to improve the usability of an EHR navigator (1). They did everything by the book: a cross-section of users, pre- and post-analyses, human factors methods, and usability professionals. Even so, the net outcome was no improvement in usability.

The research group personnel and their roles are described as follows:

The design of the new navigator was an iterative process in which clinicians, that were current users of the old navigator from primary care and specialty clinics, were directly involved in providing design guidance and testing of the navigator. Between four and six physicians representing different specialties, along with an informatician with a background in human-computer interaction, were involved in the design process over a series of four sessions. The sessions focused upon identifying key tasks by role and those tasks necessary for core ambulatory functionality. In addition to the initial group sessions, clinicians and clinic staff met with developers several times over a period of weeks, testing iterations of the modified navigator and identifying additional design issues including content, ordering of content, and appearance of the content. Input was also solicited throughout the process from nurse assistants and nurse managers at several different clinics of different specialties. A group of three medical informatics directors made final design decisions when requests were in conflict with one another.

This appears to be a well-constituted group with an appropriate mixture of clinicians, informaticists, and human factors experts.

The research team created six patient cases for participants to complete. Cases constrain user interactions with the system by making sure that all participants are trying to solve the same problem. Think-aloud methods were used to monitor participants as they completed the cases. Below is the authors’ description of how the cases were conducted:

We designed the tasks for the patient cases based on what would typically be completed by a clinician during an ambulatory visit. The tasks were also informed by previous work at the University of Texas Health Science Center at Houston [10]. Example tasks included: enter a chief complaint, review current medications, and determine if medications have any interactions. The tasks were pilot-tested with two resident participants, and each patient case took approximately 10 minutes to complete. Participants were asked to verbalize their thoughts while completing the patient cases using a think-aloud protocol. The facilitator provided clarification about the tasks within each case but did not provide any insight into how to perform tasks or how well participants were performing the tasks.

On analyzing the results, the authors were surprised to find users were actually less efficient when using the newly-designed navigator.

Somewhat surprisingly, average time to complete five of the six patient cases was longer in the new navigator. All participants completed the patient case ‘Maggie’ in both navigators, and it also took participants longer to complete this patient case in the new navigator despite random ordering for the two navigators for this case. Several individual tasks also took longer in the new navigator. Scores on the SUS suggested that participants had mixed preferences between the two navigators, with a slight not statistically significant overall preference for the new navigator.

The discussion of the unexpected findings hinges on how best to organize navigator options.

Despite the time and effort that was put into designing the new navigator, the new navigator did not have a strong impact on user performance. There are several possible explanations for this: it is possible that the two tier structure of the navigator introduced confusion and it is possible that despite efforts to reduce the number of options available, the available lists were still too long. Future research should explore these issues further.

Individual preferences for the high-level navigation structures varied greatly. There is currently a tension between EHR standardization and customization. It is unknown if there is an optimal way to complete tasks, however current workflows within EHRs often do not match preferred workflows. It is possible that greater customization could help each user use the navigation patterns that are best for them. However, this would make sharing knowledge between users difficult. It is possible that giving users a standard navigation pathway, even if it is not optimal would reduce confusion and improve usability.

Having conducted an excellent usability study, the authors undermine their own search for solutions by accepting as a given that all users must use the same, or nearly the same, interface: one-size-fits-all. Why is this mode of thinking so prevalent in UCD/usability research? Why is one-size-fits-all even considered realistic for a complex system? Why not allow users to configure the UI to their needs?

As the authors note, participants used many different paths to solve tasks. This should not be a surprise, as users are rarely, if ever, homogeneous. In clinical settings, there are many different types of clinicians with different levels of computer literacy. No two clinicians, even in the same area of expertise, can be expected to conceptualize and solve the same problem in the same way. Further, the more complex the task and the more ways there are to complete it, the more likely personal preferences are to develop.

When it comes to complex systems such as EHRs, which are expected to intimately support user work habits, there are really only two paths to supporting users: force them to change their work habits and adapt to the system, or have the system adapt to them. Current EHR systems and UCD/usability efforts assume the first option, but why? There is no technical barrier preventing user-configurable interfaces. Why not create user interfaces that users can adjust to fit their way of doing things?
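To make that concrete, here is a minimal sketch, in TypeScript, of what a preferences-driven navigator might look like. Everything in it (NavSection, UserNavConfig, resolveConfig, renderNavigator, and the section names) is hypothetical and purely illustrative; none of it comes from the study or from any real EHR product. The point is simply that the sections shown, their order, and their collapsed state can be data layered over a shared default rather than a layout baked into the code.

```typescript
// Hypothetical sketch of a preferences-driven navigator.
// All names and section labels are invented for illustration;
// they are not taken from the study or from any real EHR product.

type NavSection =
  | "chief-complaint"
  | "problem-list"
  | "medications"
  | "allergies"
  | "orders"
  | "notes";

interface UserNavConfig {
  userId: string;
  sections: NavSection[];   // sections shown, in this user's preferred order
  collapsed: NavSection[];  // sections this user keeps collapsed by default
}

// Shared default: the standard pathway that new or uncustomized users see.
const DEFAULT_CONFIG: Omit<UserNavConfig, "userId"> = {
  sections: ["chief-complaint", "problem-list", "medications", "allergies", "orders", "notes"],
  collapsed: [],
};

// Per-user overrides win; anything left unspecified falls back to the default.
function resolveConfig(
  userId: string,
  saved?: Partial<Omit<UserNavConfig, "userId">>
): UserNavConfig {
  return {
    userId,
    sections: saved?.sections ?? DEFAULT_CONFIG.sections,
    collapsed: saved?.collapsed ?? DEFAULT_CONFIG.collapsed,
  };
}

// Stand-in for real rendering: list each section, marking collapsed ones.
function renderNavigator(config: UserNavConfig): string {
  return config.sections
    .map((s) => (config.collapsed.indexOf(s) !== -1 ? `[+] ${s}` : `[-] ${s}`))
    .join("\n");
}

// Example: a user who wants orders near the top and rarely opens notes.
const customized = resolveConfig("user-123", {
  sections: ["chief-complaint", "orders", "medications", "allergies", "problem-list", "notes"],
  collapsed: ["notes"],
});
console.log(renderNavigator(customized));
```

A shared default of this kind also speaks to the standardization-versus-customization tension the authors raise: everyone starts from a common, known-good pathway, and customization becomes an incremental, per-user departure from it rather than a separate design effort.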

Hultman and colleagues spent a great deal of time and effort trying to determine the optimal UI and navigation path for all users and all potential uses of an EHR. Considering the number of possible permutations, they could spend many more hours testing and reviewing feedback before settling on a “universal” design. On the other hand, creating a user interface that lets users set their own UI options is doable and would remove the need for so many hours of testing. As long as EHR user interfaces are static, there will be UI-dependent usability issues because, just as no family has 2.3 children, there is no such thing as an “average” EHR user.

Hultman et al. have made a much-needed contribution to EHR usability research. If nothing else, I hope it helps us start asking the right questions. Instead of “What is the best UI for the average clinician?” let’s move to “How do we best design EHR UI configuration panels?” At present, EHR users are suffering through the computer equivalent of Henry Ford’s century-old car color option: “Any customer can have a car painted any color that he wants so long as it is black.” It’s 2016…

1. Hultman G, Marquard J, Arsoniadis E, Mink P, Rizvi R, Ramer T, Khairat S, Fickau K, Melton GB. Usability testing of two ambulatory EHR navigators. Appl Clin Inform 2016; 7: 502–515.

 
