An Interesting Approach to EHR Selection and Usability Testing

by Jerome Carter on October 1, 2012 · 2 comments

Recently, while reviewing visits to EHR Science, I noticed that the number of people accessing the Usability resources page has jumped.   Now that EHR adoption is moving along, usability has gained a much higher profile, both from the ONC as a certification requirement (EHR Certification 2014—Darwinian Implications?) and from users interested in buying systems that they can live with.  NIST and AHRQ have created excellent resources that should prove to be quite useful for system designers (NIST Usability Resources—A Goldmine for Developers) as well as EHR shoppers (Usability as an EHR Selection Tool).

EHR selection is more likely to end in success if potential buyers have an opportunity to interact with candidate products in a meaningful way (no pun intended, but I’ll take it if it works) prior to signing a contract.  Test scripts are a good way to ensure that: 1) product interactions are consistent among all those conducting evaluations; 2) evaluations cover all key product features; and 3) all products are evaluated on the same basis.  Test scripts help to ensure that evaluations are apples-to-apples.

Moving to the other end of the EHR market, vendors have a keen interest in how their products are perceived by users, and how they perform under real-world conditions.   Test scripts can be used by vendors during usability testing in the same manner as they are by EHR buyers who are evaluating products.   Obviously, I am a fan of test scripts, so I was pleasantly surprised to come across a paper in which test scripts and standardized patients are combined for usability testing—a good idea made even better!

In their paper, Decision Support for Acute Problems: The Role of the Standardized Patient in Usability Testing, Linder et al. discuss using standardized patients to aid in the assessment of a smart form-based decision support application.  They describe the test methodology as follows:

A group of eight test participants—all physicians in the Partners provider network—worked with a prototype version of the ARI Smart Form [16]. The test participants had no prior exposure to the Smart Form prototype and were given only a basic introduction to the application to obtain their initial reactions to the user interface and enhance the quality of their feedback.

Test participants were given a set of three scenarios that involved using the ARI Smart Form prototype to perform specific tasks using hypothetical patient data. The three scenarios corresponded to (1) a 40-year-old with acute cough/acute bronchitis; (2) a 29-year-old preschool teacher with streptococcal sore throat; and (3) a 39-year-old with hypertension and hypercholesterolemia with a non-specific upper respiratory tract infection. The first two test scenarios were presented in written form (the first scenario was six sentences and the second scenario was nine sentences). A standardized patient was used for the last scenario without written stimulus. We matter-of-factly introduced the standardized patient to the test subjects and did nothing to suggest the use of standardized patients was atypical.

Standardized patient training was straightforward.

The standardized patient was trained with one of the investigators in an approximately 45 min session. The standardized patient was educated about the classical findings of non-specific upper respiratory infections (e.g., runny nose, sore throat, cough, but no signs of more serious illness like fever, chills, or vomiting) and was given a basic “script” from which to work (the script was nine sentences long). The standardized patient was instructed to try to directly answer questions, providing conversational answers, and succinctly improvise if the test participant asked questions that were not included in the script. The investigator and the standardized patient reviewed the scenario three times to practice and ensure consistency.

Due to the small number of subjects, the authors did not attempt a formal evaluation of the effects of the standardized patients on the study subjects or usability testing outcomes (beyond stating that study subjects reacted positively to the presence of the standardized patients).   As for the usability evaluation, study participants related a number of issues they encountered while using the application—exactly what testing is designed to accomplish.

From my perspective, the value of this paper does not arise from the usability findings. After all, the authors found what just about every developer finds on allowing users to evaluate newly-minted software—that it was not as wonderful as they believed.  Rather, it is the idea of augmenting test scripts with standardized patients (an action that could make EHR selection more effective) that makes this paper worth reading.  However, I really think test scripts with greater detail than those described by the authors are required for EHR selection.

Finally, the authors offer four scenarios in which the use of standardized patients might prove helpful during usability testing…and, I would suggest, for EHR selection as well. They state:

There are certain situations in which standardized patients will be particularly helpful.

First, and most obviously, standardized patients should be considered in situations where the application requires both the patient and clinician. Standardized patients will not be useful for applications that do not depend on the presence of both parties: those that omit the patient, such as a “results manager” [8], or the physician, such as home-based applications [22].

Second, standardized patients can be helpful for those applications for which timing is critical. For ARI visits that are typically very brief, it is important to get test subjects’ perceptions of time as they are talking to the standardized patient.

Third, standardized patients should be considered when the interaction of the clinician and the patient is integral to the application, such as applications that require gathering a history.

Fourth, special attention should be paid to situations where an application has the potential to interfere with the patient–physician interaction. Though computer use can have favorable effects on the patient–physician interaction [23], applications that are hard to use and draw attention away from the patient are likely to be rejected by clinicians.

Usability evaluation and EHR selection can benefit equally from structured testing.  Complementing test scripts with standardized patients could help clinicians with EHR selection by making evaluation conditions closer to actual practice situations.   It might also provide a pre-market edge by helping vendors tweak their systems before users find the warts.  In any case, I think it’s a good idea; should the opportunity arise, I’ll give it a shot.



Bennett January 19, 2014 at 6:41 PM

EHR usability testing is now required in order to satisfy the Safety-enhanced Design portion of the Meaningful Use Stage 2 certification criteria.

EHR vendors are conducting summative evaluations and including them in their submissions to an ONC-ATCB (e.g., Infogard, Drummond, CCHIT). These evaluations are required to follow a specified test procedure (the ONC Test Procedure for §170.314(g)(3), Safety-enhanced design).

Results from the summative study should be presented to the ONC using the Customized Common Industry Format (CCIF) Template for EHR Usability Testing (NISTIR 7742). These reports typically include an executive summary, an introduction, a methods section, a discussion of the results, and any appendices.

Our blog site has detailed information and the latest news related to Usability in Healthcare. The Future of Medicine is Easy to Use.


Jerome Carter January 19, 2014 at 8:15 PM

Thanks for your comment. I look forward to reviewing your site.


