Using EHR-related Research to Guide EHR Design

by Jerome Carter on March 24, 2014

It is difficult to find research-quality information that can aid in EHR design. Too often, studies that discuss the effects of EHR systems on specific outcomes treat the EHR system as a monolithic entity…as a black box, really. This view of EHR systems, while convenient from a research perspective, does little to help those of us interested in designing systems.

EHR systems consist of many components—user interface, data store, algorithms, security protocols, and semantic controls (terminologies, ontologies, etc.). Each of these components affects how well the system supports clinical care. Recently, the absence of explicit support for clinical processes has become more widely discussed. Improving system design requires hard data indicating how specific components help or hinder expected uses and outcomes.
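
To make the component view concrete, here is a minimal sketch in Python of an EHR modeled as separable parts, each of which could be measured on its own rather than hidden inside a black box. All class and field names are hypothetical; this is a toy decomposition, not a reference architecture.

```python
from dataclasses import dataclass, field

# Hypothetical decomposition of an EHR into components named in the text.
# Each component suggests its own evaluation method, rather than treating
# the whole system as a single study variable.

@dataclass
class UserInterface:
    screens: list[str] = field(default_factory=list)   # evaluate via usability studies

@dataclass
class DataStore:
    schema_version: str = "unknown"                    # evaluate via data-quality audits

@dataclass
class SemanticControls:
    terminologies: list[str] = field(default_factory=list)  # e.g., SNOMED CT, LOINC

@dataclass
class EHRSystem:
    ui: UserInterface
    store: DataStore
    semantics: SemanticControls

ehr = EHRSystem(UserInterface(["note editor"]),
                DataStore("v2"),
                SemanticControls(["SNOMED CT"]))
print(ehr.semantics.terminologies)  # ['SNOMED CT']
```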

Another related problem that arises when trying to interpret EHR-related research is that of terminology. Since there is no well-defined set of terms used by all informatics researchers for EHR problems, features, or components, even the studies that do exist are often difficult to compare. Thankfully, this situation is improving. In the past, I have written about the work of Smith and Koppel (1) and their efforts to tie mental models to specific EHR design choices; Friedman et al. and Flanagan et al. and their respective analyses of workarounds (2,3); and finally Weiskopf and Weng (4) and their excellent analysis of data quality concerns. To this list, I now add the work of Hypponen et al. and their efforts to determine how choices about structuring EHR patient data affect specific outcomes.

In their paper, Impacts of Structuring the Electronic Health Record: A Systematic Review Protocol and Results of Previous Reviews, Hypponen et al. present a systematic review protocol, summarize the results of previous reviews, and provide an analytic framework for comparing studies that address EHR data content and structure. In the introduction, they explain why this topic is important.

In many eHealth implementation strategies, the importance of defining standard structures for core patient information is crucial. Structuring patient data is perceived to support clinical care processes, facilitate new technologies for increasing patient safety and care quality, enable quality monitoring of the health service processes and evidence-based management locally, regionally and nationally by enhancing collection of statistical information. It is also assumed to enable easier participation of citizens in their care process. Evidence to support these assumptions is, however, yet scarce while the balance between risks and benefits of free text vs. structured data in EHR documentation has long been identified as a fragile one.

The last sentence makes an interesting claim: namely, that little evidence exists to support the notion that a properly structured electronic record (whatever “properly structured” actually means) leads to better care.

In conducting the review, the authors searched the literature from 1975 until 2011. They found seven reviews that met their criteria and, importantly, provided information on EHR structure as it related to a specific outcome. Here are their findings.

Compared to the expected outcomes of structuring patient data, there was evidence of improved information quality in the Input-category, but no evidence that this would support clinicians’ care processes. Impacts on actors were scarce (e.g. on user skills related to the implemented new method of structuring patient data, usability or usefulness of the structured data). There was evidence to support the administrative viewpoint of increasing adherence to documentation and care guidelines. In the Output-and Outcome-categories, impacts focused on productivity and secondary use of structured data (for automatic monitoring of care guideline compliance). There was little or no evidence found of expected benefits of structuring for “patient safety,” “care quality” or “easier participation of citizens in their care process”.

It makes sense that using codes, templates, forms, and the like would improve data collection, so that outcome is expected. However, how should one interpret the lack of evidence for improved care quality or patient safety?
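
As a toy illustration of why structuring improves data collection, consider the difference between a free-text note and a coded entry. The record shapes and SNOMED CT codes below are my own assumptions for illustration, not anything from the review.

```python
from dataclasses import dataclass

@dataclass
class FreeTextNote:
    text: str                 # e.g., "pt c/o CP x 2 days"; reusing this requires NLP

@dataclass
class CodedComplaint:
    code: str                 # assumed SNOMED CT code, e.g., "29857009" (chest pain)
    onset_days: int

note = FreeTextNote("pt c/o CP x 2 days")  # opaque to downstream tools

complaints = [CodedComplaint("29857009", 2), CodedComplaint("22253000", 1)]

# Coded fields aggregate directly; this is the mechanism behind the improved
# "information quality in the Input-category" that the review reports.
chest_pain_count = sum(1 for c in complaints if c.code == "29857009")
print(chest_pain_count)  # 1
```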

One place to look would be at the quality of the data.   Weiskopf and Weng’s paper, Methods and Dimensions of Electronic Health Record Data Quality Assessment: Enabling Reuse for Clinical Research, appeared after 2011, so it was not included in the analysis.  However, that paper provides one potential explanation for the lack of impact.   Here is that paper’s conclusion:

There is currently little consistency or potential generalizability in the methods used to assess EHR data quality. If the reuse of EHR data for clinical research is to become accepted, researchers should adopt validated, systematic methods of EHR data quality assessment.

Having more data is not necessarily the same as having good data. Moreover, there is no formal definition of “good” data. Clinical informatics lacks objective criteria and formal definitions for seemingly every major aspect of clinical systems. Without such criteria, it is difficult, and possibly even unwise, to attempt to compare various studies. Thus, informatics research, especially research related to EHR design, is in a quandary: more research is needed to answer key questions, yet there are no criteria for defining key aspects of the objects of study (see The EHR as an Object Worthy of Study).
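
Weiskopf and Weng group data quality into dimensions such as completeness, correctness, concordance, plausibility, and currency. As an illustration of what explicit, checkable criteria might look like, here is a minimal sketch of two such checks; the field names and thresholds are my own assumptions, not from the paper.

```python
from datetime import date

def completeness(record: dict, required: set[str]) -> float:
    """Fraction of required fields that are actually present (non-null)."""
    present = sum(1 for f in required if record.get(f) is not None)
    return present / len(required)

def plausible_heart_rate(bpm) -> bool:
    """Range check as a plausibility criterion; the bounds are illustrative."""
    return bpm is not None and 20 <= bpm <= 300

record = {"dob": date(1950, 4, 1), "heart_rate": 72, "weight_kg": None}
print(round(completeness(record, {"dob", "heart_rate", "weight_kg"}), 2))  # 0.67
print(plausible_heart_rate(record["heart_rate"]))                          # True
```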

Currently, there are no formal, objective criteria or definitions for clinical workflow, clinical data quality, EHR usability, or database schemas. Existing informatics standards were developed by consensus, which is not the same as a rigorous scientific method. As a result, we find ourselves with a great deal of research but no theories to guide investigators and no well-defined means of comparing findings. So what should be done to improve EHR systems? Unfortunately, it depends, quite literally, on whom you ask or which studies you read.

EHR design issues aside, there is one nagging question that begs for an answer: What is the relationship between data quality/quantity and care outcomes?   The ongoing assumption is that if good data are available, then care will improve. However, this viewpoint leaves human behavior out of the equation.  Why do research findings take years to become part of routine clinical practice?  Why is it that giving people good information alone is not a guarantor of behavioral change?  Certainly, bad or missing data can be harmful to patients, but that doesn’t necessarily mean that good data will significantly improve any care-related outcome.

Hypponen and colleagues end their paper with the following conclusion:

Diverse foci on various EHR contents to be structured, structuring methods and impact measures induce difficulties in grouping and summarizing the results of previous reviews. The positive outcomes of different structuring methods seem to cluster on information quality and process quality from the administrative viewpoint, but not necessarily leading to better patient outcomes. A more systematic reporting of the review protocols as well as of the variety of benefits connected to the diverse ways of structuring patient data would contribute to a coherent evidence base for decision making.

I, for one, look forward to the day when clinical informatics has formal theories and methods. However, for the time being, I have to agree with this conclusion and say that it would be really nice if everyone used the same criteria and definitions when reporting findings. That seems reasonable and doable to me.

  1. Smith SW, Koppel R. Healthcare information technology’s relativity problems: a typology of how patients’ physical reality, clinicians’ mental models, and healthcare information technology differ. J Am Med Inform Assoc. 2014 Jan 1;21(1):117-31.
  2. Friedman A, Crosson JC, Howard J, Clark EC, Pellerano M, Karsh BT, Crabtree B, Jaén CR, Cohen DJ. A typology of electronic health record workarounds in small-to-medium size primary care practices. J Am Med Inform Assoc. 2013 Jul 31. [Epub ahead of print]
  3. Flanagan ME, Saleem JJ, Militello LG, Russ AL, Doebbeling BN. Paper- and computer-based workarounds to electronic health record use at three benchmark institutions. J Am Med Inform Assoc. 2013 Mar 14. [Epub ahead of print]
  4. Weiskopf NG, Weng C. Methods and dimensions of electronic health record data quality assessment: enabling reuse for clinical research. J Am Med Inform Assoc. 2013 Jan 1;20(1):144-51.