When creating the EHR at UAB, I spent months working on the data model. Much of that effort went into making sure that the data captured would be suitable for outcomes research. Of course, the data model can only do so much to ensure data quality: what users choose to enter also plays a role. Anyone who has dealt with clinical databases for a while knows that errors of both omission and commission affect data reliability. This is no less true for EHRs, and recent studies published in JAMIA indicate that data accuracy remains an ongoing problem.
In their paper, "Improving Completeness of Electronic Problem Lists Through Clinical Decision Support: A Randomized, Controlled Trial," Wright et al. studied the completeness of problem lists before and after an intervention designed to alert clinicians when patients met criteria for a problem/diagnosis that was not documented in the problem list. The authors note:
The rate of notation of study problems increased dramatically during the intervention period as a result of this simple alert-based intervention. Overall, study problems were approximately three times more likely to be documented when alerts were shown. This increase is clinically important, since many of these problems are used for quality improvement and CDS.
Significantly, all the information required to confirm the diagnosis was present in the system; however, it simply had not been added to the official problem list.
A second study looked at the accuracy of information in a group of primary care practices (1). The authors found significant variability in how practitioners documented various quality measures. In their discussion the authors state:
EHR offer new potential for performance measurement given that most commercially available EHR use standard dictionaries to capture information in coded forms, such as ICD for problem list and SNOMED for medications. Using these codified data, EHR can help identify patient populations and calculate a significant number of quality measures that leverage data available in the EHR. These measures can range from adherence to clinical guidelines to assessments of rates of clinical preventive services to rates of screening. However, EHR-derived quality measurement has limitations due to several factors, most notably variations in EHR content, structure and data format, as well as local data capture and extraction procedures (emphasis mine).
EHR data reliability is important; the issue is how to achieve maximum accuracy. Both studies imply that better clinician training could ameliorate the problem to some extent. However, as the intervention used by Wright et al. demonstrates and the comments made by Parsons et al. suggest, better EHR designs are also essential.
In research databases, data accuracy issues are addressed via data validation and periodic QA audits. The same approaches, if automated, would seem to apply equally well to EHRs. From a design standpoint, this means building systems that can easily be programmed with rules for monitoring data quality. This would move validation from a one-time action at data entry to ongoing monitoring that occurs in real time behind the scenes. A smart problem list would know the diagnostic criteria for important problems and automatically alert clinicians when those criteria are met. This might not only improve problem list completeness, but also help prevent diagnostic errors (as the authors note).
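The idea can be sketched in a few lines: represent each rule as a problem name paired with a predicate over the patient record, then flag any problem whose criteria are met but which is absent from the documented list. Everything here is hypothetical and illustrative; real systems would draw criteria from clinical guidelines and coded terminologies, not hard-coded lambdas.

```python
# A minimal sketch of rule-based problem list monitoring.
# All field names, thresholds, and rules below are hypothetical,
# chosen only to illustrate the pattern.

PROBLEM_RULES = [
    # Each rule pairs a problem name with a predicate over the record.
    ("Diabetes mellitus", lambda r: r["labs"].get("HbA1c", 0) >= 6.5),
    ("Chronic kidney disease", lambda r: r["labs"].get("eGFR", 999) < 60),
]

def missing_problems(record):
    """Return problems whose criteria are met but are not documented."""
    documented = set(record["problem_list"])
    return [name for name, criterion in PROBLEM_RULES
            if criterion(record) and name not in documented]

patient = {
    "problem_list": ["Hypertension"],
    "labs": {"HbA1c": 7.2, "eGFR": 85},
}

for problem in missing_problems(patient):
    print(f"ALERT: criteria met for undocumented problem: {problem}")
```

Because the rules are data rather than code scattered through the application, new criteria can be added without redesigning the system, which is exactly the kind of programmability the design argument above calls for.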
Building a smart EHR requires: discrete data, a data model that allows cross-referencing of arbitrary elements, a means of encoding clinical rules, and an engine for executing rules. As EHR adoption increases and EHR data are used to guide policy and clinical care, data accuracy issues will take center stage. It will be interesting to see if this leads to more active research in EHR architecture and design. Certainly, EHR products with features that promote data accuracy and sophisticated reporting capabilities will be more desirable than those lacking such features.
In the meantime, if you are using an EHR to make important decisions, consider implementing an audit program for important data elements. You will learn more about how your EHR functions, and possibly save yourself headaches when reporting data (e.g., Meaningful Use attestation). If nothing else, the experience will teach you what to look for in your next EHR.
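A starting point for such an audit is simply measuring completeness: what fraction of records actually have a value for each element you depend on. The sketch below assumes records exported as dictionaries and uses made-up field names; a real audit would also validate values against expected codes and ranges.

```python
# A hypothetical completeness audit over a sample of exported records.
# Field names are illustrative, not from any particular EHR.

AUDITED_FIELDS = ["problem_list", "smoking_status", "medication_list"]

def completeness_report(records):
    """Return the fraction of records with a non-empty value per field."""
    total = len(records)
    return {field: sum(1 for r in records if r.get(field)) / total
            for field in AUDITED_FIELDS}

sample = [
    {"problem_list": ["HTN"], "smoking_status": "never",
     "medication_list": []},
    {"problem_list": [], "smoking_status": "current",
     "medication_list": ["lisinopril"]},
]

print(completeness_report(sample))
```

Run periodically against a random sample of records, a report like this turns data quality from an assumption into something measured, and low rates point directly at the elements that will cause trouble at reporting time.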
1. Parsons A, McCullough C, Wang J, Shih S. Validity of electronic health record-derived quality measurement for performance monitoring. J Am Med Inform Assoc. 2012 Feb 9. Available at: http://jamia.bmj.com/content/early/2012/02/08/amiajnl-2011-000557.full. Accessed April 5, 2012.