As always, I am looking forward to the slow, languid days of summer when things quiet down and the mind can wonder (wandering is fine too). Typically, I have a book list to get through. A few summers ago, math was the focus. Once it was software architecture; however, this summer there are no books, only questions.
Quite often, I write about the importance of clinical processes, and now, having been able to experiment with BPM suites, I find myself imagining a process-oriented user interface. The usual user interface elements of EHR systems (problems, labs, medications, notes) are borrowed directly from paper charts. Clinicians are given a virtual paper chart, and navigation is directed at moving through chart sections. Here is a question: To what degree are clinicians’ usability complaints a consequence of having to go on patient data safaris to support their clinical processes? Next question: How would a process-oriented UI look? Would it offer better usability?
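To make the contrast concrete, here is a minimal sketch of what a process-oriented UI might sit on top of: instead of chart sections, the central object is a clinical process with ordered tasks, and the interface surfaces the next pending task. All names here are hypothetical, purely for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Task:
    """One step in a clinical process (names are illustrative)."""
    name: str
    done: bool = False

@dataclass
class ClinicalProcess:
    """A hypothetical process object a worklist UI could render directly,
    instead of asking the clinician to hunt through chart sections."""
    patient_id: str
    name: str
    tasks: list = field(default_factory=list)

    def next_task(self) -> Optional[Task]:
        # The UI would surface this, rather than a chart tab
        return next((t for t in self.tasks if not t.done), None)

process = ClinicalProcess("MRN-001", "Diabetes follow-up", [
    Task("Review latest HbA1c"),
    Task("Reconcile medications"),
    Task("Document visit note"),
])
print(process.next_task().name)  # → Review latest HbA1c
```

The point of the sketch is the inversion: the process, not the chart, is the unit of navigation, and data retrieval happens in service of a task rather than as an end in itself.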
Chart-based interfaces were easy to adopt because the paper chart offered a familiar metaphor. Processes have no equivalent. The paper chart was in use for decades before being adapted for electronic use. Processes and workflows, on the other hand, have been serious topics of conversation in health care for only the last 10 years and mostly since HITECH. Now that processes and workflows are recognized as important determinants of safety, quality, and outcomes, one is still hard pressed to find a “standard” list of clinical processes that could be used to assess EHR systems.
Commonly, discussions of clinical processes focus on the patient visit. In fact, the patient visit (how clinicians use EHRs when interacting with patients) seems to be the main focus of many studies of EHR usability. However, many care-related processes occur when no patient is present; for instance, interacting with colleagues or reviewing results. Usability studies should look at the entire spectrum of clinical processes, not just direct encounters.
Here is another design issue that has been bugging me: Should EHRs have patient models that provide representations of patient state? Since current EHR systems were designed to replace paper charts, they focus on data. User interactions focus on submitting and retrieving data. There is no holistic view of the patient in the system, only date and time-stamped data. What if one could interact with a model of the patient instead of data clumps? I am not proposing some AI, virtual patient who speaks to us from the EHR, but rather an in-memory patient object that could be used for decision support or advanced queries. From a design standpoint, I think the value of patient models would lie primarily in forcing developers to think about how clinicians interact with patients while creating the guts of EHR systems instead of thinking mostly about how clinicians interact with data.
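The distinction between a pile of time-stamped data and a patient model might look something like the sketch below: a small in-memory object that holds derived patient state and can answer decision-support questions directly. The clinical rule shown is an invented placeholder, not actual guidance, and every name is an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class PatientState:
    """A hypothetical in-memory patient model: queries run against derived
    state, not against raw date/time-stamped data clumps."""
    problems: set = field(default_factory=set)
    meds: set = field(default_factory=set)
    labs: dict = field(default_factory=dict)  # latest value per test name

    def on_metformin_with_low_egfr(self) -> bool:
        # Illustrative decision-support query only; thresholds are made up
        return "metformin" in self.meds and self.labs.get("eGFR", 100) < 30

patient = PatientState()
patient.meds.add("metformin")
patient.labs["eGFR"] = 25
print(patient.on_metformin_with_low_egfr())  # → True
```

Even a toy model like this forces the design question the data-centric view never asks: what does the system currently believe about the patient, as opposed to what has been filed?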
Data exchange and semantic interoperability, where are we going with this? Data exchange has always made sense to me; its value is obvious. Getting something from point A to point B is the goal of data exchange. Of course, there are many levels. One might receive a referral report or lab results medievally (on paper, by post), via fax, or by email. In each case, an immutable document appears that I, as a clinician, could understand. In moving to electronic documents, one could use checksums to ensure that the contents were the same on both ends. Once an additional goal is added (incorporating the document as structured data in an EHR), exchange becomes more complex. Content has not changed, but handling is more complex. Now, each document must have an agreed-on structure with standard fields, names, types, and element order. The value here is that all those documents now appear in the EHR and can be read and searched like other data, but this is still data exchange. Here is the definition of semantic interoperability from Dolin and Alschuler:
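The checksum idea above is the simplest level of exchange assurance, and it is worth seeing how little machinery it takes. A sketch using Python's standard hashlib:

```python
import hashlib

def checksum(document: bytes) -> str:
    """SHA-256 digest of a document's bytes; identical digests mean the
    sender's and receiver's copies are byte-for-byte the same."""
    return hashlib.sha256(document).hexdigest()

sent = b"Referral report: patient seen for follow-up..."
received = b"Referral report: patient seen for follow-up..."

# Matching digests confirm the contents are the same on both ends
print(checksum(sent) == checksum(received))  # → True
```

Note what the checksum does and does not do: it proves the document arrived intact, but it says nothing about structure or meaning. Everything beyond this point, from agreed-on fields to shared domain concepts, is added machinery on top of that basic guarantee.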
Definitions for ‘semantic interoperability’ abound. The Joint Initiative for Global Standards Harmonization (http://www.skmtglossary.org) defines semantic interoperability as the ‘ability for data shared by systems to be understood at the level of fully defined domain concepts.’ Wikipedia’s definition (http://en.wikipedia.org/wiki/Semantic_interoperability) is ‘the ability of two or more computer systems to exchange information and have the meaning of that information automatically interpreted by the receiving system accurately enough to produce useful results, as defined by the end users of both systems.’
The 13606 standard offers the following:
Beyond the ability of two or more computer systems to exchange information, semantic interoperability is the ability to automatically interpret the information exchanged meaningfully and accurately in order to produce useful results as defined by the end users of both systems. To achieve semantic interoperability, both sides must defer to a common information exchange reference model. The content of the information exchange requests are unambiguously defined: what is sent is the same as what is understood.
In the field of health information, to achieve semantic interoperability is even a more important and difficult duty. The complexity of the health domain, its frequent variation and evolution and the differences between the information technologies domain and the health domain need a deep change on the methodologies of information management.
With semantic interoperability, it’s not just clinicians who need to understand shared data; computers must understand it as well. What does “understand” mean in practice? Aside from filing a lab result correctly in the EHR, what should the EHR understand about the value? What exactly is the “ability for data shared by systems to be understood at the level of fully defined domain concepts”? Here is my question: Is it really semantic interoperability with all its complexity that we need in order for clinicians to make fuller use of electronic systems, or is it more rigorous rules for data exchange?
Is it time for smart problem lists? Look at the problem list in an EHR and it might contain anything from plain text to SNOMED codes. Case-finding algorithms are widely used; why are they not built into EHRs? It seems like a short jump from case-finding algorithms to smart problems. Design-wise, smart problems would require an API for problem lists. Using an API, one could add or access new algorithms without requiring additional programming. Another possibility is having algorithms that verify or assemble evidence for problems entered. I don’t see a safety issue if the results are presented to clinicians for diagnostic verification.
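A problem-list API of this kind might amount to little more than a registry of pluggable case-finding algorithms. The sketch below assumes invented names and an invented, purely illustrative diabetes rule; candidates are only suggested, never auto-filed, in keeping with the verification point above.

```python
from typing import Callable, Dict, List

# A hypothetical registry: each case-finding algorithm takes patient data
# and returns True if the evidence supports the candidate problem.
CaseFinder = Callable[[dict], bool]
FINDERS: Dict[str, CaseFinder] = {}

def register(problem: str):
    """Decorator that adds a new algorithm without touching EHR internals."""
    def wrap(fn: CaseFinder) -> CaseFinder:
        FINDERS[problem] = fn
        return fn
    return wrap

@register("diabetes mellitus")
def diabetes_finder(data: dict) -> bool:
    # Illustrative rule only: two HbA1c values at or above 6.5%
    return sum(1 for v in data.get("hba1c", []) if v >= 6.5) >= 2

def suggest_problems(data: dict) -> List[str]:
    """Return candidates for the clinician to verify; never auto-file."""
    return [problem for problem, fn in FINDERS.items() if fn(data)]

print(suggest_problems({"hba1c": [6.7, 7.1]}))  # → ['diabetes mellitus']
```

The design choice worth noticing is that new algorithms plug in through the registry, so extending the problem list's "smarts" requires no changes to the EHR code that calls `suggest_problems`.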
My design musings will be accompanied by tinkering. Now that Apple is providing beta-testing capabilities, I expect to make use of them. On the workflow front, I am creating a new workflow technology tutorial. Once that application is ready, I want to test it. It will be hosted on a cloud account, and I will need a few testers. This summer promises to be a good one, with plenty to keep me occupied. So many questions, so little time… See you in two weeks.
A little summer music…