Liberating the EHR User Interface, Part I: The Legacy of LAN-based Client/Server

by Jerome Carter on October 12, 2015 · 2 comments

Everyone has his/her own way of accomplishing a task. Now that I have taken up gardening, I have my own way of mulching, preferred brands, and favorite tools. This is a natural consequence of repeating any task enough times: we all find ways to make things easier for ourselves. Once habits are formed, we are loath to change them without good reason. The essential usability question, then, is how efficient and productive one feels when using a system.

There are formal definitions of usability, but no one need consult a formal definition to determine if he/she is taking longer to do the same task. Here, “longer” could mean many things: more clicks, more screens, more alerts, more time spent trying to decipher a screen or searching for values in a list. No matter, it all comes down to one thing: whether it takes more time to accomplish the same result. I have never heard anyone complain about doing less work while accomplishing the same goal. Never.

Some aspects of usability are universal. Replacing densely packed screens littered with small fonts with more white space and larger fonts works for just about anyone. Likewise, building systems that require the fewest possible clicks or screen changes to perform standard tasks works for everybody. User-centered design, properly applied, handles the universal aspects of usability quite well. However, optimizing standard tasks does not necessarily result in an optimal system for each individual. Going the “last mile” in usability requires that users be able to adjust the system to their personal work habits; one size does NOT fit all. But what is the best way to create user-configurable systems? Answering this question requires taking a look at deeper architectural issues.

First, a mini-history lesson…
Going back 25 years, software was delivered in large, tightly-coupled chunks (Figure 1). MS-DOS was still dominant, and commercial software products managed their own data storage and provided their own user interfaces. Every major software vendor had distinctive UIs and unique file formats for its products.

 


Figure 1

Move ahead to 2000, and MS Windows had just about eliminated MS-DOS, and SQL databases had become widely available. At this point, vendors wrote their UIs to conform to MS Windows standards, and data storage for many products was handled by an RDBMS. In the world of clinical software, MS Windows won the day, with C++ and Visual Basic becoming the main development tools for Windows. Windows NT 4.0 was released in 1996, and Visual Basic 6.0 and SQL Server 7.0 in 1998. Together, these three products paved the way for affordable client/server computing, and EMR vendors quickly adopted them. By 2000, MS Windows software had become the preferred choice, and medical practices started buying client/server systems.

Client/server allowed for the decoupling of storage from applications (no one had to write disk access code any longer) and standardized the UI around MS standards. More importantly, client/server development affected the approach to software architecture, and those effects are still with us.

Client/server systems relied on “fat clients.” The database lived on a server and the application (fat client) lived on the user’s computer. The easiest way to write a fat client app is also the worst from a design standpoint. VB 6 made it very easy to create forms and connect them directly to the database (Figure 2).

Figure 2. Fat client

In such a design, the “Save” button on a screen could have a routine attached to it that did error trapping/validation as well as SQL code to save the screen contents to a database. This is an example of both low cohesion and tight coupling: changing the software is messy because it consists of one big chunk. Change a table in the database and you have to change the code in the UI. Change the validation rule for phone numbers and you might have to replicate that change in multiple screens. Believe it or not, a lot of legacy C/S code works just like this.
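To make the coupling concrete, here is a minimal sketch of that kind of handler, written in TypeScript rather than VB 6 purely for readability; the table name, column names, and phone validation rule are hypothetical.

```typescript
// Minimal stand-ins so the sketch is self-contained; a real fat client would
// use an actual database driver and native form controls.
interface Database {
  execute(sql: string): void;
}
declare const database: Database;

// A hypothetical fat-client "Save" handler: UI event handling, validation,
// and SQL all live in one routine. Change the patients table or the phone
// rule and this handler (and every screen like it) must change too.
function onSaveButtonClick(form: { name: string; phone: string }): void {
  // Validation embedded directly in the UI handler
  if (!/^\d{3}-\d{3}-\d{4}$/.test(form.phone)) {
    console.error("Invalid phone number");
    return;
  }
  // SQL embedded directly in the UI handler, tied to one specific schema
  const sql =
    `INSERT INTO patients (name, phone) VALUES ('${form.name}', '${form.phone}')`;
  database.execute(sql);
}
```

The screen, the business rule, and the database schema all meet in one routine, so none of them can change without touching the others.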

The rise of the Internet was the next big change. Web applications had to rely on browsers. Browsers are general-purpose, so using a browser as the main UI component required compromises. Browsers were not designed to offer the computing power of a native MS Windows application, so the compromise was to send presentation information to the browser and do the computing and data storage on the server. Browsers are thin clients, and they, along with the web, heralded the day of model-view-controller (MVC) architecture (Figure 3) as a mainstream design approach (MVC itself was invented much earlier).

Figure 3. MVC

With MVC, the presentation information (UI) is handled by the View. The Model manages data access, and the Controller manages programming logic (the rules concerning how these three components best interact depend on your religion; Figure 4 is an alternate take on MVC). The upside to this change is a further decoupling of the architectural components of a software system. Even better, mobile development is MVC-based.
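As a rough illustration of the separation MVC buys, here is a minimal sketch; the interfaces and names are hypothetical and do not follow any particular framework.

```typescript
// Model: owns data access and knows nothing about screens.
interface PatientModel {
  getPhone(patientId: string): Promise<string>;
  setPhone(patientId: string, phone: string): Promise<void>;
}

// View: owns presentation and knows nothing about storage.
interface PatientView {
  showPhone(phone: string): void;
  showError(message: string): void;
}

// Controller: holds the application logic and mediates between the two.
class PatientController {
  constructor(private model: PatientModel, private view: PatientView) {}

  async updatePhone(patientId: string, phone: string): Promise<void> {
    if (!/^\d{3}-\d{3}-\d{4}$/.test(phone)) {
      this.view.showError("Invalid phone number");
      return;
    }
    await this.model.setPhone(patientId, phone);
    this.view.showPhone(phone);
  }
}
```

Because the Controller sees only the two interfaces, the same logic can drive a desktop View, a browser View, or a mobile View, and the Model's storage can change without touching either.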

Figure 4. MVC (alternate take)

The greater the degree of decoupling between architectural components, the easier it is to change them independently. Consider how much more difficult it would be to change a flat tire if the hubcap, rim, tire and axle were all welded together. Many current EHR systems were designed to be fat clients, which makes it harder to change software components independently. When UI code is tightly coupled to other system components, creating user-configurable interfaces is difficult. However, changing from tightly-coupled, low-cohesion software designs to loosely-coupled, high-cohesion designs is not simply a matter of pulling code apart and recompiling. Significant architectural changes require time, money, and an architect who understands the domain as well as software design. Vendors with successful fat client designs will not want to change architectures without a great reason.
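For contrast with the fat-client sketch above, here is what a loosely-coupled arrangement can look like when the UI reaches the rest of the system only through a narrow, well-defined interface; the REST endpoint, URL, and JSON shape below are hypothetical.

```typescript
// A decoupled client fetches patient data over an API instead of talking
// to the database directly. The endpoint path and response shape are
// illustrative only.
interface Patient {
  id: string;
  name: string;
}

async function fetchPatient(baseUrl: string, id: string): Promise<Patient> {
  const response = await fetch(`${baseUrl}/patients/${id}`, {
    headers: { Accept: "application/json" },
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return (await response.json()) as Patient;
}
```

Any client, whether a browser page, a tablet app, or a user-configured dashboard, can consume the same endpoint, which is what makes interface-level customization practical.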

The EHR incentive programs (2009) encouraged the rapid uptake of EHR systems that sported pre-2000 designs just as significant technology changes were occurring: the iPad was released in 2010, cloud technology (which Amazon is credited with starting in the mid-2000s) is just now maturing, and NoSQL databases are finally enterprise-ready. REST has become a standard approach to data exchange, and APIs are common. Many of today’s EHR systems have UI designs from a different era with different technology constraints. It is time to bring UIs for clinical software into the modern era. In Part II, I will discuss a few technical aspects of user interface design. See you then…




Asif Tasleem October 13, 2015 at 10:34 AM

Very nice article. As I come from a software engineering background and understand EHR/EMR as well, I can appreciate the important issue you have highlighted.

Clinical care software should be revamped considering the advancements in software development technologies. For example, the use of WebSockets and AngularJS-based dashboards could bring life to nursing desktops and to interaction among healthcare provider staff.


Jerome Carter October 13, 2015 at 12:11 PM

Thanks for your comment! For too long, the UI for most clinical software has been seen as a view into patient data without regard for how clinicians use information. It is definitely time to rethink the UI in terms of current tech.

