Well, Hello! Yes, it has been a while since the last post.   Thanks to everyone who wrote to ask if everything was okay. Everything is fine.   Writing the monograph has proven to be as challenging and as time-consuming as I dreaded it might be.   The good news is that I am very pleased with the progress, thus far.

At the outset, I created an outline that was partially based on what were to be the theory chapters for the workflow book. That initial monograph outline called for six chapters: a background chapter explaining why a theory was needed; chapters introducing the theory, explaining its key components, and demonstrating its key principles using BPMN; and a final chapter discussing implications and future research paths. The outline survived until I started the third chapter.

The third chapter was originally intended as an explanation of theory components with examples, and since the theory makes heavy use of process concepts, BPMN was to be used. However, a problem arose. The theory introduced a number of clinical process-specific terms and concepts. Rendering these concepts in BPMN would have required intermingling an explanation of BPMN modeling with theory terms and concepts, potentially making the discussion too convoluted and difficult to follow. Having run into a major roadblock, I spent nearly a month agonizing over what to do. After slicing, dicing, and finally tossing the original outline, I realized that I needed to explain how to apply theory principles and concepts independently of BPMN rules and concepts. Until I could see how to do that, everything came to a halt.

The solution emerged a few weeks later when, in revisiting old notes, I decided to use a notation that I had toyed with and eventually abandoned on discovering YAWL. Around the same time, I realized that explaining the theory properly, even when using the notation, required a formal set of modeling concepts with definitions that would guide readers on the correct approach to applying the theory. Again, I returned to previously abandoned notes and ideas, but this time to those originally created for clinical software design. Reviewing them, I realized that the old software modeling ideas could be applied to the problem at hand. (This was ironic, since I had abandoned the software modeling approach for the lack of a sound basis for the methods it advocated.) It was sometime in August that everything clicked, and the new notation and the new modeling paradigm were born! The outline has been redone, and there will be maybe seven chapters, eight at most.

Now, there is a chapter dedicated to modeling concepts, terms, and definitions followed by one dedicated to the new notation. The notion of modeling clinical processes has been completely separated from the notion of creating workflow applications. I hope this separation makes learning the material easier while encouraging wider application of the theory and model framework to clinical care issues that are unrelated to software development. The first four chapters, which now address background, theory, modeling, and notation, will be useful across a range of clinical care problems and challenges. And now that those chapters are finished, I will add chapters that offer a brief overview of YAWL/workflow patterns and BPMN 2.0 and one that explains the issues encountered in using the modeling framework with BPMN 2.0. The implications chapter will still be the final one.   Here is one intriguing implication: The model framework and notation seem to be useful for creating specifications for clinical care systems that are not process-aware.

Process-Aware Analysis and Design
One of the best things I ever did for professional growth was taking the time to read, cover to cover, books on software architecture and design, and then to try my hand at applying what I learned. By “design,” I mean the inner workings and structure—functions, objects, design patterns, and algorithms. Over the years, I had read many books on programming and written a lot of code in different languages. I took programming courses in college and learned BASIC before college. I have also interacted with a lot of programmers.

Many programmers are not formally trained in computer science or software engineering, and such training is not necessary for many applications, such as building a front end to a database, as is the case in many business applications. Now, and I will get pushback on this, what I have learned is that the more complex the software system, the more important knowledge of software architecture, algorithms, and discrete math becomes. One does not necessarily need a CS degree to learn these things, but an understanding of these topics is critical for anyone working on complex software systems. Clinical care systems affect lives just as much as the software in a 747. Clinical care software is complex.

It took a while for me to get my head wrapped around object-oriented analysis and design (OOA&D), which is a formal approach to software development. However, one can create applications in Java, C#, or any OOP language without deeply studying OOA&D. The quality of the application may be suspect, but one can build something that works. Any programming language has the potential to be the proverbial rope with which one hangs oneself—rope that wise people shun collecting.

One of my favorite programming blogs from the early 2000s used to discuss OOP with mild, and occasionally overt, contempt. The common refrain was that anything that can be done in an OOP language can be done in C, which is true, but beside the point. The point the OOP skeptics missed was that simply getting programmers to use an object-oriented programming language was never the goal when object-orientation was being investigated. Rather, the goal was a more formal approach to software design and development, one that would ultimately lead to more robust, reliable, safer, and more scalable systems. Object-oriented programming was, and is, only one part of OOA&D.

To be fair to the object-orientation skeptics of 15 years ago, OOA&D methods were in their infancy, and supporting tools were new. Java was released in 1995, C# in 2002. The definitive GoF design patterns book was released in 1994. It took a while for everything to come together—methods, best practices, tools, languages—to make the advantages of the object-oriented approach clear. I see the same progression with process-aware analysis and design.

BPMN, YAWL, and workflow patterns sprang from attempts to build computer applications that automate business processes. And, as I have learned, adapting them to clinical processes can be difficult. The main reason for the translation difficulty is the lack of process-aware analysis and design methods that focus on the nuances of clinical processes. BPMN 2.0 and YAWL are Turing-complete languages; they have everything required to create applications. However, the problem one faces today when creating process-aware clinical care applications is exactly analogous to that of someone adopting Java in 1995, when OOA&D methods were still maturing (UML was released in 1997). There are no best practices or methods for clinical workflow analysis and modeling, no guiding standards for determining how different types of clinical processes are best modeled, no books.

Many professional societies are asking HHS to delay MU requirements because vendors have not been able to keep up with 2015 certification requirements. Many MU EHR requirements were intended to “help” clinicians, and the assumption was that adding features == help (Is the EHR Defunct?). Nope. Attempting to help clinicians without a detailed understanding of what they do and what help is desired is like giving someone a clueless assistant to whom one has to explain how to do everything. Who would not rather work alone?

Process-aware analysis and design (PA&D) methods are essential for creating software that helps when asked and otherwise keeps quiet. Applying this standard to current EHR systems would mean that most of what exists today would have to be replaced. For those who protest that the hegemony enjoyed by current EHR vendors is too great to overcome, I offer that one need only look at the market leaders in various fields who, over the last 25 years, have disappeared after being considered beyond challenge. Processes, which current software systems address ham-handedly, are the key. The arrival of PA&D tools and methods customized for clinical care will shift the balance in the market. The tools needed to build a better mousetrap have been developing over the last 10 years; only PA&D tools and methods are missing. The future of clinical care software is mobile + the cloud + processes. Which brings me to…

Apple Chips
I gave up my flip phone almost exactly five years ago. Before then, I simply did not see a reason to own a smartphone. Internet access everywhere was not a need, and texting was fine on the phone I had. Emails were checked every few hours. I got a smartphone because not doing so would have made me the odd man out in the family.

The apps I used were convenient but less capable than their web-based cousins. The value of a smartphone as more than a simple convenience became apparent one winter when my car would not start and I had to be somewhere in 20 minutes. It was a less-than-ten-minute drive, but there was no other way to get there. I tried calling a taxi, which wasted three minutes on hold. Roadside assistance was an hour's wait. I had heard of Uber, so, standing on my front porch, I downloaded the app, registered, and arrived on time. The combination of computing power, phone, GPS, and internet access saved the day! Point taken…

Smartphones and tablets, with their collection of sensors, video capabilities, and voice recognition, are far more compelling as clinical care devices than desktops or laptops on carts. One major problem has been processing power and memory.

The lack of processing power limited what early apps could do. The surprise here has been Apple's efforts in advancing mobile processors. When the A7 chip was introduced in 2013, years before anyone expected a 64-bit mobile chip, the reaction in many quarters was, “So what?” Many even said that the only value of a 64-bit CPU was addressing more memory. (No one who knows anything about computer technology would say this out loud.) Processing power is important; registers are a big deal. Since then, Apple has continued to push forward; the most recent chip, the A11, includes the “Neural Engine,” hardware support for neural networks capable of 600 billion operations per second. Benchmark rumors say the chip is faster than my 2012 MacBook Pro.

High-end smartphones and tablets are real computers, with more types of potential inputs and better communications capabilities than desktops and laptops. Thanks to high-speed Internet access, the cloud works as a data store. (BTW, there is a formal definition of cloud computing from NIST.) BPMN-based workflow tools are widely available, commercially or as open source. Finally, document and graph databases have matured and are available via the cloud. The infrastructure required to support robust clinical applications on mobile platforms is here, and it works.

Voice input is getting creepy. I just upgraded to macOS Sierra, which includes Siri. Siri has been turned off on my phone for as long as I have had it. A few days ago, I tried it on my five-year-old Mac, and its accuracy is much improved. Alexa is beginning to make me paranoid. Google, Apple, and Microsoft have made machine learning tools readily available to developers. The processing power available to these tools is moving me into Elon Musk territory. (I'm only half kidding.) Having built a neural net application in Modula-2 in 1987, I have always found the technology intriguing. Now, I am forced to consider what will be possible in 10 years. Season 3 of Humans may be less entertaining…

Sorry for not updating the resource pages for so long. Regular updates will resume on both EHR Science and Clinical Workflow Center on September 19th.

Until next time…
