Control-flow patterns capture process information related to movement from task to task. However, task execution does not tell the whole story of workflows. Work usually consumes and produces information as well, and well-designed systems anticipate information needs. With this in mind, workflow modeling should include information flows as well as task execution sequences.
In Workflow Data Patterns, Russell, ter Hofstede, et al. identify more than 40 patterns of data use. The data patterns are grouped into four categories based on visibility, interactions, transfers, and routing. The authors introduce a hierarchy of workflow components to illustrate data movement. The table below describes them, arranged from the lowest level to the highest. (Definitions are adapted from Workflow Data Patterns.)
| Component | Description |
|---|---|
| Atomic Task | A self-contained unit of work |
| Block Task | A task that consists of a sub-workflow (a series of tasks that accomplish a specific goal) |
| Multi-instance Task | A task that may have multiple instances of itself running independently |
| Case/Process Instance | An executing instance of a workflow model |
| Workflow Model | A collection of tasks connected in a directed graph that captures the execution sequence |
| External Environment | The operating environment of the workflow, including external applications, data stores, etc. |
Notice that atomic tasks have no lower-level components whereas block tasks do. A multi-instance task has more than one copy of itself running, and each copy is independent of the others. Tasks, when combined, form a case, which is simply an executing instance of a workflow model. Finally, the environment is where all cases and workflow models live.
Application to Clinical Workflow Modeling
In the paper mentioned above, data patterns are discussed primarily at a level suitable for software design and simulation of workflow systems, not from the standpoint of a person modeling a clinical workflow. Fortunately, however, the patterns make perfectly good sense when analyzing workflows that occur in typical clinical settings. This is the perspective on which I will focus.
Data visibility patterns describe the level of access that various workflow components have to data. The lowest level component is an atomic task while the highest is a case. Access to data elements can be restricted, as necessary, to assure proper case execution. For example, data elements in an atomic task cannot be seen outside of that task. Thus, visibility follows a hierarchy. Data visible at case-level can be seen by every task. However, data within a task can be restricted so that it can be seen only within that task.
In software design terms, this is basically about controlling variable scope, an essential feature of all modern programming languages.
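The scope analogy can be sketched in a few lines of Python. This is purely illustrative; the variable names and clinical data are hypothetical, and module-level versus function-local scope stands in for case-level versus task-level visibility.

```python
# Case-level data: visible to every task in the case.
case_allergies = ["penicillin"]

def triage_task():
    # Task-level data: visible only inside this task.
    room_number = "4B"
    return f"Allergies: {case_allergies}, room: {room_number}"

print(triage_task())    # the task can read case-level data
# print(room_number)    # would raise NameError: task-level data
                        # is invisible outside the task
```

Uncommenting the last line fails exactly the way the visibility patterns predict: data scoped to a task cannot be seen by the rest of the case.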
Information can be restricted to only those who absolutely require it. For example, medical records staff receive outside chart requests of which medical assistants are unaware; conversely, allergy information is used by clinical personnel but not by front-desk staff.
While visibility patterns address workflow components’ access to data elements, interaction patterns describe how components may interact with one another when sharing data. Interaction patterns cover all possible permutations of component data sharing: task-to-task, case-to-case, case-to-task, block-to-task, task-to-environment, environment-to-case… you get the picture.
Functions, methods, procedures, and components may share information as needed to perform their work. This sharing can occur through direct invocation or via APIs, web services, and the like.
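A task-to-task interaction by direct invocation might look like the following sketch, where one task’s output becomes the next task’s input. The function names and record fields are hypothetical, chosen to mirror the clinical example below.

```python
def collect_insurance(patient_name):
    # Front-desk task: produces an insurance record.
    return {"patient": patient_name, "payer": "Acme Health"}

def order_labs(insurance_record):
    # Lab task: consumes the record produced upstream.
    return (f"Labs ordered for {insurance_record['patient']} "
            f"({insurance_record['payer']})")

record = collect_insurance("J. Doe")
print(order_labs(record))
```

In a real workflow system the hand-off might go through a queue or a web service rather than a direct call, but the pattern, one component producing data that another consumes, is the same.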
Information is shared as needed. Insurance information captured by front-desk staff is shared with lab personnel. Similarly, allergy information is used among MAs, doctors, and nurses, and is sent to pharmacy staff as required.
Data transfer patterns are somewhat more technical in nature, since they deal with the actual means of transfer. For those who know a programming language, these patterns will be familiar. When variables are passed directly between functions/methods/procedures, either a copy of the variable’s value is sent or the memory location where the value is stored is sent. The former is referred to as passing by value, the latter as passing by reference. The key difference is that when a variable is passed by value, any operations done on the copy do not affect the original; when passed by reference, the receiving function/method/procedure accesses the actual memory location where the value is stored, and any alteration it makes is permanent. Transfer patterns also account for concurrency and the ability to prevent more than one function from attempting to alter a value simultaneously.
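Python passes object references rather than offering true by-value/by-reference parameters, but the distinction can be approximated: hand a callee the original mutable object (reference-like semantics) or an explicit copy (value-like semantics). The function names and chart fields below are hypothetical.

```python
import copy

def receive_by_reference(chart):
    # Mutates the caller's original object; the change is permanent.
    chart["allergies"].append("latex")

def receive_by_value(chart):
    # Works on an independent deep copy; the original is untouched.
    chart = copy.deepcopy(chart)
    chart["allergies"].append("latex")
    return chart

original = {"allergies": ["penicillin"]}

receive_by_value(original)
print(original["allergies"])    # ['penicillin'], unchanged

receive_by_reference(original)
print(original["allergies"])    # ['penicillin', 'latex'], altered permanently
```

The concurrency concern mentioned above maps to the same sketch: if two callers received the original chart, a lock (e.g. `threading.Lock`) would be needed to keep them from altering it simultaneously.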
Coming up with a direct analogy in human workflows is difficult. Perhaps a good example would be the difference between giving someone an original document versus a copy of the original. If they lose or change the original, and there is no other copy, you have a problem.
Data-based routing covers patterns that describe how data may be used to control the execution of a case. For example, a task may be made to execute if a variable has a certain value, or be made to wait until certain data exist.
There are many possible examples of data changing task execution: a value exceeding a threshold, a task waiting for a data element to reach a required value, or all tasks terminating when a critical value occurs.
In the scenario used in previous posts, the verify-insurance task will not run until insurance data has been collected from the patient. This is an example of waiting for data. A patient with a temperature of >105 being seen before those who arrived earlier is an example of a data value altering task execution.
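Both routing rules from the scenario can be sketched as simple data checks. This is an illustrative sketch only: the field names are hypothetical, and I am assuming the >105 temperature is in degrees Fahrenheit.

```python
def can_verify_insurance(patient):
    # Precondition routing: the verify-insurance task waits
    # until insurance data exists.
    return patient.get("insurance") is not None

def triage_priority(patient):
    # Threshold routing: a temperature above 105 (assumed °F)
    # jumps the queue (lower number = seen sooner).
    return 0 if patient.get("temp_f", 98.6) > 105 else 1

patients = [
    {"name": "early arrival", "temp_f": 99.1, "insurance": "Acme Health"},
    {"name": "febrile patient", "temp_f": 105.4, "insurance": None},
]

queue = sorted(patients, key=triage_priority)
print([p["name"] for p in queue])   # the febrile patient is seen first
print([can_verify_insurance(p) for p in patients])
```

The data values, not the arrival order, determine which task runs next, which is exactly what data-based routing patterns describe.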
Just as design patterns have helped software engineers by offering a set of solution templates for common software design challenges, workflow patterns provide templates for modeling and documenting processes. Workflow modeling can be done for software implementation, for software design, or simply to improve how things work. In all three instances, coordinating tasks and the data they produce/consume is the key to efficiency, productivity, usability, and safety. Having a formal set of patterns, control-flow and data, that is mathematically sound and readily translated to both software designs and human workflows provides an incredible tool for building expressive, usable, and responsive clinical systems.
The work done by the authors mentioned in this series is nothing short of revolutionary, providing a seamless set of concepts and tools that move from theory to real-world software modeling tools and workflow management systems. However, don’t take my word for it. If you are a workflow analyst, software engineer, informaticist, educator, or healthcare professional interested in better software or care quality, or if you simply want a better way to express how things work, take the time to investigate the activities of the Workflow Patterns Initiative. As Adrian Monk would say, “You’ll thank me later…”
(I am considering making this series of posts available as a single PDF that includes additional citations and case studies. Is this a good idea? What would you like to see included? Please provide feedback using the contact form or forum.)