Algorithms in Clinical Informatics: The Path from Here to There

by Jerome Carter on December 9, 2013

Designing software, like practicing medicine, is in essence about solving problems.   Patients do not present with a series of multiple-choice answers from which one may select, and complex software systems are never built using stock requirements.   Both activities are as much art as science, and the results vary greatly among practitioners.   Like most people, I never considered the practice of mathematics to be in any way similar to either software design or medical practice, until I read The Advent of the Algorithm by David Berlinski (1).  Now, I see them as more alike than different.

Learning mathematics from a book in modern times unfortunately hides the messiness that preceded the knowledge now presented so pristinely in texts. We never see the work, the blind alleys, and the disputes that now allow one to take a derivative or make use of set operations. Building on a foundation of logic, mathematicians have, over the centuries, created tools that we all use without a second thought because we know they work, every time.

Reading Berlinski, one learns how the algorithms we take for granted (or cringe at, depending on what happened in high school algebra) came into being. He begins the history of algorithms with the work of Leibniz and carries it forward to modern computing. In between, he recounts the efforts of all who worked to assure that the steps taken and rules used to solve a problem are sound and reproducible. Today’s problem-solving recipes came from many years of messy kitchens. As an informaticist, internist, and programmer, I find this to be extremely comforting. Algorithms take time.

What is an “algorithm”?
Cormen et al. offer this view of algorithms (2):

Informally, an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output.

We can also view an algorithm as a tool for solving a well-specified computational problem.  The statement of the problem specifies in general terms the desired input/output relationship.  The algorithm describes a specific computational procedure for achieving that input/output relationship.

The emphasis placed on the input/output relationship is one that I find quite helpful. Functions and relations tie inputs to outputs. (Functions are a type of relation.) Most people learn about functions in terms of real numbers (e.g., f(x) = x³) and learn about relations when they first encounter relational databases. This usually means that these constructs are not applied directly to informatics problems, which is unfortunate.

Functions and relations may be applied to any set, not just numbers.  When the inputs and outputs are numbers, there are a variety of proven methods (i.e., algorithms) for mapping inputs to outputs.  The methods that one learns for performing long division, factoring a polynomial, or calculating a standard deviation are algorithms.     When the inputs and outputs are medications, patients, signs, symptoms and lab values, algorithms are still required.  However, I am willing to wager that few informaticists, or healthcare professionals who prescribe a medication or add a problem to a chart, see themselves as evaluating functions or forming relations. Yet, that is exactly what is occurring.  In fact, I think it is fair to say that every  patient-data link is either a relation or function, whether on paper or in an electronic system.   Not convinced? Here is an example.
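To make that claim concrete, here is a minimal Python sketch, with entirely hypothetical patient identifiers and findings, of a patient-data link treated first as a relation and then as a function:

```python
from typing import Dict, FrozenSet, Set, Tuple

# Hypothetical clinical input: a patient identifier plus a set of findings
# (signs, symptoms, test results). The names here are illustrative only.
ClinicalInput = Tuple[str, FrozenSet[str]]

# A relation is simply a set of (input, output) pairs ...
problem_relation: Set[Tuple[ClinicalInput, str]] = set()

# ... and a function is the special case in which each input maps to
# exactly one output, a constraint a dict enforces by construction.
problem_function: Dict[ClinicalInput, str] = {}

inputs: ClinicalInput = ("pt-001", frozenset({"polyuria", "HbA1c 9.2%"}))

problem_relation.add((inputs, "Type 2 diabetes mellitus"))
problem_function[inputs] = "Type 2 diabetes mellitus"

# Identical inputs retrieve the identical output, the reproducibility
# that an explicit algorithm is meant to guarantee.
assert problem_function[inputs] == "Type 2 diabetes mellitus"
```

Nothing about the clinical content changes; what changes is that the link between inputs and output is now explicit and reproducible.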

Problems are relations
When using a paper chart, a new problem is officially assigned to a patient by writing it in a specific chart location: the problem list. This action can be viewed as the simple act of writing a string of characters on a piece of paper. Alternatively, it could just as easily be viewed as the output of a relation in which one or more inputs (e.g., a patient and one or more signs, symptoms, test results, etc.) are tied to a specific output in the form of a problem or diagnosis.

The differing viewpoints can significantly affect how a system is designed. The former view could result in a software requirement specifying that, in order to record a problem, a clinician needs to add one or more terms to a list. That requirement could be met by a system that simply allows text-based lists to be edited. The latter view, however, recognizes that a relation is involved. Here, the analyst is forced to ask what algorithm should be used to ensure that identical inputs always result in the same output. This is the question that mathematicians started asking 300 years ago, and whose answer Berlinski has documented so well. Today, it is just as relevant for clinicians and informaticists as it was for mathematicians three centuries ago.
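A hedged sketch of how the two requirements might differ in code, using an invented normalization table purely for illustration, looks like this:

```python
from typing import List

# Viewpoint 1: recording a problem is just editing a text list.
def add_problem_as_text(problem_list: List[str], text: str) -> None:
    problem_list.append(text)  # "DM2", "diabetes", "T2DM" all pass through unchanged

# Viewpoint 2: recording a problem evaluates a function, so identical
# inputs must always yield the same output. This tiny normalization
# table stands in for whatever algorithm the analyst actually chooses.
_NORMALIZE = {
    "dm2": "Type 2 diabetes mellitus",
    "t2dm": "Type 2 diabetes mellitus",
    "diabetes type 2": "Type 2 diabetes mellitus",
}

def add_problem_as_relation(problem_list: List[str], text: str) -> str:
    coded = _NORMALIZE.get(text.strip().lower(), text.strip())
    if coded not in problem_list:  # identical inputs leave the list in the same state
        problem_list.append(coded)
    return coded

problems: List[str] = []
add_problem_as_relation(problems, "DM2")
add_problem_as_relation(problems, "T2DM")
print(problems)  # ['Type 2 diabetes mellitus'], one problem rather than two spellings
```

The second version forces the analyst to decide up front what counts as the "same input" and the "same output", which is precisely the question the relation view raises.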

Rebooting clinical software
Clinical software results from an interaction between software engineering and health care. That interaction occurs because of the need to solve problems. Some are basic, such as recording actions and displaying patient information. Others, such as recommending a course of action, optimizing clinical work, or enabling semantic interoperability, are complex. Unfortunately, we have no algorithms for accomplishing these goals that match the rigor of mathematical ones. Nor do we have an accepted theoretical framework or a practical mechanism for testing or blessing clinical informatics algorithms. That fact obviously has not prevented the development of clinical software, though it very likely has a lot to do with why there are so many complaints and debates about EHR systems.

Informatics analysis for software design
My evolving view of algorithms and growing appreciation of functions and relations have changed the way I analyze informatics problems. Seeing functions or relations in seemingly mundane acts, such as assigning a gender to a patient or adding a new problem to a list, improves the quality of the resulting analysis because doing so removes the illusion that being simple in appearance is equivalent to being simple computationally. Why? Because functions and relations always require algorithms, and recognizing that an algorithm is required promotes critical thinking.

An algorithm-based approach to informatics analysis should work even for software interactions such as reviewing information. From an input/output standpoint, looking at a medication list takes the user’s starting knowledge as input and his or her updated knowledge as output. Of course, one cannot know exactly what exists in someone’s head. However, for the purposes of software design, zero knowledge can be assumed as the default to guide initial information displays. From there, user-configurable display options could provide a means to adjust what is presented as needed. Taking this approach, any information review activity could be enhanced by designing an algorithm that creates a standard view of basic information while giving users the ability to adjust what is presented according to their needs.
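As one possible sketch of that idea, again with invented field names and no claim about how any particular EHR does it, a view-building algorithm might merge a zero-knowledge default with user-level overrides:

```python
from typing import Optional

# Hypothetical default display for a medication list, built on the
# assumption that the user starts with zero knowledge of the regimen.
DEFAULT_VIEW = {
    "fields": ["name", "dose", "route", "frequency", "start_date"],
    "sort_by": "name",
    "show_inactive": False,
}

def build_medication_view(user_prefs: Optional[dict] = None) -> dict:
    """Input: the user's display preferences (possibly empty).
    Output: the view specification that is actually rendered.
    The default carries the zero-knowledge assumption; preferences
    adjust what is presented, not what is recorded."""
    view = dict(DEFAULT_VIEW)
    view.update(user_prefs or {})
    return view

# A pharmacist-style override, purely for illustration.
print(build_medication_view({"sort_by": "start_date", "show_inactive": True}))
```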

Examples of basic information might include alerts for medications that are soon due for a refill, medications that may require adjustment based on newly arrived lab data, or the availability of a new generic. The key point is that informatics analysts would gain additional valuable insights into clinical processes that cannot be readily obtained from end-user observations and interviews. Even better, by adapting mathematical constructs and trying to match their rigor, clinical informatics as a field would gain tools and procedures essential for growth and formalization. It has worked for mathematicians; why not for informaticists? Clinical algorithmics, anyone?
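As a closing illustration, one of those basic items, a refill-due alert, can be expressed as a small input/output algorithm; the field names and the seven-day window below are assumptions made purely for the example:

```python
from datetime import date, timedelta
from typing import Optional

def refill_due_soon(last_fill: date, days_supplied: int,
                    today: Optional[date] = None,
                    window_days: int = 7) -> bool:
    """Flag a medication whose supply runs out within the alert window.
    Input: the fill history; output: a boolean that drives a basic-view alert."""
    today = today or date.today()
    runs_out = last_fill + timedelta(days=days_supplied)
    return runs_out - today <= timedelta(days=window_days)

# Example: a 30-day fill dispensed 25 days ago triggers the alert.
print(refill_due_soon(date.today() - timedelta(days=25), 30))  # True
```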

  1. Berlinski D. The Advent of the Algorithm: The 300-Year Journey from an Idea to the Computer. San Diego: Harcourt; 2001.
  2. Cormen TH, Leiserson CE, Rivest RL, Stein C. Introduction to Algorithms. Cambridge, MA: MIT Press; 2001.