Beyond the hype: evaluating e-health standards

A new e-health standard comes along every couple of years. In Gartner hype cycle terms, it starts out on the rise toward the ‘peak of inflated expectations’, then falls into the ‘trough of disillusionment’, before either dying or rising again over the ‘slope of enlightenment’ to a ‘plateau of productivity’. Most standards and e-health technologies (standards + their tools and artefacts) die before getting to this plateau. But why? What’s wrong with them? How can we pick a winner?

(Gartner Hype Cycle, from Wikipedia)

The latest hyped e-health technology is of course HL7 FHIR – Fast Healthcare Interoperability Resources.

At the recent ONC hearing I attended, Wes Rishel (ex-Gartner and an e-health standards veteran) raised this very question during question time. I've been thinking about these issues for many years, but had never tried to answer that exact question. I have now put together a brief analysis – Desiderata for successful e-health standards. A summarised version can be obtained by using the ‘Criteria’ column from the final table. The idea here was to produce the shortest list of criteria, each of which, on its own, detects failure. The criteria are as follows, with a small illustrative sketch after the list:

  • Platform
    • Does the technology define overall elements of a platform into which recognisable specifics could be plugged, e.g. information models, content definitions, workflow definitions, APIs, code generation tools, etc?
    • Does the technology define something that can be properly integrated into an existing platform definition?
  • Semantic Scalability
    • Does the technology provide a practical method of dealing with potentially massive clinical content diversity?
    • Does the technology provide a practical method of dealing with potentially massive localised variability?
    • Does the technology provide a practical method of dealing with ongoing change in information requirements due to new science, -omics and drugs; new clinical protocols and methods; legislative changes; and changing patient / consumer needs?
  • Implementability
    • Does the technology provide a way for clinically complex models to be converted to a form consumable by ‘normal developers’ to build ‘normal software’, including for the purpose of integrating with existing systems, data sources and sinks?
  • Utility
    • Are all data elements easily accessible at the finest grain?
    • Does the technology provide a way to query fine-grained information based on models of content, not physical representation (physical DB schemas, specific XML doc schemas etc)?
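
To make the fail-fast character of this list concrete, here is a minimal sketch of how the checklist might be applied. The criterion strings are paraphrases of the list above, and the assess() helper is my own illustration, not part of any published tool:

```python
# Illustrative only: criterion texts paraphrase the list above; the
# assess() helper is an invention to show the fail-fast mechanics.
CRITERIA = [
    "defines, or integrates into, a platform",
    "practical method for massive clinical content diversity",
    "practical method for massive localised variability",
    "practical method for ongoing change in information requirements",
    "complex models consumable by 'normal developers'",
    "all data elements accessible at the finest grain",
    "querying based on content models, not physical representation",
]

def assess(technology: str, answers: dict) -> bool:
    """Pass only if every criterion is met; any single 'no' detects failure."""
    for criterion in CRITERIA:
        if not answers.get(criterion, False):
            print(f"{technology}: fails on '{criterion}'")
            return False
    print(f"{technology}: passes all criteria")
    return True

# Invented answers, purely to show the mechanics:
assess("Standard X", {c: True for c in CRITERIA[:-1]})
# -> Standard X: fails on 'querying based on content models, not physical representation'
```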

By these criteria, if an e-health standards technology fails on any one of them, it won't survive in the long term, and short-term sunk costs will become an economic liability and possibly an obstruction to finding standards technology that actually will work.

These desiderata are a work in progress, but I would argue that the items above are somewhere close to a reasonable Occam's Razor for testing health standards technologies. I would particularly point to the semantic scalability category. If we agree that today e-health is broadly about the data, then we must by definition agree that a successful standards technology must address the content – i.e. what the data say and how to compute with them – in a way that accommodates complexity and change over time. Note that these criteria call for practical methods, not just in-theory ideas about dealing with complexity.

Note also that clinical content diversity and local variability are not the same thing. The former is to do with the sheer number of concepts in the domain, and is the reason why SNOMED CT is huge and will only get bigger. The latter is to do with localised variants of standard domain concepts, due to local practice differences, differing rules and management, and a myriad of other things. The typical example is the data differences between discharge summaries from different locations. An e-health standard that does not account for this reality in a sustainable way will fail in the long term.
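
A toy illustration of the distinction, with field names invented for the example: diversity lives in the size of the value spaces (the coded vocabularies a field like ‘diagnoses’ can draw on), whereas variability is a local structural delta against a shared model:

```python
# Invented field names, purely illustrative of the two phenomena.
BASE_DISCHARGE_SUMMARY = {
    "patient_id": "Identifier",
    "admission_date": "Date",
    "discharge_date": "Date",
    "diagnoses": "List[CodedText]",     # content diversity: the value space
    "medications": "List[Medication]",  # here is enormous (cf. SNOMED CT)
}

# One site's localised variant: the same core model plus local deltas,
# driven by local practice and administrative rules.
LOCAL_DISCHARGE_SUMMARY = {
    **BASE_DISCHARGE_SUMMARY,
    "rehab_referral": "Optional[CodedText]",  # local practice difference
    "funding_category": "CodedText",          # local management rule
}
```

A sustainable standard has to manage both: the base model evolves as the science does, while local variants are derived from it rather than forked.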

Under the Utility category, I have included just two proxies for testing the overall value of solutions based on the technology. These are essentially, again, about being able to get at the data. There are probably others that could be added here, but I contend that if querying can't be properly solved, then other utility criteria are probably irrelevant.
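
To illustrate what ‘querying based on models of content’ means, here is a minimal sketch: the caller addresses data via logical paths drawn from the content model, and a resolver hides the physical representation (nested dicts here, but it could equally be relational rows or XML documents). The paths and data are invented for illustration; this is the idea behind model-based query languages such as openEHR's AQL.

```python
# Minimal sketch: query by logical path from the content model, not by
# physical schema. Data and paths are invented for illustration.
def get_by_path(record: dict, path: str):
    """Resolve a '/'-separated logical path against a nested record."""
    node = record
    for segment in path.strip("/").split("/"):
        node = node[segment]
    return node

record = {
    "vital_signs": {
        "blood_pressure": {"systolic": 142, "diastolic": 91, "units": "mm[Hg]"}
    }
}

# Written against the content model; unchanged if the physical storage changes.
print(get_by_path(record, "/vital_signs/blood_pressure/systolic"))  # 142
```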

How do the various e-health standards stack up using the above tool? I will leave that to later posts and other authors for now. But I have tried it out on half a dozen well-known standards technologies in health, and so far the tool’s predictive capability is looking good.
