Recent Changes
20/08/2014 – added stability, industry-acceptable licensing
19/08/2014 – initial writing
Introduction
This page discusses how to evaluate e-health standards for longevity.
Over the last 20 years many attempts have been made to solve the wicked problem of health data interoperability, and more recently, ‘semantic’ versions of the same. The problem to be solved is essentially:
- semantic interoperability across and within enterprises,
- semantic interoperability between layers of functionality within a system,
- with an ultimate aim of being able to compute intelligently on the data.
A much larger list of concrete needs can be constructed from this abstract description. Solving these challenges would result in great advances for:
- shared and community care, since health records can not just be shared but treated as a single point of truth
- individualised, preventive medicine, since semantically computable EHR data are amenable to automated evaluation of clinical guidelines
- medical research, since data would be far more computable, and more data per patient could be aggregated from multiple sources
- public health, since aggregation of computable data from large numbers of patients will clearly enable epidemiological functions as well as routine health statistics
- cost determination, reimbursement, fraud detection and better management of public and private payer funds.
The solution attempts have included many standards and specifications, such as EDIFACT, HL7v2, DICOM, HL7v3, HL7 CDA, EN/ISO 13606, openEHR, ASTM CCR, SNOMED CT, ICDx, OMG CORBAmed and HDTF (RLUS, EIS, CTS2) specifications, more recently HL7 FHIR, and certainly others I have missed here. They have also included many implementation technologies, e.g. (free/open) FreeMed, GnuMed, OpenMRS and Harvard SMART, as well as many commercial products and in-house systems.
None of these on their own has solved the problem, and attempts to connect them together (typically in government e-health programmes) have been far from successful – the costs of trying to integrate disparate standards have far outweighed the benefits.
We know the problem is horribly hard (some references here, and earlier analysis here and here), so it’s not as if the above efforts were generally superficial. Many health informatics experts have a good handle on what the difficulties are; the problem is that the challenges are inherently complex and interwoven, and in some cases probably genuinely intractable.
Every few years, a new approach to solving the problem comes along and is proclaimed the new solution. HL7v3 fixed imaginations in this way for a long while, followed by ASTM CCR, HL7 CDA, and most recently HL7 FHIR. Industry hype aside, how could we evaluate any new e-health technology, i.e. standard, product, or methodology? Is there any better way than simply waiting the requisite 5-10 years for results?
What I am aiming to find can be expressed as follows:
- criteria for assessing longevity, which translates into fitness for investment by industry;
- the shortest possible list of necessary criteria, i.e. failing any of the criteria is an indicator of likely failure in the long term.
Pieces versus the Puzzle
Before stating any success criteria, I need to address one point, the question of how much of the problem any particular ‘specification technology’ covers. Some technologies do a narrow job well, and can’t be criticised for not solving the overall problem if they were not designed to do so. So we need a way of talking about the pieces versus the puzzle. The way I prefer to do this is to call a total solution approach an ‘open platform’, and a partial solution a ‘platform component’. This is not the only possible language, but I think it is one many will recognise today. My idea of an open platform is described here.
What I am interested in here is mainly the overall platform, not just individual pieces. This is because the platform addresses the entire puzzle, or should have the potential to do so.
Conventional Criteria
In order to devise truly useful desiderata for long-lived e-health standards technology, we need to think a bit about what we are trying to evaluate. Most comparative evaluations look at characteristics such as:
- representational strength;
  - typical tests: does standard xyz handle this or that data type? Can ‘participations’ be represented? Does it provide an ‘extract’ concept?
- specific use case coverage;
  - typical tests: is the case where administration occurs with no prior order handled?
These functional capabilities are important, but they only relate to the use of the technology in the given situations. If reasonable formalisms and modern methods are used, specific concepts not provided for can usually be added. These kinds of capabilities are not actually good predictors of longevity.
Criteria for Evaluation of Longevity
The problem with representation and use cases as assessment criteria isn’t: can one extra concept be added? It is: how can the tens of thousands of clinical / medical concepts be dealt with? How can an endless avalanche of new and changing concepts be dealt with? What about all the local variation? This is a problem of innate complexity.
In another dimension, the question of translating complex models, ontologies, terminologies etc. into executable systems has to be dealt with. A semantically powerful standard that can’t easily be turned into real health data, EHRs or applications won’t in the end be useful. And it has to work in the same way across vendors and countries, i.e. in a widely dispersed technology community. This is a question of implementability.
Finally, we need to consider the characteristics of the built result – actual systems and their data. This is ultimately a question of utility.
To go beyond comparisons on the basis of functional semantics, we need to consider the things that usually prevent long term success. Long term success can be understood as sustainability, which translates to making complexity manageable, and making a standard usable by implementers.
Semantic Scalability
Innate complexity is a big part of the problem – it corresponds to 3 killer challenges:
- diversity – sheer numbers of concepts and their interrelations
- variability – variation in use of ‘standard’ concepts in local usage contexts
- change – constant change over time
We can think of the property of a given technology that accommodates these as semantic scalability. Note that we are not interested in whether, with near-infinite effort and resources, a technology could theoretically be made semantically scalable; we are interested in scalability being routinely and economically available.
Principle: a semantically scalable specification technology is one that routinely scales over the domain, over usage geographies and over time.
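By way of illustration, the sketch below shows one well-known way of achieving this kind of scalability: ‘two-level modelling’, in which software depends only on a small, stable reference model, while clinical concepts and their local variants are expressed as separately authored content models. This is only an assumed, minimal rendering of the idea; all class, field and model names are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Any

# Level 1: a small, stable reference model -- the only part that software
# needs to hard-code. It changes rarely, if ever.
@dataclass
class Element:
    name: str
    value: Any

@dataclass
class Entry:
    archetype_id: str                    # which content model governs this data
    items: list[Element] = field(default_factory=list)

# Level 2: content models authored by domain experts and deployed as data.
# New or locally varied concepts mean new models, not new software releases.
BLOOD_PRESSURE_MODEL = {
    "id": "blood_pressure.v1",
    "required": ["systolic", "diastolic"],
    "optional": ["cuff_size", "position"],
}

def validate(entry: Entry, model: dict) -> list[str]:
    """Check an Entry against its content model; return a list of problems."""
    present = {e.name for e in entry.items}
    problems = [f"missing: {r}" for r in model["required"] if r not in present]
    allowed = set(model["required"]) | set(model["optional"])
    problems += [f"unknown: {p}" for p in present - allowed]
    return sorted(problems)

bp = Entry("blood_pressure.v1",
           [Element("systolic", 120), Element("diastolic", 80)])
print(validate(bp, BLOOD_PRESSURE_MODEL))   # -> []
```

The point of the separation is that diversity, variability and change are absorbed at the model level, by domain experts, rather than at the software level.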
Implementability
Turning to the implementability of the technology, I believe this can be understood simply as:
- Can the technology be transformed into effective tools, systems and applications?
This is easy enough to understand superficially – it translates to: can the technology be used to build software? A somewhat deeper understanding would be to say: can the technology be routinely and economically translated into executable software solutions? This strongly implies that normal developers (as opposed to health informatics PhDs) could use it effectively. It also implies that the built solutions have lost none of the semantics of the model / specification representation, i.e. that the means of translation are not lossy.
Principle: an implementable specification technology is one that can routinely and efficiently be transformed into effective semantically-enabled and future-proof solutions by normal developers.
I have assumed here that the ability to connect to legacy systems comes under the general implementability heading, since a large proportion, often the majority of ‘implementation’ is just that – integrating with existing systems and data sources and sinks.
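To make ‘routine and economical translation’ concrete, here is a deliberately toy sketch of model-to-code generation, of the general kind named in the evaluation table below (model-to-schema / model-to-API generation). The model format and the generated output are invented for this example, and stand in for whatever a real specification technology would define.

```python
# Toy model-to-code generation: a declarative content model is turned into
# source code that 'normal developers' can consume directly. The model
# format and all names here are invented for the sketch.

MODEL = {
    "name": "BloodPressure",
    "fields": {"systolic": "int", "diastolic": "int", "position": "str"},
}

def generate_dataclass(model: dict) -> str:
    """Emit Python dataclass source for a simple field-list model."""
    lines = [
        "from dataclasses import dataclass",
        "",
        "@dataclass",
        f"class {model['name']}:",
    ]
    lines += [f"    {name}: {typ}" for name, typ in model["fields"].items()]
    return "\n".join(lines)

print(generate_dataclass(MODEL))
```

A real generator would of course also have to carry across terminology bindings, constraints and the rest of the model semantics, per the non-lossy requirement above.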
Utility
We need to consider finally the ultimate utility of the technology. A specification technology may in fact be scalable and implementable, but the implemented results are not guaranteed to do anything useful unless they address relevant concerns at an appropriate level of sophistication. An implementable standard that is too simple or addresses the wrong things isn’t ultimately of much value.
With respect to utility, we therefore ask questions primarily about the data:
- the ability to represent and access data at the finest granularity. Technologies that make it difficult to get to the data (e.g. document-based EHR solutions) reduce usability.
- the availability of a semantic query methodology. This means: is there a routine way of constructing queries on the data that is based on its content semantics, and that works in the face of diversity / variability / change? There is an implication here that the querying method must address the level of detail available in the data.
Principle: a useful specification technology is one that enables the building of solutions whose data are accessible and queryable at the finest granularity.
These don’t on their own guarantee relevance, but if we assume that the semantic aspects have been engineered to correspond to domain concerns, then there is a very good chance they will.
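As a sketch of what ‘querying based on content semantics’ can look like, the fragment below resolves path expressions against hierarchical data, so that queries are written against the content model rather than against any physical schema. The path syntax and record shape are invented; the general approach is loosely in the spirit of path-based query methods such as openEHR’s AQL.

```python
# Sketch of path-based querying: data items are addressed by content-model
# paths, not by physical DB schemas or document layouts. The path syntax
# and record shape are invented for illustration.

RECORD = {
    "observation": {
        "blood_pressure": {"systolic": 120, "diastolic": 80},
        "pulse": {"rate": 64},
    }
}

def query(node, path: str):
    """Resolve a '/'-separated content path against nested dicts."""
    for segment in path.strip("/").split("/"):
        if not isinstance(node, dict) or segment not in node:
            return None            # path not present in this record
        node = node[segment]
    return node

print(query(RECORD, "/observation/blood_pressure/systolic"))   # -> 120
print(query(RECORD, "/observation/pulse/rate"))                # -> 64
```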
Building a Platform
We still need to include the platform concept mentioned above. This means evaluating the specification technology in terms of whether it is either a) a holistic platform on its own or b) can fit into an identified coherent platform. We can think of this property as platform friendliness.
Principle: a comprehensive specification technology platform covers, or is extendable to cover, the broad gamut of problem spaces and use cases within the overall domain.
Governance over Time
A standards technology may satisfy most or all of the above criteria – being technically powerful, implementable and platform-friendly – but it will eventually die regardless if it is not managed properly over time. There are arguably 4 aspects to governance that relate to the survival of a standards technology, as follows.
- domain involvement in setting scope and requirements
- industry involvement in determining priority and time-frame of changes to be built into any given release
- coherent management of release versioning, such that the standard appears stable over time
- routine problem reporting and actioning mechanism
The first of these relates to relevance. Standards that are developed without actually taking into account domain needs are very unlikely to lead to solutions that do anything useful for the professionals of the domain.
The second is important because industry (i.e. the solution builders) supplies the agents of production, and cannot effectively be dictated to on the pace of change. Past attempts to do so have eventually failed.
Release management is important. If releases are not planned and signalled to industry, and at the same time backward and forward compatible in reasonable ways, industry cannot work with them. Equally, if there is no easy way for developers and users to generate issue / problem reports, and no routine mechanism for addressing these, the technology will appear to industry as unresponsive.
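One concrete convention for signalling compatibility is semantic versioning (semver.org, recommended in the evaluation table below): the release identifier itself tells implementers whether an upgrade may break them. A minimal sketch of that signal, assuming well-formed MAJOR.MINOR.PATCH identifiers:

```python
# Sketch of the compatibility signal carried by semantic versioning
# (semver.org): MAJOR.MINOR.PATCH, where only a MAJOR increment may
# break backwards compatibility. Assumes well-formed version strings.

def is_backward_compatible(current: str, upgrade: str) -> bool:
    """True if moving from current to upgrade should not break clients."""
    cur = [int(p) for p in current.split(".")]
    new = [int(p) for p in upgrade.split(".")]
    return new[0] == cur[0] and new >= cur

print(is_backward_compatible("1.4.2", "1.5.0"))   # True: additive change only
print(is_backward_compatible("1.4.2", "2.0.0"))   # False: breaking change
```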
Finally, the governance structure must provide a mechanism for all stakeholders to raise issues on any of the published deliverables, and it must action those issues in a timely fashion. This is one of the small number of jobs that is actually getting easier over time, with tools like Jira and GitHub issue tracking. Note, however, that the tooling is just support for a properly defined and implemented change control process.
Ultimately, the effectiveness of governance has to be assessed in terms of accountability: does the organisation actually produce the outputs required by its stakeholders? What happens if it doesn’t? Answers to these questions must be available in the governance framework.
Legally and Commercially Acceptable
The last category of criteria is the legal / commercial. A standards technology that is legally or commercially onerous at the outset, or can become so, will fail in the long run. Today, protection against future privatisation of the standard IP and rights to use is required, and is generally only accepted in the form of recognised open source and open content licenses. The history of under-use of ISO standards (some perfectly good – apparently) also tells us that standards that themselves cost money are unlikely to achieve wide use. However, fees for various kinds of membership and sponsorship of the development community appear to be generally acceptable, and are arguably necessary.
An Evaluation Tool
The following tool for evaluating specification technologies in e-health is more or less a restatement of the above, in the briefest possible terms, and using a form of words amenable to easily detecting problems with the standard under evaluation. As mentioned at the outset, it is intended to be the shortest possible list of necessary criteria – i.e. a failure to meet any criterion means that the standard’s longevity is doubtful.
| Category | Sub-category | Criterion | Possible approach |
|---|---|---|---|
| Platform Friendly | Platform framework | Does the technology define overall elements of a platform into which recognisable specifics could be plugged, e.g. information models, content definitions, workflow definitions, APIs, code generation tools, etc? | Requires a comprehensive design effort. |
| Platform Friendly | Platform component | Does the technology define something that can be properly integrated into an existing platform definition? | Either co-developed with other components, or else inherently low coupling / dependency on other components. |
| Semantic Scalability | Domain Diversity | Does the technology provide a practical method of dealing with potentially massive clinical content diversity? | The modelling method has to enable domain experts to do most of the modelling work directly, using tools. |
| Semantic Scalability | Local Variability | Does the technology provide a practical method of dealing with potentially massive localised variability? | It must be easy to create localised variants of standard models. |
| Semantic Scalability | Change over Time | Does the technology provide a practical method of dealing with ongoing change in information requirements? | Representation of content outside software, enabling data to be future-proof. |
| Implementability | | Does the technology provide a way for clinically complex models to be converted to a form consumable by ‘normal developers’ to build ‘normal software’? This should include tools and means of integrating with existing systems and data sources and sinks. | Model-to-schema and model-to-API code generation. Ability to generate schemas representing specific messages and data sets. |
| Utility | Data accessibility | Is the standard designed such that all data elements are easily computationally accessible at the finest granularity? | Typically requires a data- rather than a document-based architecture. |
| Utility | Query methodology | Does the technology provide a way to query fine-grained information based on models of content, not physical representation (physical DB schemas, specific XML document schemas, etc)? | Content models provide a technical basis, e.g. paths, for constructing query fragments. |
| Responsive Governance | Domain-led requirements | Are requirements statement and prioritisation led primarily by domain experts? | The standard requires a forum and means of participation open to busy domain experts, i.e. healthcare, other medical and secondary-use professionals. |
| Responsive Governance | Industry-influenced roadmap | Can the roadmap of future releases, i.e. the allocation of changes to and timing of each release, be influenced by industry implementers, as well as other stakeholders, i.e. government, NGOs, provider organisations? | The standard requires a forum and means of participation open to industry for the purpose of roadmap definition. |
| Responsive Governance | Release stability | Are releases over time coherent with respect to each other in ways that enable economic upgrading of implementations (industry side) and smooth deployments of new versions (user / provider side)? | Backwards compatibility rules need to be developed and published, and tools and human oversight used to ensure that releases contain only expected levels of change. Recommendation: follow semver.org release identification. |
| Responsive Governance | Responsive feedback mechanism | Does a visible and easy-to-use mechanism exist for reporting issues and problems with all levels of artefact, i.e. requirements, current release, reference implementation(s)? | Use of modern issue-tracker tools, plus active monitoring by the governing body. |
| Responsive Governance | Accountability | Is the governing organisation transparently accountable to key stakeholders for its outputs? | Requires appropriate high-level governance, and an active monitoring mechanism. |
| Commercial acceptability | Free core IP | Are the standard and its core computable artefacts free to use? | No charge should be made for the core open artefacts needed to implement and deploy the standard. |
| Commercial acceptability | IP openness future-proof | Are there mechanisms to prevent the IP of the standard and related artefacts from being unilaterally privatised or otherwise made commercially unacceptable over time, including to small companies and user organisations? | Typically solved with recognised open source and open content licenses, e.g. GPL, LGPL, Apache 2, Eclipse, Creative Commons and so on. |
Acknowledgements
I have put the material here together based on personal experience and knowledge, much of which has come from colleagues over the years. Thanks for all those conversations, even if you don’t agree with my conclusions.
With respect to this specific article, I have made additions on Governance and Commercial/Legal issues based on comments from Grahame Grieve, head developer of FHIR and long-time e-health standards developer and implementer.
* * * * * * * *
copyright 2014 Thomas Beale