One of the principal reasons why I and others are proposing (some) type hierarchy in the FHIR Admin resources is as follows (my earlier post on this). Working Groups (i.e. committees) building Resources are currently in the situation of defining Elements in a Resource, i.e. defining name, type, cardinality etc. The Resources are a typed system. However, wherever Reference() is used, typing is being subverted: instead of stating a necessary type, the authors are trying to think of all possible use cases and stating a corresponding list of the types of instances that occur in those use cases.
As described in some detail in this earlier post on the FHIR formalism, a number of FHIR Resources contain ‘choice’ attributes of the form attribute[x], such as the one shown above in Observation. These are mapped in the FHIR UML to a ‘Type’ type, as follows.
This is not particularly helpful to developers and does not make for software that can easily treat the value field as a ‘data value’, which is the clear intention. The problem occurs because, even though there is a well-defined collection of FHIR data types, there is no parent type for them that can be used as the type of attributes in other contexts. This post provides a proposal for how to fix this.
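To make the issue concrete, here is a minimal TypeScript sketch contrasting how a choice attribute like Observation.value[x] typically surfaces in code today with how it could look given a common parent type. The names DataValue, DvQuantity and DvText are hypothetical illustrations, not actual FHIR definitions:

```typescript
// How a choice attribute typically appears today: one optional field per
// allowed type, with nothing in the type system enforcing "exactly one".
interface Quantity { value: number; unit: string; }

interface ObservationToday {
  valueQuantity?: Quantity;
  valueString?: string;
  // ...further value* fields, one per type in the choice list
}

// With an abstract parent type, the field can be typed once and treated
// uniformly as a 'data value' (DataValue, DvQuantity, DvText are hypothetical).
interface DataValue { }
interface DvQuantity extends DataValue { value: number; unit: string; }
interface DvText extends DataValue { text: string; }

interface ObservationProposed {
  value: DataValue;  // any concrete data type is substitutable here
}

const temp: DvQuantity = { value: 37.2, unit: "Cel" };
const obs: ObservationProposed = { value: temp };
```

The second form is what developers working in typed languages would normally expect: a single field whose type names the common supertype, with substitutability doing the rest.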
FHIR has no semantic inheritance, only a generic structural inheritance of Resources from abstract Resources such as DomainResource, Element and so on.
Accordingly, the following symptoms appear in the models:
Resources representing closely related entity types, such as Person, Patient and Practitioner, contain numerous separately replicated copies of common attributes rather than re-using any common definition (this is true across the board, not just for the Admin Resources);
There are no abstract supertypes available for use as the types of other attributes, and thus no useful type substitutability in software, other than via the generic supertypes mentioned above;
To compensate, FHIR uses nearly 200 ad hoc choice type definitions, which neither constitute reliable semantic types (i.e. it is unclear what the criteria for the type of a field like Observation.subject really are) nor map properly to normal typed programming languages.
As a consequence, the FHIR Resources:
are brittle, in the sense that unexpected impacts are likely when changes are made to the main Resources to adjust ad hoc typing;
do not support classic fine-grained software re-use, due to the replication approach;
are likely to limit rather than improve true interoperability, as developers make local variations to models to reduce implementation difficulty.
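As a sketch of the alternative, suppose the common attributes of Person, Patient and Practitioner were factored into an abstract supertype. The name Party below is hypothetical, borrowed from general modelling practice, not an actual FHIR definition:

```typescript
interface HumanName { family: string; given: string[]; }
interface ContactPoint { system: string; value: string; }

// Common attributes defined once, on an abstract supertype...
interface Party {
  name: HumanName[];
  telecom: ContactPoint[];
  active: boolean;
}

// ...rather than replicated separately in each concrete Resource.
interface Patient extends Party { birthDate?: string; }
interface Practitioner extends Party { qualification?: string[]; }

// Substitutability: code written against Party works for every subtype.
function displayName(p: Party): string {
  const n = p.name[0];
  return `${n.given.join(" ")} ${n.family}`;
}

const pat: Patient = {
  name: [{ family: "Lovelace", given: ["Ada"] }],
  telecom: [],
  active: true,
  birthDate: "1815-12-10",
};
```

A change to Party then propagates consistently to all subtypes, which is exactly the fine-grained re-use and reduced brittleness argued for above.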
However, there are changes that can be made that would greatly improve these characteristics, making life much easier for developers and Resource maintainers alike, and extending the life of FHIR. There would be some impact on current profiling efforts, but not a great deal, and the changes are certainly worth considering with respect to the long run, which is the next 10+ years of FHIR adoption and implementation around the world.
Some FHIR purists may take exception to the proposal below; I would urge them firstly to consider the value of standard modelling techniques properly applied, and secondly to take seriously the challenges in maintenance, evolution, implementation and data processing over the next 10 years, and ask the simple question: can we make FHIR significantly better than it is today, reducing costs and improving interoperability for everyone?
What follows is proposed not in the expectation that it will be implemented, but as a basis for thinking about what might be possible at this stage.
I started working in the Health IT area in 1994, on a major European Commission funded project. I attended years of standards meetings at HL7, CEN and occasionally OMG and ISO from 1999 to about 2012. And I’ve observed the constant failure of standards (through inappropriate hopes and expectations) to provide anything like a sustainable solution for interoperability. Here are the lessons I draw from this.
Premise #1: interoperability is an outcome, i.e. an emergent quality, not a created input. You can’t achieve true (automatic) interoperability by trying to engineer for it after the creation of the systems you want to have interoperate. Why? Because interoperability happens at the touchpoints between parts of a system. To achieve it, you have to have a) knowledge of, and ideally b) some say in, the architecture of those components – only then will you understand how to create interoperability at the interfaces in question.
Premise #2: de jure standards should not be mistaken for architecture. Today’s HIT standards are attempts to engineer post hoc interoperability with no knowledge of the system components – they are essentially various forms of message on the wire. The result is O(10,000) mutually inconsistent interoperability points, not an interoperability-enabled architecture. Bureaucrats routinely mistake standards for architecture, saying things like ‘we must base our system on standards x, y, z’, or ‘we’ll design the system based on standards’. Only do that if you want to repeat the cycle of death.
Conclusion #1: any large healthcare delivery organisation or environment has no choice but to define its own architecture, which means thinking about its own data, processes and knowledge assets – in depth. The outline of how interoperability will be achieved at any interface point must be part of that architecture. Only then can any published standard be considered for use, if it truly fits and provides a language of interchange that will be in wide use for the same purpose.
Conclusion #2: the only way to engineer standards that will result in sustainable interoperability is to define an open architecture. ‘Standards’ will just be pieces of that specification that apply at interface points.
There is a hidden requirement for success, which is as follows:
Requirement: to achieve interoperability, common knowledge resources must be defined and used across the entire domain – i.e. ontologies, terminologies, definitions and models of higher-level artefacts such as data sets and guidelines.
With respect to this point, the Health IT domain has already achieved quite a lot at the terminology level; has good de facto standards for shared data sets (openEHR archetypes, Intermountain Healthcare Clinical Element Models), although the SDOs still struggle with the approach; is only just starting to understand ontology; and is making some initial progress in the process and guideline domain. Most of these are still poorly integrated, but the direction is clear.
The details of how to engineer for sustainable interoperability are mostly outlined in this previous post.
The underlying lesson is to recognise that any environment in which interoperability is desired is a complex system, operating on multiple hierarchical levels, with emergent properties at each. Interoperability is one of those properties.
In this post I document further observations on the FHIR resources, made during the transcription of the FHIR R4 resources to the BMM format used in openEHR, as described here. It examines the definition of process state in FHIR resources.
FHIR contains a number of resources that represent workflow actions in healthcare, including ServiceRequest, MedicationRequest, MedicationDispense, Appointment and so on. All of these contain a ‘status’ attribute which is coded with a local code-set representing possible lifecycle states of the action. Here is ServiceRequest:
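In FHIR R4, ServiceRequest.status is drawn from the request-status code set (draft, active, on-hold, revoked, completed, entered-in-error, unknown). The sketch below models it as a small state machine; the codes are those of R4, but the transition table is purely illustrative, since FHIR defines the codes while leaving most lifecycle rules to implementations:

```typescript
// Status codes as in FHIR R4's ServiceRequest.status (request-status code set).
type ServiceRequestStatus =
  | "draft" | "active" | "on-hold" | "revoked"
  | "completed" | "entered-in-error" | "unknown";

// Illustrative transition table, NOT normative FHIR: the standard defines
// the codes but leaves most transition logic to implementations.
const allowed: Record<ServiceRequestStatus, ServiceRequestStatus[]> = {
  "draft":            ["active", "revoked", "entered-in-error"],
  "active":           ["on-hold", "completed", "revoked", "entered-in-error"],
  "on-hold":          ["active", "revoked", "entered-in-error"],
  "revoked":          ["entered-in-error"],
  "completed":        ["entered-in-error"],
  "entered-in-error": [],
  "unknown":          ["draft", "active"],
};

function canTransition(from: ServiceRequestStatus, to: ServiceRequestStatus): boolean {
  return allowed[from].includes(to);
}
```

The point of interest for this review is that each such resource carries its own local code set of this kind, rather than sharing a single formally defined lifecycle model.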
This post continues the review presented in the previous post, where I looked at the Administrative resources of FHIR. Here I take a look at the formalism used in FHIR, i.e. how the resources (and profiles) are formally expressed. FHIR resources are described in terms of a custom formalism expressed as hierarchical tables. The appearance of a resource, along with the elements of the ‘language’, is shown above.
It has to be said in passing that the FHIR website, with its various visualisations, linking etc., is a masterpiece of content-driven presentation.
We have been making steady progress on the openEHR Task Planning specification and visual modelling language (TP-VML) for clinical workflow. One of the differentiators of Task Planning is that, like YAWL, it is designed as a formalism for developing fully executable process plans. This means that all the semantics of a TP Plan are formally defined and executable in a TP engine. It also means that the accompanying visual language, TP-VML, consists of visual elements formally related to the TP model. This is in contrast with BPMN, which is defined as a diagramming language with some formal elements mixed in, and other formal requirements expressed separately in the specification. Nonetheless, we are carefully studying the semantics of OMG’s BPMN2 / CMMN / DMN specifications to make sure we cover the necessary requirements, and use the same conceptual terminology as far as possible.