Why the platform will replace today’s interoperability standards in healthcare

For decades, most of us working in health informatics and e-health have lived on the assumption that ‘interoperability’ is one of the main things we are trying to achieve, and that it is the most important because the lack of it blocks progress on nearly every other priority. In the last decade, the gold standard of interoperability has become ‘semantic interoperability’, a fabled Nirvana in which today’s sewers of recalcitrant proprietary data are magically transformed into a sea of pure Evian whose meaningful molecules will be ‘understood’ by drooling next-generation apps that will instantly discover what is wrong with each of us, and tell us how to fix it.

I have no doubt that exactly the same delusions afflict at least some other domains, perhaps less those that simply cannot afford systems not to talk the same language, such as mobile telephony and industrial control.

This Nirvana will never be reached by the current methods. Some would say it is an illusion and can never be reached by any method, but it is clear that they are wrong. Four hundred years ago, if you wanted to travel in Europe (in an intelligent fashion at least), you needed to learn some basics in numerous languages. About a century ago, a modicum of French, German, English and any Slavic language would get you very far indeed, and today, you need just one language to order a latte, go to the theatre or buy a train ticket: English. But go and meet those in their 20s all around Europe, not to mention middle class India, and half of Africa – all those places where English is a second or third language – and you discover they can do far more than order a coffee; they have internalised English and can have a deep discussion about Game of Thrones season 4, or the senselessness of Brexit. There is now a common English all around the world, owned by nobody.

In the more rarefied atmosphere of serious science and philosophy, the single language had of course already been achieved many hundreds of years earlier – Latin. This was the language in which Henry VIII wrote Assertio Septem Sacramentorum (“Defence of the Seven Sacraments”) in 1521, the language in which Copernicus, Newton, Leibniz and every other learned European communicated, and in which Linnaeus named everything he saw in the natural world. In other contexts, French was another common language, at various times permitting one to use a single language in much of Africa, be understood by the polite society of St Petersburg, and to discuss Asian art in the far east with educated Vietnamese.

In each of these contexts, human communication, at least for a very large core set of concepts, became its own thing, no longer owned by any culture, just as English is no longer owned by the Anglo countries today.

The problem isn’t that all IT systems and products in health can never in principle speak the same language; it is that trying to achieve this state of affairs through thousands of disconnected developers building connectors based on ad hoc messages between those systems is a fundamentally flawed approach, at least in the long term. In the short term it has obvious utility, and no one would argue that lab result messages between a pathology laboratory and a GP clinic are not useful. However, even lab messages differ across countries: different coding systems, structures and transactional semantics abound. Not one of the lab messages used today in Brazil would work in Germany, and Germany’s do not work in Norway.

According to the language metaphor, today’s health IT systems, with their attempts at interoperability, are like people who secretly know a lot but can hardly say anything to each other.

The whole mentality of ‘interoperability’ is that if we just do a better job on standards, we can get all those systems and products to talk to each other. But, other than for ordering a coffee or a train ticket, we can’t, unless those systems and products fundamentally change, and become clients of a common language, rather than obstacles to its achievement. In our domain, this needs to be a common health computing language.

This state of affairs can only be reached by separating out from those products the basic components that create and manage data – the components that create the utterances of these systems – and standardising them across the whole industry into common platform components. An analogy is the way TCP/IP is standard across the entire internet, and most private networks as well: no product or vendor owns TCP/IP, but they all speak it. And remember: TCP/IP was for a long time only a de facto standard, not a de jure one. (The de jure one, ISO’s OSI 7-layer standard, made for pretty diagrams in my networking textbooks in the late 80s, and that’s where it stayed.)

What does health data consist of? It consists of statements recorded in the course of clinical care, planning, research and other activities, e.g. public health and reporting. These range from the mundane, such as vital signs observations, to semantically rich artefacts such as care pathways, care process workflow definitions and clinical trial data-sets. Today, nearly all commercial EMR systems and clinical applications – including open source ones – have their own private version of all of this, with few exceptions. According to the interoperability religion, the standards that the faithful work so hard to perfect in ISO, HL7, CEN, ASTM and other such places (I was one of the faithful for many years) will provide the means for these thousands of products to talk to each other.

This is not without some truth; but following the European languages analogy, all we are really achieving is that those systems can order coffee or a train ticket from each other. They will never internalise a rich common language of healthcare, because each already has its own rich private language, and the mounting numbers of interoperability connectors are mostly mutually incoherent at the semantic level.

To reach a situation where all health IT products understand vital signs observations, care plans and clinical trial data-sets in the same way, they must give up their private languages (or reserve them for Christmas dinners with family and friends) and learn an English of healthcare computing. In other words, we need to forget the idea that they have their own data. Or more precisely, that they have their own language in which to create their data.

Instead, commercial products (including open source) need to work on the principle that they provide applications and intelligence over a platform that manages their data and thus embodies the common language.
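A minimal sketch of this separation, with entirely hypothetical names (no real vendor or platform API is implied): the application owns the function and presentation, while a shared platform component owns the data and the single representation in which it is written and read.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class HealthRecordPlatform:
    """Hypothetical shared platform component: it alone stores patient
    data, in one common representation that every application uses."""
    _records: Dict[str, List[dict]] = field(default_factory=dict)

    def commit(self, patient_id: str, entry: dict) -> None:
        # Every application writes through the same call, in the same form.
        self._records.setdefault(patient_id, []).append(entry)

    def query(self, patient_id: str) -> List[dict]:
        # Every application reads the same data back; no translation needed.
        return self._records.get(patient_id, [])


class VitalSignsApp:
    """A vendor application: pure function and presentation,
    with no private database of its own."""
    def __init__(self, platform: HealthRecordPlatform):
        self.platform = platform

    def record_pulse(self, patient_id: str, bpm: int) -> None:
        self.platform.commit(patient_id, {"type": "pulse", "bpm": bpm})


platform = HealthRecordPlatform()
app = VitalSignsApp(platform)
app.record_pulse("patient-1", 72)
# A completely different application can now read the same entry unchanged.
print(platform.query("patient-1"))  # [{'type': 'pulse', 'bpm': 72}]
```

The design point is that the app never defines its own storage schema; it can only speak through the platform, which is what makes its utterances intelligible to every other client of that platform.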

This is not a completely new idea of course: some people in the terminology domain have claimed that if we could only agree a single terminology, the battle would be won. This is not really true, but it’s not entirely false either. If terminology were to be unified and ‘completed’, it would be like a dictionary plus a minimal grammar book, in the language analogy. Terminology is already something not owned by health IT product vendors, so that’s a good start as well. But having a dictionary and grammar rules doesn’t mean you know how to say anything useful: you need a rich description of the aspects of the real world that you want to talk about. If you want to order a coffee, you need to know what ‘coffee’ is, and that it is something you can ‘order’. Terminology might be able to say that a coffee is-a beverage is-a food, but ‘ordering’ is a more sophisticated concept, and terminologies quickly run out of steam. If you want to say ‘madame, I need to have a 3-day stop in Lisbon on the way from London to Rio’, you both need the same conceptual schema of air travel in Europe.
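A toy illustration of this limit (the hierarchy and names are invented for the coffee example, not real terminology content): an is-a hierarchy answers subsumption questions well, but an act like ‘ordering’ needs a model of the act itself, which the terminology alone does not supply.

```python
# Toy is-a hierarchy: child -> parent (invented, not real terminology content)
IS_A = {
    "espresso": "coffee",
    "coffee": "beverage",
    "beverage": "food",
}

def is_a(child: str, ancestor: str) -> bool:
    """Walk up the hierarchy: the kind of question a terminology answers well."""
    while child in IS_A:
        child = IS_A[child]
        if child == ancestor:
            return True
    return False

print(is_a("espresso", "food"))  # True: subsumption is the easy part

# But 'ordering a coffee' is not an is-a fact. It needs a structure
# describing the act: who orders, what is ordered, how much. That
# structure lives outside the terminology, in a shared conceptual schema;
# the terminology only supplies the meaning of the codes placed inside it.
order = {
    "act": "order",
    "item": "espresso",       # the terminology gives this code its meaning
    "quantity": 1,
    "requested_by": "customer",
}
```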

In philosophy, these descriptions of reality are ontologies, and today they may be realised as formal ontologies, but also in other ways: well designed software class models, business rules, domain content models (archetypes, CEMs, CIMI models in e-health) and ‘templates’ for many kinds of instantiable entity (e.g. data sets).
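As an informal sketch of what a domain content model adds over a bare class model (the field names, units and ranges here are invented for illustration, not taken from any real archetype): it constrains a generic data type into a specific, shareable clinical definition that every conforming system enforces identically.

```python
from dataclasses import dataclass


@dataclass
class Quantity:
    """Generic reference-model type: any measured value with units."""
    value: float
    units: str


@dataclass
class BloodPressure:
    """A toy 'archetype'-style content model: the generic Quantity type
    is constrained into a specific, shareable clinical definition."""
    systolic: Quantity
    diastolic: Quantity

    def __post_init__(self):
        # Constraints every conforming system must enforce identically.
        for q in (self.systolic, self.diastolic):
            if q.units != "mm[Hg]":
                raise ValueError("pressure must be recorded in mm[Hg]")
            if not 0 <= q.value <= 1000:
                raise ValueError("value out of plausible range")


# A valid instance; non-conforming data (e.g. wrong units) is rejected.
bp = BloodPressure(Quantity(120, "mm[Hg]"), Quantity(80, "mm[Hg]"))
```

The separation matters: the generic `Quantity` belongs to the software reference model, while the constraints belong to a domain model that clinicians, not programmers, can author and share.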

The vendors of some of today’s healthcare products would no doubt be offended or even outraged by these claims. But I doubt very much that any of them would actually disagree. The reason is that even the biggest, most expensive solutions (Hospital Information Systems, EMRs etc) cannot today hope to cover all the kinds of activity or data sources in the real world of clinical healthcare or research, even if the opposite was once the case. Today there are tens of thousands of e-health products and solutions, not just dozens, and the cacophony of non-communication is no longer sustainable or solvable by teaching each one to speak to others via some minimal standard messages.

Indeed, the inability to obtain data for a given patient from all (or even most) of the other products used during other encounters of the patient with the health system is now impeding the function of the incumbent large products. Some of the ‘big 6’ EMR vendors have already realised this, some continue to resist, perhaps due to lack of an attractive business model.

Getting from where we are, which is thousands of babel-ing products each talking past each other (succeeding only in ordering the odd coffee, probably delivered cold) and the prospect of interminable building of separate data pipes for each specific communication, to a situation akin to everyone having learned English – properly – would seem to require a revolution. And there will be a revolution, because every one of those products wants to exchange rich health data to do its business, but cannot, other than with a few tailored friendly systems, and on a narrow range of matters.

To change metaphors, it is as if we have realised that there are 50 different railway track gauges, trains that can’t go anywhere beyond their own limited track, and an explosion of customers demanding to be able to travel across the whole country.

The main reason there will be a revolution is that there is now a vast unmet need for the new applications and devices we see appearing every day, not to mention new generations of analytics tools, to connect to a common stream of patient- or cohort-centric health data. Not solving the problem is blocking the future. Solving it the old way – an endless production of ad hoc messages, documents, or ‘profiled resources’ – will be a gargantuan effort that will exacerbate the already unsustainable healthcare spending in the OECD countries, particularly the US, and it will be ten times slower. But why would we do this, when a much better and more cost-effective future is available?

A second very strong driver for the separation of patient healthcare data from vendor products is the growing realisation that patient data must be a holistic entity owned by the patient, or by a trusted third party on behalf of the patient, rather than being a collection of disjointed data heaps jealously guarded by separate companies.

The end-state situation we seek is a platform that implements the language, concepts and formalisms of healthcare, just as English does for modern society, TCP/IP does for the internet, and the 4′ 8½″ standard gauge does for train travel.

This future corresponds to a major re-arrangement of the health IT vendor industry. Rather than each company having its own ‘modelling’ groups and data architecture, specialists will work in companies that make platform components, or else specify the platform. This will save billions, and wipe out most interoperability problems. The designers and developers who work on applications today will work in a new way: with a common language and model library built into a standard platform that they simply instantiate and use, rather than some private database model that looks nothing like anything anyone else has. The bigger companies might even start innovating again. A myriad of smaller companies that are currently paralysed by the prospect of building their own health databases, or else implementing endless ad hoc messages and documents just to order a coffee, will plug into the common platform at minimal cost, and be liberated to concentrate on their innovation.

A new category of work will become prevalent: development of domain models, by domain professionals. Some of this exists today, in the form of authoring of care pathways, terminologies, clinical archetypes and biomedical ontologies.

This separation of common semantics from competitive functionality will enable the health language formalisms and models to be developed properly and taught pedagogically, rather than being represented as the thousands of mutually incoherent data dumps we see inside today’s products.

Now, in the world of politics, I am not big on revolutions. They are usually destructive, promising everything and resulting in terror. We don’t want that in healthcare IT! Neither am I against corporations or commercial activity, at all. But I am for economic efficiency, innovation, and making sure business serves humanity, not the other way around. And the healthcare IT industry can do so much better for the patients and professionals it serves.

So I should be clear that I am using the word revolution mainly metaphorically; what I believe we will see is more like a phase change in the physics sense. Remember creating a supersaturated solution of alum in chemistry class? Slow, distinct nucleations, followed by rapid conversion of liquid to crystal. A beautiful sight.


About wolandscat

I work on semantic architectures for interoperability of information systems. Much of my time is spent studying data, ideas, and knowledge, and using methods from philosophy, particularly ontology and epistemology.
This entry was posted in Computing, FHIR, Health Informatics, openehr, standards.

One Response to Why the platform will replace today’s interoperability standards in healthcare

  1. bertverhees says:

    What makes it so interesting to talk with people from other countries that we learn a second or even third language for that purpose: English, as you say.

    It is not for ordering coffee or buying a train ticket. We let our phones do the talking; there are smart apps that can translate standard messages without errors. While using them you learn, and the next time you are in Rome you know how to order a coffee without the app.

    When we want to talk with foreigners, we want to talk about art, about television, about politics, about philosophy. We want to talk about things that are not simple facts but need good explanation, and also good listeners who understand the feelings and thoughts we want to share. That part is hard.

    English is not my first language, not even my second. I spoke a local dialect, Dutch and German before I learned English. This is common for people from the region where I was raised. I think Germans find their language better understood in the Netherlands than the English do theirs. But Germans can never assume that someone in the Netherlands speaks German; there may be some old sensitivities, but as soon as they start speaking something that from a distance sounds a bit like Dutch, or a kind of English, the Dutch immediately start speaking a kind of German, sometimes even understandably. German is the most complex of the western languages; even many Germans have problems with it. Germans rarely speak English, the French even less. In France, when you go to a GP, chances are that he or she does not speak English, or has not practised it for 20 years. They don’t pick it up from TV either: Game of Thrones is dubbed. Maybe the French even dubbed Allô Allô, that would be really funny: “Une blague allemande est pas pour rire” (“A German joke is no laughing matter” https://www.youtube.com/watch?v=g4ik9vUkj-Y )

    The same goes for Spain (they regard their language as a world language). And don’t mention the Belgians, with three official languages in their small country. They have a language battle: French speakers are taking over villages and demand that their language become the official language in a village where 30 years ago Dutch was the official language, which means that in school, French will be spoken. They hate each other, except when they do well at the FIFA World Cup. Even the police were not able to work together with departments in other language areas.

    How good would it be if a foreign GP had a SNOMED dictionary and you could explain your problem in SNOMEDian. Many conversations at the GP’s office can be coded.

    So, my point: language has a lot to do with history, with power, with culture, with economics and many other things. You still need a lot of languages when you travel through Europe, if you want to talk with people who have not recently been to university.

    You can love a language, or hate it, or find it easy or difficult.

    How different this is in science. People from all over the world work together on inventions in every thinkable subject, each ordering coffee in their own language, but communicating about complex subjects without problems all around the world. Why should that be different for medical science?

    For the last half year I have been a close witness of a long-running healthcare process. Every clinical step that was taken (there were hundreds) could be coded in SNOMED. So there are no cultural clashes, there are no lingual dominances; everything is facts.

    Maybe a good example would be needed to be convincing. Could you please add one? Maybe I will feel sorry because the example is so stupidly simple that I did not come up with it myself. Then I will blame the heat.

