e-Health standards – beyond the message mentality


[a monk’s retreat near Thalori village]

I just spent a few days in Crete at an experts’ workshop of the European e-Standards project, which aims to bridge well-known gaps in e-health standards and SDOs. I’ll comment on that effort in another post; for now I will just say thanks to Catherine Chronaki for the invitation, the wonderful choice of venue and an excellent workshop.

As is usual in these situations, being present in a beautiful place (Thalori village, southern Crete) with many interesting people (some old friends, others new acquaintances), and especially that vital ingredient, a world-class traditional band of musicians who played the paint off the walls of the taverna until 3:30 am on Saturday morning, led to some new thoughts on standards (as well as a vastly improved appreciation of Cretan music).

In the e-health domain, the orthodox view in many standards organisations is that ‘we can only standardise what is on the wire, because we have no control over what is in systems’. I would call this the ‘message mentality’, even though the physical communications have long since graduated from literal messages.

This certainly applied to some types of systems and in many circumstances in the past, but I would argue it is no longer the main way to understand the interoperability problem. Today (and in some cases, for many years now), many systems and environments know exactly what their semantic definitions are, and only need a way to make the content available. But even for systems whose users treat them as black boxes, defining the semantics of the data and behaviour to be exposed is still a job that needs to be done.

The ‘message’ approach has historically standardised both protocol and content in one go. This was useful for e.g. lab, ADT and other well-defined kinds of messages that don’t change much, and of course millions of them are still flowing today. But as a modern approach to standardisation, I would argue that this bundling is wrong, for two reasons.

Firstly, we have to assume that any physical technology or infrastructure is essentially transient with respect to the timelines of health data; what we want to preserve is the semantics. There is no permanency in modern physical data access technologies, and in fact the rate of change is increasing.

Secondly, the notion of ‘system boundary’ is no longer well defined. Today’s paradigm is separated services. What we call ‘System A’ is really whatever services some solution makes visible to some client. It may be a large set of data services (e.g. EHR, demographics), knowledge services (terminology, drug DB, etc), as well as business application services (e.g. patient pathway, scheduling, CDS etc), but more importantly, the set of services will change over time, and the individual service definitions will also change over time. Thus, what we call the ‘interface’ of System A is not fixed in scope or fine-grained definition; we would do better to visualise a cloud of services. A further complicating factor is that the set of services visible may vary depending on who the client is, since different agreements can be made as to what services are exposed to whom. There is in fact no clean boundary, there are just variable visible subsets of a total set of services, changing over time.
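The idea of a ‘system’ as a variable, per-client subset of services, rather than a fixed interface, can be sketched in a few lines. This is a minimal illustration only; the service and client names are invented for the example, not drawn from any real standard.

```python
# A 'system interface' as the intersection of what exists and what a
# per-client agreement exposes - not a fixed boundary.

AGREEMENTS = {
    # client id -> services this client has agreed access to (illustrative)
    "gp_portal": {"ehr", "demographics", "terminology"},
    "pharmacy":  {"demographics", "drug_db"},
}

ALL_SERVICES = {"ehr", "demographics", "terminology", "drug_db", "scheduling"}

def visible_services(client_id: str) -> set[str]:
    """The 'interface' as seen by one client: a subset that changes as
    either the total service set or the agreement changes."""
    return ALL_SERVICES & AGREEMENTS.get(client_id, set())
```

Note that two clients asking ‘what is System A?’ get different answers, which is exactly why ‘system interoperability’ is hard to pin down.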

Accordingly, the meaning of the term ‘system interoperability’ is not well defined, because we can no longer talk precisely of the ‘system’. Instead, we should use a notion I would call pervasive semantics: something that flows freely across numerous components acting as clients and suppliers.

For any given component (service, tool component etc), all we can really talk about is: what does it provide access to; what physical means of access does it offer, and under what rules and rights?

Entities like ‘messages’, physical ‘documents’ and web resources are really just artefacts of specific information-sharing and service-exposing technologies – they are not semantic entities, and should not be treated as such.

We therefore have three aspects that should be independently standardised:

  • semantics: the content and business-level interactions / behaviour of a component that are to be made accessible to other components. This may be highly variable;
  • access rights: who can access what and when;
  • access technology: the technology of access, including concrete service technology (SOAP, REST, RPC etc), physical data representations (e.g. XML, binary, other), and communication protocols.
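The independence of these three aspects can be made concrete with a small sketch: the same semantic content rendered through two different access technologies, with access rights checked as a separate concern. The field names and client names are invented for illustration.

```python
import json
import xml.etree.ElementTree as ET

# Semantics: a blood-pressure observation, defined independently of any
# wire format (field names illustrative only).
observation = {"systolic_mmHg": 120, "diastolic_mmHg": 80}

# Access technology 1: JSON representation.
def to_json(obs: dict) -> str:
    return json.dumps(obs)

# Access technology 2: XML - same semantics, different representation.
def to_xml(obs: dict) -> str:
    root = ET.Element("observation")
    for k, v in obs.items():
        ET.SubElement(root, k).text = str(v)
    return ET.tostring(root, encoding="unicode")

# Access rights: a separate concern, checked before any representation
# is produced at all.
def expose(obs: dict, client: str, allowed: set, fmt: str) -> str:
    if client not in allowed:
        raise PermissionError(client)
    return to_json(obs) if fmt == "json" else to_xml(obs)
```

The point is that swapping `to_xml` for `to_json` (or for some future technology) changes nothing about the observation’s meaning, and nothing about who may see it.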

This separation is really nothing other than the normal protocol/payload split that network engineers have understood for 40 years, and it is close to what the OMG calls platform-independent model (PIM) and platform-specific model (PSM) in its model-driven architecture (MDA) approach. It also corresponds fairly clearly to the ISO RM/ODP view of the world.

Essentially I am proposing that the payload / protocol separation be reflected directly in standardisation, as it is in most other domains. Indeed, one could ask: what, if anything, is special about the access mechanisms for e-health anyway? I will leave this question alone here, on the basis that there probably are enough details specific to health (e.g. data types, privacy, use of computable knowledge) to justify health domain standards for access technology. But I would still recommend that we look at current developments elsewhere for inspiration on solving these problems more generically if possible.

Next, I would say that the semantics need to be standardised in a completely different way from the physical access mechanisms. The latter usually require ‘hard standardisation’, and need to exist in as few competing variants as possible, since it is a question of physically moving data around through services and databases – the slightest mismatch in any detail between two components can destroy communication. Accordingly, the style of standards development needs to be engineering oriented and make heavy use of software-based testing, release management, conformance testing and so on.
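What ‘hard standardisation’ with software-based conformance testing looks like can be sketched very simply: two independent implementations of the same access technology must produce byte-identical wire output, because even a whitespace or key-ordering difference can break communication. The functions here are hypothetical stand-ins for real implementations.

```python
import json

# Two hypothetical implementations of the same wire format. A conformance
# suite checks them byte-for-byte, not just 'roughly equivalent'.

def impl_a(obs: dict) -> bytes:
    return json.dumps(obs, sort_keys=True, separators=(",", ":")).encode()

def impl_b(obs: dict) -> bytes:
    # Independently written; conformance demands identical bytes.
    return json.dumps(dict(sorted(obs.items())), separators=(",", ":")).encode()

def conformant(obs: dict) -> bool:
    return impl_a(obs) == impl_b(obs)
```

A real conformance suite would run thousands of such cases across releases; the essential discipline, exact agreement at the byte level, is what distinguishes this style of standardisation from the semantic kind.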

The standardisation of semantics on the other hand needs to be done differently for various reasons:

  • the principal authors of the original semantics (clinical content, workflows, guidelines etc) need to be domain experts; this consideration leads naturally to a ‘domain crowd-sourcing’ rather than a centralised approach;
  • in order to formally express the semantics, dedicated formalisms and tools are needed that enable:
    • the domain expert authors to create definitions in ways they cognitively relate to – in other words, not be forced to directly use unsuitable IT formalisms like UML;
    • the created definitions to be represented and persisted in ways that enable them to be computed with in the concrete infrastructure;
  • the sheer number of semantic entities, the level of local and specialist variability, and the rate of change all dictate a more lightweight standards process than the ‘hard standardisation’ needed for concrete access mechanisms.
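The split between a centralised formalism and crowd-sourced content can be illustrated with a toy: a domain-expert-authored definition expressed as data (not code), compiled by standard tooling into a machine validator. This is a deliberately naive sketch; real formalisms for this job, such as openEHR’s ADL, are far richer.

```python
# The 'formalism' is the agreed shape of a definition (field -> type and
# range); the 'content' is what domain experts author in it. Field names
# and ranges here are illustrative only.

BLOOD_PRESSURE_DEF = {
    "systolic_mmHg":  {"type": int, "range": (0, 300)},
    "diastolic_mmHg": {"type": int, "range": (0, 200)},
}

def make_validator(definition: dict):
    """Standard tooling: compile any definition in the formalism into a
    validator usable in concrete infrastructure."""
    def validate(data: dict) -> bool:
        for fieldname, rule in definition.items():
            v = data.get(fieldname)
            lo, hi = rule["range"]
            if not isinstance(v, rule["type"]) or not (lo <= v <= hi):
                return False
        return True
    return validate
```

The domain expert edits only `BLOOD_PRESSURE_DEF`-style content; `make_validator` belongs to the standardised infrastructure and never needs to change when new clinical content is authored.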

But how can decentralised, crowd-sourced standardisation work, and still generate anything called a ‘standard’? One clue is in the second point above: there is something that is centralised, namely the representational formalisms. Ideally the tooling is as well. The formalisms will be part of the work of the infrastructure standardisation effort. However, this on its own doesn’t guarantee the intended outcomes. For example, how do we prevent a dozen separate groups uselessly re-inventing the blood pressure archetype in a dozen different variations?

The answer is that it depends on how we understand ‘crowd-sourcing’. It needs to be a virtual design community concept where there is not only artefact creation, but organised conversations and reviews about the artefacts. There has to be something that functions as an artefact library, an ability to find artefacts for a need (discovery), and lastly governance to ensure quality. It is the creation of this virtual community, with its tooling, communication mechanisms, libraries, and governance, that should form the basis of semantics standardisation.
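A minimal sketch of such an artefact library, with discovery, a review-based governance gate, and duplicate rejection (the mechanism that stops the dozen competing blood pressure archetypes), might look as follows. All class and method names are invented for the example; no real registry’s API is implied.

```python
from dataclasses import dataclass

@dataclass
class Artefact:
    name: str
    keywords: set
    state: str = "draft"   # lifecycle: draft -> published
    reviews: int = 0

class ArtefactLibrary:
    def __init__(self):
        self._items = {}

    def submit(self, artefact: Artefact):
        # Discovery-before-creation: refuse duplicates, steering authors
        # to review and improve the existing artefact instead.
        if artefact.name in self._items:
            raise ValueError(f"{artefact.name} exists - review it instead")
        self._items[artefact.name] = artefact

    def discover(self, keyword: str) -> list:
        return [a.name for a in self._items.values() if keyword in a.keywords]

    def review(self, name: str):
        a = self._items[name]
        a.reviews += 1
        if a.reviews >= 2:   # illustrative governance rule: two reviews to publish
            a.state = "published"
```

The governance rule here (two reviews to publish) is of course a placeholder; the point is that quality gates, discovery and lifecycle state are properties of the library, not of any individual author.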

In summary, I would say, firstly, that we are no longer in the business of building messages between black boxes, but of enabling semantic content to flow between components in an open platform environment.

Secondly, the physical technology of access and representation is always changing. This means that clinical semantics cannot be directly standardised inside standards for physical communication, interaction or representation.

Thirdly, for systems and architectures that know their own semantics and want a way to expose them to clients, interoperability standards that enforce their own definitions of semantics will be problematic. These same standards may be just what is needed in other circumstances where no ‘system’ semantic definitions are available of course. But we need the flexibility to use protocols and infrastructure with a free choice of semantic content.

There are those who dream that a single message or other technical standard can make the whole world interoperable. But global interoperability, if it is even possible, will never be achieved like this. It can only possibly be achieved by communities working on the semantics they know about, and slowly, over many years and decades, understanding each other’s artefacts and working together to integrate them. And they can’t do this if they are caught up in five-year technology replacement cycles.


About wolandscat

I work on semantic architectures for interoperability of information systems. Much of my time is spent studying data, ideas, and knowledge, and using methods from philosophy, particularly ontology and epistemology.
This entry was posted in Computing, FHIR, Health Informatics, openehr, standards. Bookmark the permalink.

6 Responses to e-Health standards – beyond the message mentality

  1. You poor bastard.

    I will never use the phrase “like herding cats” again. From now on, I shall say “like attempting to standardise medical record-keeping and communications”.

  2. Tony Shannon says:

    Thanks Tom,
    Important post..

    Just to reinforce your first and second summary points…
    “we are no longer in the business of building messages between black boxes, but of enabling semantic content to flow between components in an open platform environment.”

    “Secondly, the physical technology of access and representation is always changing. This means that clinical semantics cannot be directly standardised inside standards for physical communication, interaction or representation.”

    What we are after is an open platform approach that moves beyond thinking about standards in terms of wiring and messaging, and instead considers the critical fit between the clinical process (including its key generic patterns) and the related information that needs to be supported.. the clinical semantics, as you put it.

    It was that exploration of clinical process/information architecture that led me to identify openEHR as such a good fit for the clinical domain..
    More related postings here.
    http://frectal.com/book/healthcare-change-the-way-forward/healthcare-chasing-the-right-fit-between-process-and-it/

    Thanks
    Tony

  3. bertverhees says:

    Access features should use a standardized technology which is commonly used between systems.
    But I would not favor forcing the use of a specific technology.
    I think that is one of the mistakes of FHIR. Choosing a technology for access means stopping innovation on access, and if this is standardized through the ISO bureaucracy, we are stuck with it for a long time, even if newer, better versions of the protocol appear, and even if it becomes unsafe in some situations.
    And also, accessibility is up to the system owner. If the owner of a system does not want to communicate, then that is up to him and his business models.

    Access rights should also not be standardized in the way described (“access rights: who can access what and when”): standardizing access rights blocks organizational innovation, and hinders the acceptance of a standard in a country where it conflicts with the law or local conceptions of decency. Neutral standards could, however, be enforced, like RBAC, ABAC, etc.

    There is a difference between formulating a standard and enforcing a standard. But if access rights or access technologies are connected to a system in an ISO-standardized definition, it becomes easy for governmental organizations to enforce those technologies. I don’t think that would make me happy.

    The part that should be standardized, because it really helps build better and more open systems, is semantic interoperability and standardized, semantically defined data access. Just as SNOMED does for terms, there should be an organization that defines the data points you described in another blog some time ago. How a system solves this internally is up to the system architecture, but systems must have open and standardized definitions for accessing these data points, similar to what exists in semantically structured messages.

    In that other blog, you mention the idea of semantic scalability, but in a negative way. You say that it will be impossible to define those data points, because there are too many, and that therefore medical systems will never reach that point of internal semantic architecture. I don’t know if that is true.
    You mention, if I remember well, a number of 36,000 data points; make it 50,000.
    If someone had described SNOMED 20 years ago, we would have laughed at that person, and now we have it. So I don’t think it is impossible.

    The good news is that we don’t need to have defined them all in order to use them. It can be a phased effort. But we need governments to pull the wagon.

    • wolandscat says:

      Re: access rights: yes, the idea is that the methods of access, and also things like roles and identification methods, be standard, so that there is at least a possibility of two systems mutually understanding an RBAC scheme.

  4. bertverhees says:

    I understand this can be useful if two health-care institutions have a similar organizational scheme.
    Then you are able to classify information for specific roles. Maybe this is so in the UK, where health-care control is centralized, but there are also more liberal forms of health-care.

    The problem is that when there are functions/roles in a country or hospital which are incompatible with the standardized scheme, the scheme is no longer useful.
    For example, in the Netherlands some GP tasks are delegated to nurses; in some other countries this does not exist. How should such information be classified?

    I think it will be hard to define a role/authorization scheme which fits almost everywhere and still remains useful for its purpose of relaying information to the intended roles (because those roles may not exist to the same extent as in the standard).
    The purpose of such a standard should be to improve health-care information systems, not health-care itself. Otherwise it is like putting the cart before the horse.

  5. Pingback: Weekly Australian Health IT Links – April 11, 2016
