Evaluating e-health standards II – governance and commercial aspects


Following on from my post yesterday, Grahame Grieve commented that I had not dealt with issues of stability and commercial acceptability. I had not originally intended to do that, but on reflection, he is right – a standard that is going to survive and be worthy of wide-scale investment can’t be separated from its governance and commercial / legal situation. To address that, I updated the main article, here, and I have also revised the short list below. The shortest form of the statement is given by the headings below.

Necessary Characteristics for e-Health Standard Longevity and Investibility (v2)

  • Platform-friendly
    • Platform framework: Does the technology define overall elements of a platform into which recognisable specifics could be plugged, e.g. information models, content definitions, workflow definitions, APIs, code generation tools, etc? OR
    • Platform component: Does the technology define something that can be properly integrated into an existing platform definition?
  • Semantic Scalability
    • Domain Diversity: Does the technology provide a practical method of dealing with potentially massive clinical content diversity?
    • Local Variability: Does the technology provide a practical method of dealing with potentially massive localised variability?
    • Change over Time: Does the technology provide a practical method of dealing with ongoing change in information requirements due to: new science, -omics and drugs; new clinical protocols and methods; legislative changes; and changing patient / consumer needs?
  • Implementability
    • Does the technology provide an automatable way for clinically complex models to be consumed by ‘normal developers’ to build ‘normal software’, including for the purpose of integrating with existing systems, data sources and sinks?
  • Utility
    • Data accessibility: Is the standard designed such that all data elements are easily computationally accessible at the finest granularity?
    • Query methodology: Does the technology provide a way to query fine-grained information based on models of content, not physical representation (physical DB schemas, specific XML doc schemas etc)?
  • Responsive Governance
    • Domain-led requirements: Are requirements statements and their prioritisation led primarily by domain experts?
    • Industry-led roadmap: Can the roadmap of future releases, i.e. the allocation of changes to each release and its timing, be influenced by industry implementers?
    • Release stability: Are releases over time coherent with respect to each other in ways that enable economic upgrading of implementations (industry side) and smooth deployments of new versions (user / provider side)?
    • Responsive feedback mechanism: Is there a visible and easy-to-use mechanism for reporting issues and problems with all levels of artefact, i.e. requirements, the current release, and reference implementation(s)?
    • Accountability: Is the governing organisation transparently accountable to key stakeholders for the outputs of the organisation?
  • Commercial Acceptability
    • Free core IP: Are the standard and its core computable artefacts free to use?
    • IP openness futureproof: Are there mechanisms to prevent the IP of the standard and related artefacts from being unilaterally privatised or otherwise made commercially unacceptable over time, including to small companies and user organisations?
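The ‘data accessibility’ and ‘query methodology’ criteria above can be illustrated with a small sketch. Assuming a hypothetical path-based accessor (the path syntax here is loosely inspired by openEHR archetype paths, but is invented for illustration, not taken from any specific standard), fine-grained access that depends on a model of content rather than a physical DB schema might look like:

```python
# Illustrative sketch only: accessing fine-grained data via model-defined
# content paths, independent of the physical storage schema.

def get_at_path(node, path):
    """Resolve a '/'-separated content path against nested dict data."""
    for segment in path.strip("/").split("/"):
        node = node[segment]
    return node

# A blood-pressure observation shaped by a content model,
# not by any particular database or XML document schema.
bp_observation = {
    "data": {
        "systolic": {"magnitude": 120, "units": "mm[Hg]"},
        "diastolic": {"magnitude": 80, "units": "mm[Hg]"},
    }
}

systolic = get_at_path(bp_observation, "/data/systolic/magnitude")
print(systolic)  # 120
```

In a real platform, such accessors and query translators would be generated from the content models themselves – which is what the ‘Implementability’ criterion asks for: clinically complex models consumable by normal developers building normal software.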

Beyond the hype: evaluating e-health standards


A new e-health standard comes along every couple of years. In Gartner hype cycle terms, it starts out on the rise toward the ‘peak of inflated expectations’, then falls into the ‘trough of disillusionment’, before either dying or rising again over the ‘slope of enlightenment’ to a ‘plateau of productivity’. Most standards and e-health technologies (standards + their tools and artefacts) die before getting to this plateau. But why? What’s wrong with them? How can we pick a winner?

(Figure: the Gartner Hype Cycle, from Wikipedia)

The latest hyped e-health technology is of course HL7 FHIR – Fast Healthcare Interoperability Resources.


ONC Hearing on the JASON Report – openEHR perspective


Recently I was asked to provide testimony to the ONC hearings on the JASON report, from an openEHR point of view. I did so on 31 July 2014. The JASON report is entitled “A Robust Health Data Infrastructure”. It surveys the problems of health data interoperability, and proposes the adoption of a unifying ‘software architecture’ as the solution. It also seems to imply a federated health record database. Its primary assumption appears to be that APIs are the key element of the solution, and that their standardisation will fix the problem.

RDF for universal health data exchange? Correcting some basic misconceptions…


Something called the “Yosemite manifesto on RDF as a Universal Healthcare Exchange Language” was published in 2013 as the Group position statement of the Workshop on RDF as a Universal Healthcare Exchange Language held at the 2013 Semantic Technology and Business Conference, San Francisco. Can such grand claims be true?


Why most health IT procurement fails and how to fix it


A strange thing happens in health IT solution procurement, and by extension government initiatives that seek to influence it. See if you can disagree with the following characterisation of health provider organisations as solution purchasers.

Think You’re Getting What You Want?

CIOs and CMOs have known for years if not decades that:

  • the data used inside their institution is their most important asset – either as a productive resource, or at least as an object of risk management. Most today would understand it as both;
  • the data used inside their institution is not all produced inside their institution – lab data often comes from external lab companies; they obtain, or would like to obtain, GP data such as medications lists, problem lists and so on;
  • their main vendor solution never supports the data richness actually required by clinicians – it is well known, for example, that most hospitals contain dozens if not hundreds of hidden specialist databases, often referred to as the ‘Access Database problem’;
  • if they want to switch to a new vendor, the changeover costs related to the data alone are massive and the risks so great that this consideration alone paralyses them for years with the current ineffective solution;
  • no one vendor can produce all the functionality they require – even the biggest vendor has a ‘roadmap’ containing numerous features the provider wants today; every large procurement involves significant and horribly expensive ‘customisation’;
  • they cannot possibly afford to buy all the functionality they require in one go – if they wrote down the full wish list and then published a tender for it with an open budget, the resulting price tag would undoubtedly be beyond their means;
  • procurement of multiple ‘best-of-breed’ solutions for e.g. inpatient, ER, ambulatory etc, come with huge ongoing cost for data and workflow integration;
  • they cannot possibly afford logistically to deploy all the functionality they want in one go – the human costs and challenges of change management not to mention solution integration make this impossible;
  • asking for even the smallest change to the data schema or core functionality costs inordinate amounts of money and usually a long wait as well;
  • it would be nice if their IT department could have access to ‘their’ data, but of course they can’t, not without the vendor’s say so and price tag.


What is an ‘open platform’?


The word ‘platform’ is starting to reach the same status as the word ‘internet’ – part of the bedrock, but many have no idea what it really is. In e-health particularly, ‘platform’ is often mixed up with ‘open source’, ‘APIs’ and ‘standards’ in ways that don’t always make sense. Regardless, public policy in the NHS, US ONC and in other countries is being formulated without necessarily a clear common understanding. I’m going to try to address some of the ambiguities here.

The key thing to understand about a platform is that it represents progress away from being locked in to a monolith of fixed commitments, toward an open ecosystem. This is true both technologically and economically. Any platform operates in two environments – development and deployment – and these need to be understood distinctly in order to use the platform concept properly.
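As a rough sketch of the framework-versus-component distinction (all names here are invented for illustration, not drawn from any particular platform specification), the framework defines the contract, a component plugs into it, and the deployment environment wires them together:

```python
# Illustrative sketch only: a minimal 'platform framework' defining a
# contract, plus a 'platform component' that plugs into it.
from abc import ABC, abstractmethod


class ContentModule(ABC):
    """Framework side: the contract any content component must satisfy."""

    @abstractmethod
    def model_id(self) -> str: ...

    @abstractmethod
    def validate(self, data: dict) -> bool: ...


class BloodPressureModule(ContentModule):
    """Component side: one specific clinical content definition."""

    def model_id(self) -> str:
        return "observation.blood_pressure.v1"

    def validate(self, data: dict) -> bool:
        return {"systolic", "diastolic"} <= data.keys()


# Deployment side: components are registered against the framework contract,
# so the ecosystem can grow without changing the framework itself.
registry = {}

def register(module: ContentModule):
    registry[module.model_id()] = module

register(BloodPressureModule())
ok = registry["observation.blood_pressure.v1"].validate(
    {"systolic": 120, "diastolic": 80}
)
print(ok)  # True
```

The point of the sketch: the framework (development environment) fixes only the interfaces, while components and their registration (deployment environment) can vary independently – which is exactly the escape from the fixed-commitment monolith.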


Why clinical models are essential to big data


I attended HIMSS 2014 in the mammoth convention centre in Orlando 10 days ago, and went to a session on ‘Clinical Decision Support – is progress being made?’. Despite this being the dead Thursday of HIMSS, around 50 people showed up.

The presentation was given by Cory Tate (senior research director, KLAS) and Adam Cherrington (research director, KLAS). KLAS is the organisation that researches the shape of the health IT industry and publishes the results. So when they say something, it usually means it is actually statistically true across the US at least, rather than just the supposition of one person.

What they said, in a nutshell, was: progress is being made; there are order set products (Elsevier, ProVation etc) and some surveillance products, e.g. infection control rule sets, and so on, and these have some nice features. A discussion developed with the audience in which it became clear that both the presenters and others present identified the main blocker as the inability to connect the order sets and other CDS or analytics modules to the EMR products in use.
