
[Image: from CatSalut website]
A slide on the future of HIT, from the openEHR conference hosted by the Catalan Health System (CatSalut), 06 June 2023, Barcelona.
WHAT
- knowledge-based – computational representation of foundational knowledge: ontologies, terminology
- model-based – computational representation of operational knowledge: information and process definitions
- process-based – patient care pathway as first-order computational entity: derived from local or published computable guidelines
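To make the three layers concrete, here is a minimal Python sketch. All class names, codes and structures are my own illustrative inventions, not openEHR, SNOMED CT or CatSalut artefacts: terminology bindings stand in for foundational knowledge, a small information-model element for operational knowledge, and a pathway whose decision logic is inspectable data rather than code buried in an application.

```python
# Minimal sketch (illustrative names only) of the three layers:
# knowledge-based, model-based, process-based.
from dataclasses import dataclass, field
from typing import Callable

# knowledge-based: terminology bindings (codes shown for illustration only)
TERMINOLOGY = {
    "systolic_bp": "snomed::271649006",
    "hypertension": "snomed::38341003",
}

# model-based: an information-model element, authored by domain experts,
# not baked into a DB schema
@dataclass
class Observation:
    concept: str          # key into TERMINOLOGY
    value: float
    units: str

# process-based: the pathway itself is data the system can inspect and execute
@dataclass
class PathwayStep:
    name: str
    guard: Callable[[list[Observation]], bool]   # explicit entry condition, not hidden app logic

@dataclass
class Pathway:
    name: str
    steps: list[PathwayStep] = field(default_factory=list)

    def next_steps(self, record: list[Observation]) -> list[str]:
        """Return the steps whose entry conditions the patient record satisfies."""
        return [s.name for s in self.steps if s.guard(record)]

# Example: a two-step pathway derived from a hypothetical computable guideline.
htn = Pathway("hypertension-review", steps=[
    PathwayStep("confirm-diagnosis",
                lambda rec: any(o.concept == "systolic_bp" and o.value >= 140 for o in rec)),
    PathwayStep("routine-recall",
                lambda rec: all(o.value < 140 for o in rec if o.concept == "systolic_bp")),
])

record = [Observation("systolic_bp", 152.0, "mm[Hg]")]
print(htn.next_steps(record))   # -> ['confirm-diagnosis']
```

The point of the sketch is that the pathway is a first-order entity: its steps and guards can be versioned, inspected and authored by clinical modellers, instead of living as control flow inside application code.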
HOW
- Take all hidden semantics out of the software and DB schemas and represent them as first-order entities, created by domain experts, not IT people
- Realised in a services-based open platform, based on terminology, models, model-driven software, and care pathway execution
- Used to create a system for representing and tracking care pathways, and at each task and decision point, we have a transparent user/computer interaction – not a pile of hidden ‘business logic’
- Voice interaction – voice + models allows for constrained vocabularies (efficient for voice recognition) and goal-oriented (user-driven) rather than form-based (developer-driven) navigation; documentation is created ‘on the way’
- Machine learning – use of AI created via supervised training of blank LLMs to perform patient-specific reasoning on the data
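The voice item in this list is the easiest to make concrete: because the model constrains what may legally be said at each pathway step, recognition only has to discriminate a handful of phrases, and each accepted phrase both navigates the pathway and writes the corresponding record entry. A hypothetical sketch (all names invented):

```python
# Constrained vocabulary per pathway step: each legal utterance maps to an
# explicit action, so the user/computer interaction is transparent.
VOCAB_BY_STEP = {
    "confirm-diagnosis": {
        "confirmed": ("goto", "plan-treatment"),
        "not confirmed": ("goto", "routine-recall"),
        "repeat reading": ("record", "systolic_bp"),
    },
}

def handle_utterance(step: str, utterance: str, notes: list[str]) -> str:
    """Interpret an utterance against the step's constrained vocabulary.

    Returns the next step; appends documentation 'on the way'."""
    vocab = VOCAB_BY_STEP[step]
    if utterance not in vocab:
        return step  # unrecognised: stay put and re-prompt, nothing silently guessed
    action, target = vocab[utterance]
    notes.append(f"{step}: '{utterance}'")        # documenting while doing
    return target if action == "goto" else step

notes: list[str] = []
print(handle_utterance("confirm-diagnosis", "confirmed", notes))  # -> plan-treatment
print(notes)  # -> ["confirm-diagnosis: 'confirmed'"]
```

Note how the same accepted phrase drives both navigation and documentation, which is also what makes the ‘documenting while doing’ result on the next slide plausible.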
RESULT
Signs of success:
- Engineering: we get rid of applications – evolve to ‘task-oriented IT’
- Administrative: we get rid of ‘referrals’ – evolve to ‘straight-through care’
- Clinical: single-source-of-truth medications list, no more ‘med rec’ – evolve from institutional copies to a true ‘digital twin’
- Patient records: we get rid of separate clinical documenting – ‘documenting while doing’
- => a patient-centric care experience.
Yeah, I’m not sure that last step of the “How” page is such a good idea (but I’m certain that it’ll be done). I doubt that there is enough data, or enough hours of sufficiently high quality supervision to create an “LLM” that won’t just make up stuff to suit its fancy. And I’d prefer not to be doctored on the basis of LLVM “hallucinations”, if that’s possible.
Stephen Wolfram has a nice article on how the GPTs work, and why the transformer approach appears to be more successful than simple n-grams. https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
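To see what the n-gram baseline Wolfram contrasts with transformers amounts to, a toy bigram model is enough: it conditions on only the single previous word, so longer-range context is simply unavailable to it. (An illustrative sketch of mine, not from the article.)

```python
# Toy bigram ("2-gram") language model: next-word probabilities conditioned only
# on the previous word. The window cannot grow without the table exploding
# combinatorially, whereas attention conditions on the whole preceding context.
from collections import Counter, defaultdict
import random

corpus = "the patient has hypertension the patient has diabetes".split()

counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    options = counts[prev]
    return random.choices(list(options), weights=options.values())[0]

print(next_word("has"))  # 'hypertension' or 'diabetes', 50/50: no longer-range context survives
```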
They’ve already trained them on everything ever written. To a first-order approximation there can only be tiny incremental improvement from here (short of the development of new techniques and bootstrapping, which I admit seems likely).
The other parts of the presentation seem excellent though.
LLM, not LLVM: the latter is part of a slightly different field of endeavor.
There’s enough data to train a blank LLM for many specific jobs in medicine. My concern is whether language models are the right paradigm to avoid hallucinations and/or to even achieve comprehensiveness. But as you say, it will be done, so we are in the position of having to learn more about LLM limitations and how we can manage them.