Null flavors – Objection #3: ontological problems
The following table shows the current HL7v3 null flavor values. A full version of the table appears in Grahame Grieve’s blog post.
| Code | Meaning |
|------|---------|
| NI   | no information |
| INV  | invalid |
| OTH  | other |
| NINF | negative infinity |
| PINF | positive infinity |
| UNC  | un-encoded |
| DER  | derived |
| UNK  | unknown |
| ASKU | asked but unknown |
| NAV  | temporarily unavailable |
| NASK | not asked |
| QS   | sufficient quantity |
| TRC  | trace |
| MSK  | masked |
| NA   | not applicable |
On inspection of the meanings of these values, problems are apparent. If HL7 ‘nullFlavor’ is about ‘missing data’, then the values NI, UNK, ASKU, NAV, NASK, MSK and NA make sense, although a proper analysis requires at least putting them in a taxonomy.
However, the values PINF and NINF are not kinds of missing data, or data quality markers; they are pseudo-number values. Presumably they occur somewhere in HL7 ecosystems as the value of some numerical variable or data field. If so, that is fine; all that is needed is a way to represent them. There are various solutions to this, but a typical one would be to define the field in question as a String field, in which strings like “0”, “112345”, “NINF” and “PINF” could occur. String processing would then be needed to separate NINF/PINF from the other values and convert them to something computable.
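The string-processing approach just described can be sketched as follows (the function name is hypothetical; the point is simply that the pseudo-number markers are mapped to computable infinities and everything else is parsed numerically):

```python
import math

def parse_numeric_field(raw: str) -> float:
    """Convert a string-typed numeric field, in which pseudo-number
    markers like 'PINF'/'NINF' may occur, into a computable float."""
    pseudo_numbers = {"PINF": math.inf, "NINF": -math.inf}
    if raw in pseudo_numbers:
        return pseudo_numbers[raw]
    return float(raw)  # ordinary numeric strings like "0" or "112345"

parse_numeric_field("112345")  # 112345.0
parse_numeric_field("NINF")    # -inf
```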
The type UNC presumably applies only to things that could possibly be ‘encoded’ in the first place, and is most likely a misplaced piece of meta-data from a type like ‘ED’ (EncapsulatedData) or similar.
The DER marker is apparently intended to mark data fields whose values are generated by applying some formula to other data fields. This is a legitimate need, and occurs fairly often in health data. However, the very concept of ‘data derivation’ or ‘data synthesis’ is mutually exclusive with ‘data acquisition’: a datum is either captured, or computed from previously captured (and possibly computed) data items. If we assume that ‘null flavour’ a la HL7 really means something like ‘data provenance’, DER might make sense in the taxonomy.
The QS type is related to DER. The definition given by HL7 is “The specific quantity is not known, but is known to be non-zero and is not specified because it makes up the bulk of the material: ‘Add 10mg of ingredient X, 50mg of ingredient Y, and sufficient quantity of water to 100mL.’ The null flavor would be used to express the quantity of water”. In other words, if there is a data field whose value is derivable by subtracting other data values from a total, rather than by direct measurement (i.e. acquisition), it should be marked ‘QS’. This is clearly a subtype of DER. The obvious question is: why define QS but no other subtypes of DER?
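The ‘derivable by subtraction from a total’ idea amounts to a one-line computation, which is exactly why QS looks like a derivation marker rather than a null flavour. A minimal sketch (hypothetical function name; the mg-to-mL arithmetic below is purely illustrative, not pharmacologically meaningful):

```python
def quantity_sufficient(total: float, components: dict[str, float]) -> float:
    """Derive the 'QS' quantity (e.g. the water in HL7's example) as the
    remainder of the total after subtracting the specified ingredients."""
    remainder = total - sum(components.values())
    if remainder <= 0:
        raise ValueError("specified components already meet or exceed the total")
    return remainder

# HL7's own example: 10mg of X + 50mg of Y, water q.s. to 100mL
# (naively treating each mg of solute as displacing 1mL, for illustration)
quantity_sufficient(100.0, {"X": 10.0, "Y": 50.0})  # 40.0
```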
The TRC type clearly does not belong in a vocabulary of null flavours or data quality markers (or even ‘provenance types’). Its definition is “The content is greater than zero, but too small to be quantified”, which relates to laboratory result data generated by machines that detect traces, e.g. of protein in urine, but do not provide a numerical value. Here we need to be clear about what ‘data acquisition’ means: obtaining whatever value is available from the designated instrument, monitor or person. If the machine says ‘trace’, then that is the value, loud and clear, and that is what needs to be recorded. There is no question of ‘imperfect data quality’, missing data or any other problem; this is business as usual.

Note that, technically speaking, ‘trace’ is a fuzzy data concept: it is the name of a quantity band from 0 up to the first value the machine in question will register as a number. Typically such machines output all their values in bands; it just happens that the bands above ‘trace’ are designated with two numeric limits. A ‘trace’ result could easily be the value of a field for which data acquisition problems are occurring, in which case it should also be marked e.g. NI, NAV etc.
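The band view of ‘trace’ can be made concrete as follows (the band names and thresholds are hypothetical, loosely modelled on urine-protein dipstick scales; the point is that ‘trace’ is simply the lowest named band, not a null value):

```python
from dataclasses import dataclass

@dataclass
class QuantityBand:
    """One named band of an instrument's output scale; 'trace' is just
    the band from 0 up to the first numerically reported value."""
    symbol: str
    lower: float  # inclusive lower limit
    upper: float  # exclusive upper limit

# Hypothetical bands for a machine whose first numeric reading is 0.15 g/L:
BANDS = [
    QuantityBand("trace", 0.0, 0.15),
    QuantityBand("1+", 0.15, 0.30),
    QuantityBand("2+", 0.30, 1.00),
]

def band_for(reading: float) -> str:
    """Map an underlying quantity onto its band symbol."""
    for band in BANDS:
        if band.lower <= reading < band.upper:
            return band.symbol
    return "out of scale"

band_for(0.05)  # 'trace'
band_for(0.20)  # '1+'
```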
One could actually make similar arguments for ASKU (asked but unknown) and NASK (not asked, maybe the patient looked too drunk?), since the situation of asking a patient a question but not receiving an answer, or a usable answer, is business as usual in healthcare.
The two values INV and OTH are kinds of ‘out of bounds’ markers, indicating that the value received (with or without problems) is outside some designated constraint, e.g. a normal range or some other intended limits; INV is for numeric data, OTH for coded data. Clearly, either of these markers could occur alongside a data quality marker on a given field. In fact, it would not be a stretch to imagine INV, NINF and NAV all occurring at once; similarly INV and QS.
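The observation that these markers can legitimately co-occur suggests a set of orthogonal flags rather than a single mutually exclusive code. A hypothetical sketch (the names mirror the HL7 codes, but the combinable-flags design is not HL7's):

```python
from dataclasses import dataclass
from enum import Flag, auto

class Marker(Flag):
    """Orthogonal data markers which, unlike a single nullFlavor code,
    may be combined on one field."""
    NONE = 0
    INV = auto()   # outside a designated numeric constraint
    NINF = auto()  # negative-infinity pseudo-value
    NAV = auto()   # temporarily unavailable
    QS = auto()    # quantity derived as a remainder

@dataclass
class Datum:
    value: object
    markers: Marker = Marker.NONE

# The combination the single-code design cannot express:
d = Datum(value=None, markers=Marker.INV | Marker.NINF | Marker.NAV)
Marker.INV in d.markers  # True
```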
In summary, there are at least the following things being conflated here:
1. NaN / infinity numerical concepts, for certain (probably rare) quantitative data items: PINF, NINF
2. a derivation / synthesis indicator: DER, QS
3. an encoding idea, for specific ‘encodable’ data: UNC
4. a data quality idea: missing, unavailable etc.: NI, ASKU, NASK, NAV, MSK
5. a fuzzy data concept specific to laboratory results: TRC
6. a constraint / out-of-range concept: INV, OTH
7. a modelling applicability problem: NA
I don’t want to play down the complexity of the problem here, only to point out that the current solution does not stand up as a usable ontology of types: it contains at least seven different categories of concept, and populates each sub-category space inconsistently (violating the rule of ontologies that if you create one child of a given concept, you should normally create all the possible mutually exclusive children).
Is there a solution?
We clearly need a ‘data quality’ concept in health data. I am inclined to think it should restrict itself to a small number of values, i.e. item 4 above. This is what we did in openEHR. But how are the other needs on the list met? Some ideas:
1. infinity: this doesn’t exist as a possibility in openEHR quantitative data; so far no real requirement for it has been encountered;
2. derivation / synthesis: at the level of data values / ELEMENTs, the need is recognised in archetypes, and possibly in data; at the moment there is no solution in openEHR; a possibility would be a new flag on ELEMENT;
3. ‘encoded/raw’: this might be a need for a specific meta-data item in some specific archetypes to do with the relevant kinds of data;
4. data quality: in openEHR this is done with ELEMENT.null_flavour, which allows 4 values, namely: no information / unknown / masked / not applicable. I am pretty sure this list is not correct either, but at least it is simple;
5. fuzzy data ‘names’: these are normally dealt with by ordinal types, where each ‘value’ has a symbol/name and a value defining its position with respect to other value bands. The openEHR DV_ORDINAL does a reasonable job of this, although fractional ‘ordinal’ values are now needed;
6. out of range: the question of a data value being out of range with respect to some specific constraint is a difficult one; the value might be out of range with respect to all kinds of ranges, e.g. lab test result normal ranges, GUI input field values, archetype ranges and so on. It does not seem sensible to try to encode such ‘violations’ into the data, since you are then forced to write in a lot more detail to make sense of the violation; instead, these out-of-range errors should be dealt with in the place where they are generated;
7. applicability: the typical example is something like ‘date of last period’ for a male patient. This kind of thing occurs when a generic GUI form is used for too many diverse types of patient, situation etc. In openEHR-land it means the application of an archetype inappropriate to the patient or situation. It will undoubtedly always happen somewhere, and openEHR includes it in the null flavour list for this reason.
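The small null-flavour vocabulary and the ordinal pattern described above can be sketched together (the names approximate openEHR’s ELEMENT.null_flavour values and DV_ORDINAL, heavily simplified; the fractional ordinal position is hypothetical):

```python
from dataclasses import dataclass
from enum import Enum

class NullFlavour(Enum):
    """The four-value vocabulary used by openEHR's ELEMENT.null_flavour."""
    NO_INFORMATION = "no information"
    UNKNOWN = "unknown"
    MASKED = "masked"
    NOT_APPLICABLE = "not applicable"

@dataclass
class Ordinal:
    """A fuzzy-band value a la DV_ORDINAL: a symbol plus a position
    relative to other bands. Using float for the position allows the
    fractional ordinals mentioned above (the real DV_ORDINAL is
    integer-valued)."""
    symbol: str
    value: float

trace = Ordinal("trace", 0.5)  # hypothetical fractional position
one_plus = Ordinal("1+", 1.0)
trace.value < one_plus.value   # True: 'trace' sorts below '1+'
```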
Of the above, I think only point 2 (derivation / synthesis) might need an addition to openEHR. I think a very simple null-flavour vocabulary is preferable to the HL7 one, because the latter mixes very general concepts with very specific ones, and also does not achieve mutual exclusion.
However, I am sure the general messiness of the universe means that no solution will work perfectly 100% of the time. We need to continue to pick up more use cases and analyse them clearly, both structurally and ontologically.