openEHR-technical Digest, Vol 64, Issue 6

Bakke, Silje Ljosland silje.ljosland.bakke at
Tue Jun 6 13:34:36 EDT 2017

I agree and disagree. ☺

An EHR needs to be able to cope with all kinds of data, “questionnaire” or not. However, I’m not so sure that a modelling pattern which works for everything that could be labelled a “questionnaire” is achievable, or even useful.

Modelling patterns are sometimes extremely useful, for instance for facilitating modelling by non-clinicians or newbies, but sometimes they aren’t very practical. One of the problems is that clinical information in itself is messy, because healthcare information doesn’t follow nice semantic rules. Clinical modelling must above all be faithful to the way clinicians need to record and use data, not to a notion of semantically “pure” models.

Finding “sweet spots” by identifying patterns that are sensible, logical, and above all else *work* for recording actual clinical information is often an excruciatingly slow process of trial and error, exemplified by the substance use summary EVALUATION and the physical examination CLUSTER modelling patterns, both of which took years of trial and error long before I got involved in them.

If we can find patterns across some kinds of “questionnaires”, like clinical scores, great! However, since there isn’t a standardised pattern for paper questionnaires, it’s unlikely that one can be made for electronic questionnaires. Outside the RM/AOM, a generic pattern archetype for every questionnaire, with variable levels of nesting, variable data points, etc., isn’t possible, nor would it in my opinion be useful. It would put all of the modelling load on the template modellers, which would arguably be more work than modelling the same structures as made-for-purpose archetypes.

Some rules of thumb have developed over time though:

1. Model the score/assessment/questionnaire in the way that best represents the data

2. Use the most commonly used name for identifying it

3. Model them as OBSERVATION archetypes, unless they’re *clearly* attributes of e.g. diagnoses, in which case they should be CLUSTERs (example: AO classification of fractures)

4. Make sure to get references that support the chosen structure and wording into the archetypes

In my opinion this pragmatic approach is likely to capture the data correctly, while at the same time minimising overall modelling workload.
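As a purely illustrative sketch of rule 3, a minimal ADL 1.4 OBSERVATION archetype for a hypothetical total score might look like the following. The archetype ID, node IDs, and score range are invented for this example, and the language/description/ontology sections are omitted for brevity:

```adl
archetype (adl_version=1.4)
    openEHR-EHR-OBSERVATION.example_score.v1

concept
    [at0000]    -- Example score

definition
    OBSERVATION[at0000] matches {    -- Example score
        data matches {
            HISTORY[at0001] matches {
                events cardinality matches {1..*; unordered} matches {
                    EVENT[at0002] occurrences matches {0..*} matches {    -- Any event
                        data matches {
                            ITEM_TREE[at0003] matches {
                                items cardinality matches {0..*; unordered} matches {
                                    ELEMENT[at0004] occurrences matches {0..1} matches {    -- Total score
                                        value matches {
                                            DV_COUNT matches {
                                                magnitude matches {|0..10|}    -- invented range
                                            }
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
```

A made-for-purpose archetype like this carries the score's structure itself, so template modellers only need to slot it into a template rather than rebuild the structure each time.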


From: openEHR-technical [mailto:openehr-technical-bounces at] On Behalf Of GF
Sent: Tuesday, June 6, 2017 3:58 PM
To: For openEHR clinical discussions <openehr-clinical at>
Cc: Thomas Beale <openehr-technical at>
Subject: Re: openEHR-technical Digest, Vol 64, Issue 6

I agree.
A ‘questionnaire’ can be many things, but not all at the same time.

In any case, any EHR needs to be able to cope with all kinds: from those with one or more qualitative results, such as checklists, to validated scores where individual results are aggregated into one total score.

It must be possible to create one pattern that can deal with all of them.

Gerard   Freriks
+31 620347088
  gfrer at

Kattensingel  20
2801 CA Gouda
the Netherlands

On 6 Jun 2017, at 14:46, Vebjørn Arntzen <varntzen at> wrote:

Hi all

To me a "questionnaire" is a vague notion. There can be a lot of different "questionnaires" in health, from the GP's in Thomas's example to an Apgar score, to a clinical guideline, and even a checklist. Those are all sets of "questions and answers", but the scope and use are totally different. In paper questionnaires we will find a mix of many, maybe all, of those, crammed into what the local practice has found to be useful (= "Frankenforms"). To try to put all of them into a generic questionnaire archetype is of no use.

The GP questionnaire referred to by Thomas is, in the quoted question about "ever had heart trouble", merely a help for the GP, and of little use for computation. But if it is supplemented by more specific questions, based on the answers given by the individual, then the final result can be "occasional arrhythmia with ventricular ectopics", which is relevant information for later use and should be put into a relevant archetype. So is it a "questionnaire" or a guideline for the consultation? Not relevant IMO; it's the content that's relevant.

Patients with haemophilia at Oslo University Hospital are offered an online questionnaire to register whether they've had incidents of bleeding, what caused them, whether they needed medication and, if so, the batch number of the medication. This is followed up by the staff, both for reporting of used medication and for the patient's next follow-up outpatient control or admission. Questionnaire or not? Not relevant – it's what the information is and what it is for that is important. Find relevant archetypes for it – OBSERVATION or ADMIN_ENTRY, I guess.

Even checklists are sets of questions and answers: "Have you remembered to fill out the diagnosis?", "Is there a need to offer the patient help to deal with the cancer diagnosis?". The main thing is to analyze what the resulting answer represents, and the use of it. Decision support? Clinically relevant? Merely a reminder? Put them into a template, using appropriate archetypes.

Regards, Vebjørn

From: openEHR-clinical [mailto:openehr-clinical-bounces at] On Behalf Of Thomas Beale
Sent: 5 June 2017 18:55
To: For openEHR technical discussions; For openEHR clinical discussions
Subject: Re: openEHR-technical Digest, Vol 64, Issue 6

This has to be essentially correct, I think. If you think about it, scores (at least well-designed ones) are things whose 'questions' have only known answers (think Apgar, GCS, etc.), each of which has objective criteria that can be provided as training to any basically competent person. When a score/scale is captured at the clinical point of care, any trained person should convert the observed reality (a baby's heart rate, an accident victim's eye movements, etc.) into the same value as any other such person. In theory, a robot could be built to generate such scores, assuming the appropriate sensors could be created.
With 'true' questionnaires, the questions can be nearly anything. For example, my local GP clinic has a first-time patient questionnaire containing the question 'have you ever had heart trouble?'. It's pretty clear that many different answers are possible for the same physical facts (in my case, occasional arrhythmia with ventricular ectopics whose onset is caused by stress, caffeine, etc.; do I answer 'yes'? Maybe, since I had this diagnosed by the NHS; or maybe 'no', if I think they are only talking about heart attacks etc.).
My understanding of questionnaires functionally is that they act as a rough (self-)classification / triage instrument to save time and resources of expensive professionals and/or tests.
There is some structural commonality among questionnaires, which is clearly different from scores and scales. One of them is the simple need to represent the text of the question within the model (i.e. archetype or template), whereas this is not usually necessary in models of scores, since the coded name of the item (e.g. Apgar 'heart rate') is understood by every clinician.
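To make that structural difference concrete, a hedged ADL 1.4 fragment (node IDs and wording invented for illustration) in which the question text itself is carried in the element, via a name constraint, rather than being implied by a universally understood coded name:

```adl
ELEMENT[at0010] occurrences matches {0..1} matches {    -- Questionnaire item
    name matches {
        DV_TEXT matches {
            value matches {"Have you ever had heart trouble?"}    -- verbatim question text
        }
    }
    value matches {
        DV_CODED_TEXT matches {
            defining_code matches {[local::at0011, at0012]}    -- Yes; No
        }
    }
}
```

In a score archetype the equivalent element would simply be named with the coded item (e.g. 'Heart rate'), with no need to reproduce any question wording in the model.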
Whether there are different types of questionnaires semantically or otherwise, I don't know.
- thomas

On 05/06/2017 09:48, William Goossen wrote:
Hi Heather,

the key difference is that the assessment scales have a scientific validation, leading to clinimetric data, often for populations, but e.g. Apgar and Barthel are also reliable for individual follow-up measures.

A simple question and answer, even with some total score, does not usually have such an evidence base. I agree that in the data / semantic code representation in a detailed clinical model it is not different.

Thomas Beale
Principal, Ars Semantica
Consultant, ABD Team, Intermountain Healthcare
Management Board, Specifications Program Lead, openEHR Foundation
Chartered IT Professional Fellow, BCS, British Computer Society
Health IT blog | Culture blog
openEHR-clinical mailing list
openEHR-clinical at


More information about the openEHR-technical mailing list