clinical modelling

Know your data?

Recently I was reminded of some work we did a number of years ago. It involved a large research database, painstakingly collected over 20 years. The data was defined across a number of specialisations within a single clinical domain and represented in 83 data dictionaries stored in an Excel spreadsheet.

Data was collected based on a series of questionnaires, and we were told that successive data custodians had, true to human nature, made slight tweaks and updates to the questionnaires on multiple occasions. The data collected was actually evolving!

The only way to view the data was to open each of the 83 spreadsheets, painstakingly, one by one.

We were engaged to create archetypes to represent both the legacy data and the data that the research organisation wished to standardise to take forward.

So the activity of converting these data dictionaries - firstly to archetypes for each clinical concept, and then representing each data dictionary as a template - resulted in considerable insight into the quality and scope of the data that hadn't been available previously.

For example, the mind map below is an aggregation of the various ways that questions were asked about the topic of smoking.

Smoking

Interestingly, what it showed was that no one individual in the organisation had full oversight of the detailed data in all of the data dictionaries. The development of the archetypes effectively provided a cross section of the data, focusing on commonality at a clinical concept level, and revealed insights into the whole data collection that came as a major surprise to the research organisation. It triggered an internal review and major revision of their data.

Some of the issues apparent in this mind map are:

  • A number of questions have been asked in slightly different ways, each with subtle semantic variation, creating the old 'apples' vs 'pears' problem when all we wanted was a basket of apples;
  • Often the data is abstracted and recorded in categories, rather than recording the actual, valuable raw data which could be used for multiple purposes, not just the purpose of the rigid categories (see the sketch after this list);
  • Some questions have 'munged' two questions together with a single True/False answer, resulting in somewhat ambiguous data; and
  • Some questions are based on fixed intervals of time.
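To make the second point concrete, here is a minimal Python sketch (the values and category schemes are hypothetical, not drawn from the actual data dictionaries) showing that a raw value can always be re-categorised later, whereas data collected straight into categories can neither be converted reliably to another scheme nor recovered as raw data:

```python
# Hypothetical smoking data and category schemes, for illustration only.

def categorise(cigs_per_day, scheme):
    """Derive a category from the raw value - always possible."""
    for label, (low, high) in scheme.items():
        if low <= cigs_per_day <= high:
            return label
    return "unknown"

scheme_a = {"none": (0, 0), "1-10/day": (1, 10), "11-20/day": (11, 20), ">20/day": (21, 200)}
scheme_b = {"non-smoker": (0, 0), "light": (1, 14), "heavy": (15, 200)}

raw = 15  # the raw value supports every scheme, now and in the future
print(categorise(raw, scheme_a))  # '11-20/day'
print(categorise(raw, scheme_b))  # 'heavy'

# But a stored category from scheme A ('11-20/day') cannot be mapped
# unambiguously to scheme B (is it 'light' or 'heavy'?), nor converted back
# to a raw value - the 'apples' vs 'pears' problem, baked in at collection time.
```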

No doubt you will see other issues or have more variations of your own you could share in your systems.

And we have repeatedly seen a number of our clients undergo this same process, where archetypes help to reveal issues with enormously valuable data that had previously been obscured by spreadsheets and the like. The creation of archetypes and re-use of archetypes as consistent patterns for clinical content has an enormous positive impact on the quality of data that is subsequently collected.

And while harmonisation and pattern re-use within one organisation or project can be hard enough, standardisation between organisations or regions or national programs or even internationally has further challenges. It may take a while to achieve broader harmonisation but the benefits of interoperability will be palpable when we get there.

In the meantime the archetypes are a great way to trigger the necessary conversations between the clinicians, domain experts, organisations, vendors and other interested parties - getting a handle on our data is a human issue that needs dialogue and collaboration to solve.

Archetypes are a great way to get a handle on our data.

Archetypes: health data bridges

What do we want for our health data - silos of information models for different purposes or ones that bridge multiple use cases? From a series of emails shared on the HL7 Patient Care email list in the past few days...

Grahame Grieve (FHIR, HL7):

"Heather, you need to keep in mind the difference between FHIR and clinical models: it's not our business to say not to exchange data that people do have because some user in an edge case might not understand it. We define an exchange standard, not a clinical UI standard..."

and

"Heather, do not lose sight of the difference between a clinical standard for what care/records should be, and FHIR, which is an IT standard for how care records are."

My response:

"...I am concerned about developing another standard that you state clearly is only designed for exchange and not for what care records should be. If we are not designing to try to harmonise data requirements for health information exchange, how clinical care records are and how clinical care records should be, then we are building siloes of data structures again, that will require mappings and transforms ad infinitum. I’d hate to see us end up with a standard for exchange that can’t be implemented for persistence"...

If we end up with models for exchange, models representing current data in systems (whether or not they represent good clinical practice) and models that are regarded as the roadmap for good data, then what have we got? Three sets of data models that perpetuate the nightmare of non-interoperability.

Our openEHR archetypes are attempting to bridge all of these. Use them in whatever context you choose - messages, document exchange, EHR persistence, CDS, secondary use, aggregation and analysis, querying etc. The 'secret sauce' is the use of a second layer of modelling - the template, that allows the correct expression of the archetype appropriate for the context of use.

Mappings and transformations are acceptable where we don't have any choice, especially with legacy data, but they open us up to vulnerabilities from errors, misinterpretation and ambiguity, concerns re data integrity and possible overt data loss. Given the choice, let's work towards creating high quality data that can be re-used in multiple contexts safely.

The Questionnaire Challenge

How should we model questionnaires in our health data? This is something that @ianmcnicoll and I have grappled with for years. We have reached a conclusion in recent times, and our approach, perhaps rather controversially, is not to model them! Yes, you read me right - as a general principle, don't archetype questionnaires.

Now of course there will be some situations where there are standardised and ubiquitous questionnaires and perhaps it will be reasonable to lock these down as fixed data elements and value sets in an archetype, and maybe even govern them within a CKM environment.

But... Consider the number of questionnaires out 'in the wild' at any point in time. Should each of these be archetyped?

Let's think it through. If there are 5000 questionnaires in the world (and clearly there are way more questionnaires than that out in the health ecosystem) then we would need a corresponding whopping 5000 archetypes. And, as we all know, no two questionnaires will be alike even if they have a common parent document - it is always human nature to 'tweak' each one for local use because 'our situation' is unique. It's just the way it is. The consequence is that data captured using this myriad of similar-but-different archetypes will not be interoperable. We will have a huge number of archetypes with a huge variation in content and intent - not a lot of upside from my point of view.

Another alternative could be to define a generic archetype pattern for a questionnaire, and re-use that. In fact we tried this back in 2007 with our work in the NHS - you can see a reasonable example here. The resulting questionnaire pattern is pretty simple and relies on using the templating layer to document the questions, and either templating or use of a terminology subset to record the answers. The equivalent FHIR resource appears similar in principle and intended use. This kind of pattern provides a common framework for a questionnaire but really doesn't give us a lot more interoperability for questionnaire data - the actual questionnaire content will vary enormously and the results can still be chaotic.
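To illustrate why a generic pattern only takes us so far, here is a small Python sketch (the structure and questions are hypothetical) of that kind of shared frame: the scaffolding is common, but everything that carries meaning - the question text and the answers - is supplied locally at template or run time, so data from two sites still cannot be safely compared or queried together.

```python
# Hypothetical generic questionnaire pattern: a shared frame,
# but all of the meaning lives in the locally defined content.

def questionnaire(name, items):
    return {"questionnaire_name": name,
            "items": [{"question": q, "answer": a} for q, a in items]}

site_a = questionnaire("Pre-admission health check", [
    ("Do you smoke?", "No"),
    ("Any heart problems?", "Yes"),
])
site_b = questionnaire("Admission screening", [
    ("Have you smoked in the last 12 months?", "No"),
    ("Do you have ischaemic heart disease, arrhythmia or heart failure?", "Yes"),
])

# Both instances conform to the same generic pattern, yet a query such as
# "find all current smokers" cannot be written safely against either of them.
print(site_a)
print(site_b)
```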

So, still somewhat clunky and awkward - not an elegant solution at all.

Then we got to thinking: Do we always need to actually record the questions and their answers in the EHR? This is a critical question. Sometimes the answer is yes, but most often I think we will find that we don't need to record the actual question and (often) check box response. What we really want to record is the outcome, the real health information meaning.

Think about the practical aspects of this...

Clinicians have a systematic questioning process for history taking - we all have a similar pattern but ask subtly different questions - resulting in zillions of permutations and combinations and levels of granularity. Questions could range from: "Have you had any abdominal symptoms?" to "Have you had any nausea, vomiting, reflux, abdominal pain, diarrhoea, constipation etc" to whatever combination is relevant for a given clinician in a given clinical situation. Many will ask the same kind of question slightly differently. Every resulting questionnaire will be slightly different.

And what do they record? They don't record each question and corresponding Yes/No answers in their paper health records. They record the positive responses or the relevant negatives, and/or maybe a quick note that there were no positive responses to systematic questioning about current symptoms, problems, past history, family history etc.

So we need to ask ourselves: Do we need to record the exact question, the potential alternative answers and the actual answer?

If the answer is yes, then it is a very good reason to lock in the questionnaire in an archetype, or at least a template.

If the answer is no, then what is the best way to record the relevant data - the relevant positives and the relevant negatives? For example, with abdominal pain - record the details about their diarrhoea and colicky pain in the right lower abdomen, PLUS that the patient has NEVER had an Appendicectomy.

So I'm suggesting that we need to record the positive presence of something identified in the questionnaire, for example a symptom, diagnosis or previous procedure, and the positive absence of related things. In this case, record the details about the diarrhoea and abdominal pain as the positive presence of symptoms using the Symptom archetype and positive exclusion of a previous Appendicectomy procedure in the Exclusion of Procedure archetype. We don't need to record the actual question and corresponding Yes/No answer.

So our current approach in Ocean is to use the software UI as the means to display the checklist or questionnaire, but only record in the electronic health record any relevant answers - both the positive presence of symptoms, signs, diagnoses, procedures and tests etc, and also the positive exclusion of any of these things - all using standardised archetypes.

Let's face it, it is not often that we ever go back to look at the raw questionnaire data again. So now we tend not to record the raw data (with some exceptions, where it may be required or useful) but use a transform so that a patient's or clinician's positive tick in a box for 'Past History of Epilepsy?' will be converted into a positive statement of 'Epilepsy' within an EHR, using the Diagnosis archetype. Any additional 'other' free text, 'details' or 'date of diagnosis' from the questionnaire can be captured using other relevant data fields of the Diagnosis archetype. The benefit of this approach is that this data can then potentially be re-used into the future as part of a comprehensive Problem List, not just buried as a ticked check box within a questionnaire from years ago, perhaps never to see the light of day ever again.
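As a rough illustration of that transform (the structures and field names below are hypothetical - in a real system the target would be the governed Diagnosis archetype expressed through a template, not ad hoc dictionaries), a ticked 'Past History of Epilepsy?' box is converted at capture time into a positive diagnosis entry rather than being stored as a raw questionnaire answer:

```python
# Hypothetical sketch: convert ticked questionnaire boxes into
# archetype-style entries instead of persisting the raw check boxes.

QUESTION_MAP = {
    # question id -> (target archetype, coded meaning)
    "past_history_epilepsy": ("EVALUATION.problem_diagnosis", "Epilepsy"),
    "past_history_asthma": ("EVALUATION.problem_diagnosis", "Asthma"),
}

def transform(responses):
    """Return archetype-style entries for every positively ticked box."""
    entries = []
    for question_id, answer in responses.items():
        if answer.get("ticked") and question_id in QUESTION_MAP:
            archetype, meaning = QUESTION_MAP[question_id]
            entries.append({
                "archetype": archetype,
                "Problem/Diagnosis name": meaning,
                # any extra free text or dates from the form are carried into
                # the corresponding data fields of the same entry
                "Clinical description": answer.get("details"),
                "Date of diagnosis": answer.get("date_of_diagnosis"),
            })
    return entries

responses = {
    "past_history_epilepsy": {"ticked": True, "details": "last seizure 2009"},
    "past_history_asthma": {"ticked": False},
}
print(transform(responses))
```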

Consider the questionnaire as what it really is - just a clinical communication tool, a checklist. It is absolutely not the best means to record, persist and re-use good quality health data. What we really want to record in a consistent way are those critical pieces of health information in a formal archetype so that the data can be utilised for long term health records, decision support, exchange or analysis. Recording the check box answers from a questionnaire doesn't really do that job!

Stop the #healthIT 'religious' wars

I seemed to trigger an interesting discussion over on Grahame Grieve's blog when I posted the following question to him on Twitter. https://twitter.com/omowizard/status/432869451739185152

Grahame responded on his blog, but without really answering the essence of my question.

However, below is a great comment copied directly from the discussion thread, and submitted by Koray Atalag (from openEHR NZ). In it he has succeeded in expanding my twitter-concise thoughts rather eloquently:

"Hi guys,

I’m interested in getting these two awesome formalisms as close as possible. In what way – have no clue at the moment apart from existing mutual understanding and sympathy at a personal level. Well that’s where things start I guess ;) Ed Hammond once said, as he was visiting us in Auckland last year, convergence in standards is a must, mappings just become complex and hard to maintain. I kind of agree with that. Where I’d like to see openEHR and FHIR is really dead simple – Share what they are the best. Here is how I see things (and apologise if you find too simplistic): 1) Archetypes are THE way to model clinical information – anyone argue with that? 2) FHIR IS the way to exchange health information over the wire; modern, non-document/message oriented, heaps of interest from vendors etc. 3) openEHR’s Model Driven Development methodology can be used to create very flexible and highly maintainable health information systems. So this is a different territory that FHIR covers. Inside systems vs. Outside. A growing number of vendors have adopted this innovative approach but it’d be dumb to expect to have any significant dominance over the next decade or so.

So why not use openEHR’s modelling methodology and existing investment which includes thousands of expert clinicians’ time AND feed into FHIR Resource development – I’d assume Archetypes will still retain lots of granularity and the challenge would be to decide which fall under the 80%. I take it that this proportion thing is not mathematical but a commonsense thingy.

As with anything in life there is not one perfect way of doing health IT; but I feel that FHIR based health information exchange with propriety (and from the looks increasingly monolithic) large back-end HIS and openEHR based health information systems working with rich and changeable clinical data (note some Big Data flavour here ;)will prevail in this rapidly changing environment.

So I’m interested – probably mapping as a starting step but without losing time we need to start working together.

The non-brainer benefits will be: 1) FHIR can leverage good content – I tend to think a number of Published or Under Review type archetypes have been in use in real life for a while and that’s probably what Heather was referring to by clinical validation. A formal clinical validation is a huge undertaking and absolutely unnecessary I guess unless you’re programming the Mars Colonisation Flight health information systems!

2) openEHR can learn from FHIR experience and use it as the means to exchange information (I haven’t yet seen EHR Extracts flying over using modern web technologies). There is an EHR service model and API but I’d say it is not as mature as rest of the specs.

3) Vendors (and the World for that matter) can benefit from 1) mappings; and then 2) better FHIR Resources in terms of more effectively managing the semantic ‘impedance mismatch’ problem. An example is medication – I’d assume an HIS data model for representing medication should have at least the same granularity as the FHIR Resource it ought to fill in (practically only the mandatory items). If any less you’re in trouble – but having a sound model will ease the HIS internal data model matching and help with deciding which part is 80% vs. 20%.

4) Needless to say vendors/national programmes using pure openEHR vs. FHIR + something else will benefit hugely. Even ones using CDA – remember some countries are using (or just starting as in New Zealand) Archetypes as a reference library and then creating payload definitions (e.g. CDA) from these. So having this openEHR – FHIR connection will help transition those CDA based implementations to FHIR. Interesting outcome ;)

5) I think in the long run vendors can see the bigger picture around dealing with health information inside their systems and perhaps start refactoring or rebuilding parts of it; e.g. clinical data repositories. An HIS with sound data model will likely to produce better FHIR instances and definitely have more capability for using that information for things like advanced decision support etc.

All for now…"

And then the conversation continues, including Thomas Beale from @oceaninfo and Borut Fabjan from @marandlab. I know of many others who have expressed a strong desire for openEHR clinical content and FHIR to be more aligned and collaborative.

FHIR seems here to stay - it is gathering fantastic momentum. The openEHR methodology for developing clinical content is also gathering momentum, including national program adoption in a number of countries and in clinical registries.

Chuck Jaffe (CEO of HL7) emphasised the need for collaboration between standards at last week's Joint Inter-Ministerial Policy Dialogue on eHealth Standardization and Second WHO Forum on eHealth Standardization and Interoperability at the World Health Organisation in Geneva.

So let's do it.

We all live in the real world and need to be more proactive in working together.

It is time to stop the 'religious wars', especially the long-time, tedious 'not invented here' argument between openEHR and HL7 and the ever-ongoing 'let's reinvent the wheel' approach to EHR content that occurs more broadly.

I call on all participants in the eHealth standards world to get the 'best of breed' standards working together.

We need the domain experts!

It certainly helps to be a clinician, although recent work on development of clinical content specifications for a Hearing Health application has taken me further into modelling for the range of audiometry, 226Hz and high frequency tympanometry, audiology speech testing, and hearing screening than I’d ever imagined. Modelling the raw data capture (or downloaded from devices) for these tests is really quite simple, but enabling the complexity of different states, events and protocols that reflect audiological practice has been much more complex than I anticipated. I attempted to model these some years ago, based on my (obviously rather poor) research on the web at the time. Take a look at my meagre effort to build the original archetype for Audiogram Result (as built by a GP who has never performed an audiogram).

Audiogram Result Mindmap

See the full detail here - http://www.openehr.org/ckm/#showArchetype_1013.1.44_1

And the most recent archetype as designed and verified by practising grassroots Audiologists… I didn’t even get the name of the concept correct!

Audiometry Result Mindmap

See the full detail here - http://dcm.nehta.org.au/ckm/#showArchetype_1013.1.1097

Identifying domain experts for the development and then collaboration on verification/validation of each type of archetype/template is absolutely critical for success.

Engaging clinicians: building EHR specifications

There is a methodology that is pragmatically evolving from my experience in openEHR clinical modelling work over the past few years. It has developed in a rather ad hoc way, and totally in response to working directly with clinicians. The simplicity and apparent effectiveness – both for me and the clinicians involved – continue to surprise me each time I use it. The clinical content specifications for specialised health records and care plans that we are building are being developed with a sequence of expert input and clinical verification:

  1. Identifying the clinical requirements and business rules in conjunction with a selected initial domain expert group;
  2. Broader abstract verification of the notion of ‘maximal data set’ for ‘universal use case’ during formal archetype review cycles;
  3. Contextual validation during template review by ‘on-the-ground’ clinicians; and finally, although to a lesser degree,
  4. Validation during mapping and migration of legacy data.

With each project I am refining this process. Starting off a project with face-to-face meetings has been a ‘no brainer’ – after all, it takes a while for everyone to get the idea of what we are doing. However after initial workshops, pretty much everything else can be done via web conference, online collaboration via CKM and email.

I find the initial workshops are usually greatly satisfying. Within hours we can be creating two outputs – a mind map that reflects the clinicians' evolving conversation about their requirements and, in parallel, an equally agile template of clinical content specifications that can be verified by the clinicians in real time.

The mind map is displayed on a shared screen or via a data projector and acts as a living document, evolving as we talk through the clinical requirements, and identify the complexities, dependencies and relationships of all the components. The final mind map may be surprisingly different to how it started, and at the end of the conversation, the clinicians can verify that what they’ve said is accurately reflected in the mind map. It is an open source tool, so we can also share this around after the workshop for further comments.

Subsection of a mind map

Most recently I have begun building a template on the fly during the workshop, using any existing archetypes that are available, and identifying gaps or the need for new archetypes on the mind map as we go. In this way we are actually building the content specification in front of the clinicians as well. They get an understanding of how the abstract discussion will actually shape the resulting EHR content and they can verify it as we gradually pull it together. The domain experts are immediately equipped to answer the question: “Does this specification match what you have been telling me you do in practice?”

Same subsection of the mindmap as a template specification

This methodology seems to bring the clinicians along with us on the clinical modelling journey, and most are able to understand at least some of the implications of our requirements discussions and, in particular, the ‘shape’ of the data that we can collect. It is a process that seems to suit the thinking of many clinicians, and the overwhelmingly consistent feedback from recent workshops is that they have all actually enjoyed the experience and want to know the next steps for them to be involved. So that’s certainly a winner.

And the funders/jurisdictions are anecdotally confirming for me that they are finding that this approach is supporting higher quality specifications in a much shorter time frame.

For example, at a project kickoff workshop for a new project recently, in two days we:

  • developed a series of mind maps capturing a consensus view of the clinical requirements and business processes;
  • identified all the archetypes required for the entire project, including those that existed and were ‘fit for use’, those that needed some extension to meet requirements and new archetypes that needed to be created;
  • identified sources of information or mind mapped the requirements for each new archetype identified; and
  • built 3 templates comprising all of the existing archetypes available from a number of sources – the NEHTA CKM http://dcm.nehta.org.au/ckm/, the openEHR international CKM http://www.openehr.org/ckm/ and local drafts that I had on my own computer. For a number of the new archetypes we also collectively identified source information that would inform or be the basis for the archetype development.

All of this described above took 8 medical practitioners away from their everyday practice for only 1-2 days, each according to their availability. Yet it provided the foundation for development of a new clinical application.

Then I go home. Next steps are to refine the mind map, modify/update/specialise any archetypes for which we have identified new requirements and build the new ones. And in parallel start the collaborative process through a CKM project to ensure that existing and modified archetypes are ‘fit for (our project’s) purpose’, and to upload and initiate reviews on the new draft archetypes.

All work to progress these archetypes to maturity (ie aiming for clinical consensus) and then validate the templates as ready for handover to the implementers can be done online, asynchronously and at a time convenient to the clinicians' work/life balance!

Clinician-friendly view of the same template in CKM

I live over 2000 kilometres away from these clinicians. Yet the combination of web conference and CKM enables us to operate as an ongoing collaborative team. It seems to be working well at the moment... No doubt I'll continue to learn how to do it better.

The Archetype Journey...

I'm surprised to realise I've been building archetypes for over 7 years. It honestly doesn't feel that long. It still feels like we are in the relatively early days of understanding how to model clinical archetypes, to validate them and to govern them. I am learning more with each archetype I build. They are definitely getting better and the process more refined. But we aren't there yet. We have a ways to go! Let me try to share some idea of the challenges and complexities I see…

We can build all kinds of archetypes for different purposes.

There are the ones we just want to use for our own project or purpose, to be used in splendid isolation. Yes, anyone can build an archetype for any reason. Easy as. No design constraints, no collaboration, just whatever you want to model and as large or complex as you like.

But if you want to build them so that they will be re-used and shared, then a whole different approach is required. Each archetype needs to fit with the others around it, to complement but not duplicate or overlap; to be of the same granularity; to be consistent with the way similar concepts are modelled; to have the same principles regarding the level of detail modelled; the same approach to defining scope; and of course the same approach to defining a clinical concept versus a data element or group of data elements… The list goes on.

Some archetypes are straightforward to design and build, for example all the very prescriptive and well recognised scales like the Braden Scale or Glasgow Coma Scale. These are the 'no brainers' of clinical modelling.

Some are harder and more abstract, such as those underpinning a clinical decision support system of orders and activities to ensure that care plans are carried out, clinical outcomes achieved and patients don't 'fall through the cracks' from transitions of care.

Then there are the repositories of archetypes that are intended to work as a single, cohesive pool of models – each archetype representing a single clinical concept, sitting closely aligned to the next one, but minimising any duplication or overlap.

Archetype ecosystem

That is a massive coordination task, and one that I underestimated hugely when we embarked on the development of the openEHR Clinical Knowledge Manager, and especially more recently, the really active development and coordination required to manage the model development, collaboration and management process within the Australian CKM – where the national eHealth program and jurisdictions are working within the same domain of models, developing new ones for specific purposes and re-using common, shared models for different use cases and clinical contexts.

Archetype ecosystems are hard: numbers of archetypes need to work together intimately and precisely to enable the accurate and safe modelling of clinical data. Physical examination is the perfect example, and one that has been weighing on my mind now for some time. I've dabbled with small parts of this over the years, as specific projects needed to model a small part of the physical exam here and there. My initial focus was on modelling generic patterns for inspection, palpation, auscultation and percussion – four well identified pillars of the art of clinical examination. If you take a look at the Inspection archetype, clinicians will recognise the kind of pattern that we were taught in First Year of our Medical or Nursing degrees. And I built huge mind maps to try to anticipate how the basic generic pattern could be specialised or adapted for use in all aspects of recording the inspection component of clinical examination.

mindmap

Over time, I have convinced myself that this would not work, and so the ongoing dilemma was how to approach it to create a standardised, yet extraordinarily flexible solution.

Consider the dilemma of modelling physical examination. How can we capture the fractal nature of physical examination? How can we represent the art of every clinician's practice in standardised archetypes? We need models that can be standardised, yet we also need to be able to respond to the massive variability in the requirements and approach of each and every clinician. Each profession will record the same concept in different levels of detail, and often in a slightly different context. Each specialty will record different combinations of details. Specialists need all the detail; generalists only want to record the bare basics, unless they find something significant in which case they want to drill down to the nth degree. And don't forget the ability to just quickly note 'NAD' as you fly past to the next part of the examination; for rheumatologists to record a homunculus; for the requirement for addition of photos or annotated diagrams! Ha – modelling physical examination IS NOT SIMPLE!

I think I might have finally broken the back of the physical examination modelling dilemma just this week. Seven years after starting this journey, with all this modelling experience behind me! The one sure thing I have learned – a realisation of how much we don't know. Don't let anyone tell you it is easy or we know enough. IMO we aren't ready to publish standards or even specifications about this work, yet. But we are making good, sound, robust progress. We can start to document our experience and sound principles.

This new domain of clinical knowledge management is complex; nobody should be saying we have it sorted...

Archetype Re-use

Last Thursday & Friday @hughleslie and I presented a two day training course on openEHR clinical modelling. Introductory training typically starts with a day to provide an overview – the "what, why, how" about openEHR, a demo of the clinical modelling methodology and tooling, followed by setting the context about where and how it is being used around the world. Day Two is usually aimed at putting away the theoretical powerpoints and getting everyone involved - hands on modelling. At the end of Day One I asked the trainees to select something they will need to model in coming months and set it as our challenge for the next day. We talked about the possibility of health or discharge summaries – that's pretty easy as we largely have quite mature content for these and other continuity of care documents. What they actually sent through was an Antineoplastic Drug Administration Checklist, a Chemotherapy Ambulatory Care Nursing Intervention and an Antineoplastic Drug Patient Assessment Tool! Sounded rather daunting! Although all very relevant to this group and the content they will have to create for the new oncology EHR they are building.

Perusing the Drug Checklist first - it was easily apparent it was going to need a template comprising mostly ACTION archetypes, but it meant starting with some fairly advanced modelling which wasn't the intent for an initial learning exercise. The Patient Assessment Tool, primarily a checklist, had its own tricky issues about what to persist sensibly in an EHR. So we decided to leave these two for Day Three or Four or..!

So our practical modelling task was to represent the Chemotherapy Ambulatory Care Nursing Intervention form. The form had been sourced from another hospital as an example of an existing form and the initial part of the analysis involved working out the intent of the form.

What I've found over years is that we as human beings are very forgiving when it comes to filling out forms – no matter how bad they are, clinical staff still endeavour to fill them out as best they can, and usually do a pretty amazing job. How successful this is from a data point of view, is a matter for further debate and investigation, I guess. There is no doubt we have to do a better job when we try to represent these forms in electronic format.

We also discussed that this modelling and design process was an opportunity to streamline and refine workflow and records rather than perpetuating outmoded or inappropriate or plain wrong ways of doing things.

So, an outline of the openEHR modelling methodology as we used it:

  1. Mind map the domain – identify the scope and level of detail required for modelling (in this case, representing just one paper form)
  2. Identify:
    1. existing archetypes ready for re-use;
    2. existing archetypes requiring modification or specialisation; and
    3. new archetypes needing development
  3. Specialise existing archetypes – in this case COMPOSITION.encounter to COMPOSITION.encounter-chemo with the addition of the Protocol/Cycle/Day of Cycle to the context (a rough sketch of this idea follows the list)
  4. Modify existing archetypes – in this case we identified a new requirement for a SLOT to contain CTCAE archetypes (identical to the SLOT added to the EVALUATION.problem_diagnosis archetype for the same purpose). Now in a formal operational sense, we should specialise (and thus validly extend) the archetype for our local use, and submit a request to the governing body for our additional requirements to be added to the governed archetype as a backwards compatible revision.
  5. Build new archetypes – in this case, an OBSERVATION for recording the state of the inserted intravenous access device. Don't take too much notice of the content – we didn't nail this content as correct by any means, but it was enough for use as an exercise to understand how to transfer our collective mind map thoughts directly to the Archetype Editor.
  6. Build the template.
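As a loose analogy for step 3 only - this is plain Python rather than ADL, and the field names are my assumptions, not the actual archetype nodes - specialisation keeps everything the parent defines and adds the extra context items, so anything that only understands the parent can still read the specialised data:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EncounterContext:
    """Context items already defined by the parent (think COMPOSITION.encounter)."""
    start_time: str
    setting: str

@dataclass
class ChemoEncounterContext(EncounterContext):
    """Specialisation: everything the parent defines is retained, and the
    chemo-specific context items from the workshop are added."""
    protocol: Optional[str] = None
    cycle: Optional[int] = None
    day_of_cycle: Optional[int] = None

visit = ChemoEncounterContext(
    start_time="2013-05-02T09:30",
    setting="oncology day unit",
    protocol="FOLFOX", cycle=3, day_of_cycle=1,
)
print(visit)  # a system that only knows the parent can still read start_time/setting
```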

So by the end of the second day, the trainee modellers had worked through a real-life use-case including extended discussions about how to approach and analyse the data, and with their own hands were using the clinical modelling tooling to modify the existing, and create new, archetypes to suit their specific clinical purpose.

What surprised me, even after all this time and experience, was that even in a relatively 'new' domain, we already had the bulk of the archetypes defined and available in the NEHTA CKM. It just underlines the fact that standardised and clinically verified core clinical content can be re-used effectively time and time again in multiple clinical contexts. The only area in our modelling that was missing, in terms of content, was how to represent the nurses' assessment of the IV device before administering chemo, and that was not oncology specific but rather a universal nursing activity in any specialty or domain.

So what were we able to re-use from the NEHTA CKM?

…and now that we have a use-case we could consider requesting adding the following from the openEHR CKM to the NEHTA instance:

And the major benefit from this methodology is that each archetype is freely available for use and re-use from a tightly governed library of models. This openEHR approach has been designed specifically to counter the traditional EHR development of locked-in, proprietary vendor data. An example of this problem is well explained in a timely and recent blog - The Stockholm Syndrome and EMRs! It is well worth a read. Although not so obvious in the US, there is an increasing momentum and shift towards approaches that avoid health data lock-in and instead enable health information to be preserved, exchanged, aggregated, integrated and analysed in an open and non-proprietary format - this is liquid data; data that can flow.

Warts and all: Reviewing an archetype

During my visit to HIMSS12 in February, I finally met Jerry Fahrni (@JFahrni) face to face - a pharmacist and Twitter colleague with whom I'd had 140 character conversations over some years. We'd also talked on Skype once about some of the clinical archetypes, and during our HIMSS conversation I managed to persuade him to take a look at the openEHR community's Adverse Reaction archetype and participate in the community review.

He did, and at my further request he has put ink to blog and has recorded his experience as a newbie reviewer so that others might have some sense of what completing an archetype review entails, warts and all.

Jerry's review, reproduced here:

...According to good ol’ Merriam-Webster an archetype is “the original pattern or model of which all things of the same type are representations or copies: also : a perfect example“. Simple enough, but still too vague for my brain so I went in search of a better explanation which I found at Heather’s blog – Archetypical.

According to the Archetypical site ”openEHR archetypes are computable definitions created by the clinical domain experts for each single discrete clinical concept – a maximal (rather than minimum) data-set designed for all use-cases and all stakeholders. For example, one archetype can describe all data, methods and situations required to capture a blood sugar measurement from a glucometer at home, during a clinical consultation, or when having a glucose tolerance test or challenge at the laboratory. Other archetypes enable us to record the details about a diagnosis or to order a medication. Each archetype is built to a ‘design once, re-use over and over again’ principle and, most important, the archetype outputs are structured and fully computable representations of the health information. They can be linked to clinical terminologies such as SNOMED-CT, allowing clinicians to document the health information unambiguously to support direct patient care. The maximal data-set notion underpinning archetypes ensures that data conforming to an archetype can be re-used in all related use-cases – from direct provision of clinical care through to a range of secondary uses.” That gave me a better understanding of what they were trying to do.

Anyway, when Heather asked me to review the Adverse Reaction archetype I was a little hesitant. The projects I’m asked to be involved with are typically much smaller in scale. This was something different and I felt a little intimidated. My gut reaction was to politely decline, but when someone asks you to do something face to face it makes excusing yourself for some lame reason a lot harder. So I agreed with more than a bit of trepidation.

The openEHR project utilizes a system called the Clinical Knowledge Manager (CKM). In the most basic terms, the CKM is an online content management system for all the archetypes being designed by the openEHR project, and it’s impressive. A more in depth description can be found here.

Logging into the system was simple. The email invitation I received to review the Adverse Reaction Archetype contained a link that took me to the exact location I was supposed to be. From there things got a bit more complicated. The CKM is easy enough to navigate, but the amount of information and navigational elements within the system is staggering. It took me a while to figure out exactly what I was supposed to do. Once I figured it out I was able to quickly go through the archetype, read what other comments people had made and make a couple of minor notes myself. One thing I could never completely figure out was how to save my work in the middle and continue later. Sounds simple enough, but for whatever reason it just wasn’t obvious to me. I ended up powering through my “review” in one extended session because I was afraid I’d lose my place. The archetype itself was impressive. It’s clear from the information and detail that people have spent a lot of time and effort developing the adverse reaction archetype. There’s no question that a lot of great minds had been involved in this work. The definition made sense as did the data that was being collected and presented. The archetype offered flexibility for information gathering that included the simplest form of adverse reaction to complex re-exposure and absolute contraindication notation (this is sorely missing in many systems I’ve used over my career). Overall I had little insight to offer during the review, only a couple of minor comments.

I’d say the entire process was pretty straightforward with some minor complications. Like everything else I’m sure the process would get easier over time and multiple uses.

Thanks Jerry. Your independent and honest opinion is much valued. Perhaps next time... !! (Just joking)

CIMI progress...

Just spreading the news... The Clinical Information Modelling Initiative met again recently and the minutes are now available from the early but rapidly evolving CIMI wiki site - http://www.cimiwiki.org

Intro from the latest CIMI minutes:

CIMI held its 5th group meeting in San Antonio from January 12 – 14, 2012. Over 35 people attended in person with an additional 5 participants attending via WebEx.

At this meeting, the group:

  • Established the criteria for membership and the process for adding members to the CIMI group
  • Authorised an interim executive committee
  • Determined a tentative schedule of meetings for 2012
  • Moved forward with the definition of the modeling framework
  • Formalized two task forces to begin the modeling work so that example models can be presented at the next meeting
  • Recognized the formation of a Glossary Group (lead to be announced)
  • Agreed to plans for utilizing existing tools to rapidly develop and test a candidate reference model and to create a small group of example CIMI models that build on the reference model work

Full Minutes are here

Why the buzz about CIMI?

With the recent public statement from the Clinical Information Modelling Initiative (CIMI) my cynical heart feels a little flutter of excitement. Maybe, just maybe, we are on the brink of a significant disruption in eHealth. Personally I have found that the concept of standardising clinical content to be compelling and hence my choice to become involved in development of archetypes. During my openEHR journey over the past 5 or so years it has been very interesting to watch the changing attitudes internationally - from curiosity and 'odd one out'  through to "well, maybe there's something in this after all".

And now we have the CIMI announcement...

So what has been achieved? What should we celebrate and why?

At worst, we have had a line drawn in the sand: a prominent group of thought leaders in the international health informatics domain have gathered and, through a somewhat feisty process, recognised that a collaborative approach to the development of a single logical clinical content representation (the CIMI core reference model) is a desirable basis for interoperability across formalisms. Despite most of the participants having significant investment and loyalty to their own current methodology and flavor of clinical models, they have cast aside the usual 'not invented here' shackle and identified a common approach to an initial modelling formalism from which other models will be derived or developed. Whether any common clinical content models are eventually built or not, naming of ADL 1.5 and the openEHR constraint model as the initial formalism is a significant recognition of the longstanding work of the openEHR Foundation team - the early specifications emerged nearly 20 years ago.

At its idealistic best, it potentially opens up a new chapter for health informatics, one that deviates from the relatively safe path of incremental innovation that we have followed for so many years - the reliance on messages/documents/hubs to enable us to exchange health information. There is an opportunity to take a divergent path, a potentially transformational innovation, where the focus is on the data itself, and the message/document/EHR becomes more simply just the receptacle or vehicle for the data. It could give us a very real opportunity to store lifelong health information; simplify data exchange (whether by messages or documents), aggregation, querying and analysis; and support knowledge-based activities such as decision support - all because we will (hopefully) have non-proprietary, common, agreed and fully defined models of clinical content and known transformations between each formalism.

Progress during the next few months will be telling. In January 2012, immediately before the next HL7 meeting in San Antonio, the group will gather again to discuss next steps.

There is a very real risk that despite best intentions all of this will fade away to nothing. The list of participating organisations, including high profile standards organisations and national eHealth programs, is a veritable Who's Who of international health IT royalty, so they will all come with their own (organisational and individual) work experience, existing modelling resources, hope, enthusiasm, cynicism, political agendas, bias and alliances. It could be enough to sink the work of this fledgling group.

But many are battle-weary, having been trudging down this eHealth path for a long time - some now gradually realising that the glacial incremental innovation is not delivering the long-term sustainable answers required for creating 21st Century EHRs as they had once hoped. So maybe this could be the trigger to make CIMI fly!

I think that CIMI is a very bright spark on the health IT horizon. Let's hope that with the right management and governance it can be agilely nurtured into a major positive force for change. And in the future, when its governance is mature and processes robust, we can integrate CIMI into the formal standards processes.

Best of luck, CIMI. We're watching!

CIMI & beyond...

The Clinical Information Modelling Initiative (#CIMI) is currently meeting in London. It comprises a significant group of healthcare IT stakeholders and was formed some months ago as an initiative by Dr Stan Huff. After a number of face to face meetings and email list exchanges, the intent is that at the end of this 3 day meeting there will be an agreed decision on a common clinical content modelling formalism/methodology for our Electronic Health Records. For background, from Sam Heard’s email to the openEHR email list on November 2, 2011:

The main topic I want to address is the international initiative to develop a standardised clinical modelling methodology. This has some IHTSDO secretarial support and is led by Dr Stan Huff of Intermountain Healthcare, a former HL7 Chairperson and co-founder of LOINC, who has been advocating a model-based approach for many years. The current approach at Intermountain has been influenced by openEHR and uses a two-level modelling approach. Stan has established a leadership group through trust and reputation, which includes a variety of agencies who have been working in the area and national eHealth programs or major initiatives who are interested in consuming the models. It has grown out of an HL7 Fresh Look initiative and is currently known as the Clinical Information Modelling Initiative (CIMI).

The group has committed to determining a single formalism for clinical modelling and ADL and openEHR are on the list of alternatives which is as follows:

  • Archetype Object Model/ADL 1.5 openEHR
  • CEN/ISO 13606 AOM ADL 1.4
  • UML 2.x + OCL + healthcare extensions
  • OWL 2.0 + healthcare profiles and extensions
  • MIF 2 + tools HL7 RIM – static model designer

Proponents of the five different approaches have been presenting to members of the group, who have a variety of experience in these matters. Fourteen organisations will cast a vote on the formalism to use including openEHR, Singapore, UK NHS, Results 4 Care, HL7, Canada Infoway, 13606 Association, Tolven, CDISC, GE/Intermountain, US Departments, CDISC, SMArt and Mitre.

At the preliminary vote, held recently on November 20, the two most popular options were openEHR ADL 1.5 and UML.

Today CIMI will vote on a proposal for either ADL 1.5 or UML to be adopted as the initial common formalism for use, and determine a road map for coordinated development of semantically interoperable clinical models into the future. The potential impact of this is huge and exciting. It could be a disruptive change in health IT.

We hold our collective breath!

The DCM environment - a crowdsourcing initiative

There are a number of detailed clinical models available to choose from, from diverse backgrounds, for many differing purposes and with varying degrees of success and penetration. More are arriving on the scene as time goes on – only this week the Clinical Information Models from ONC in the US have been released for their first 2 week period of public comment, and I'd never heard of them before. I haven't been able to find a single document pointing to an overview of available models – current or past. So, given the current development of a quality standard around detailed clinical models in ISO TC 215 and the number of models under development, I thought it timely to try to co-ordinate sourcing a picture of the detailed clinical model landscape.

Well aware that my understanding is limited to the openEHR archetypes I work with, I thought I’d take this opportunity to draw together some of my research and request some assistance from others to crowdsource a picture of the detailed clinical model environment that we operate within.

I have published my initial attempt as a Google doc, see below. The existing information needs to be verified; there are clearly many gaps that need to be filled in; and there are probably other DCMs to be added.

EDIT the DCM environment document...

Anyone can participate, and if you edit, please add your name/org/email to the Contributor list, so that attribution can be made.

Please feel free to edit; to add new DCMs; to correct and verify the facts. Add a column, suggest changes to structure where you see a need... Just make a comment, if you like.

You can clearly see all the gaps in our collective knowledge at present…

Let’s see if we can actually pull together a useful resource. The intent is collaboration to achieve a goal that is possibly greater than most of us can achieve by ourselves.

All contributions are gratefully received and can be freely shared with anyone who is interested.

The latest crowdsourced version of the document appears below. You will need to scroll to see the entire content.

[googleapps domain="docs" dir="document/pub" query="id=11xGZs4drfwb7TY15Iwpk52UIxZcFhqMcMp1i3yV4To8&embedded=true" /]

________________________________________

Further Comment: 11 August 2011

I've been asked where this work is going and what is planned...

The answer is that I haven’t a specific plan.

To my knowledge there has been no environmental scan to explore the range of models available, so I thought I'd try myself. I spent a day developing the initial framework, sparse though it was. Googling was a very frustrating experience and finding the information that we're interested in from available websites was not easy.

Hence I thought I’d experiment with the crowd sourcing approach. I know there is a rising tide of interest in the generic concept of detailed clinical models and a lot of confusion about the work in ISO and HL7.

I hoped it might contribute to clarifying the situation and providing a concrete anchor on which to base discussions on detailed clinical models.

Worst case scenario, the final result may be little more than a brainstorming, however it will at least form the basis for anyone interested to launch a more rigorous research program, including potentially feeding into the ISO work. Best case scenario, we may have a significant resource. Final outcome will probably be somewhere in between.

I’m happy to leave it online as a general resource – it seems to have generated a lot of interest. I'm hopeful that it will keep evolving.

Anatomy of a Procedure

ACTION archetypes are one of the harder concepts to grasp in the openEHR clinical modeling world. ACTIONs appear to be promising the world - allowing for recording all activities from planning, scheduling, postponing, cancelling, suspending, through to carrying out and completing an activity. While they may appear cumbersome at first, they are surprisingly elegant and fit for purpose... once you can twist your brain around them.

Consider ACTION archetypes as a walrus - awkward on land, but graceful in the water - well maybe the analogy is being stretched a little, but you get my drift.

Whatever you do, don't underestimate the power of these little clinical models. ACTIONs are designed to enable sophisticated tracking of activities being carried out in a distributed environment - perfect for managing care pathways between multiple providers in disparate locations - not an easy task.

Take a look at this slide show - my attempt to provide a concrete example of how a Procedure ACTION archetype will support documentation about how an INSTRUCTION or order for a Procedure (any procedure, potentially) is carried out in a shared electronic health record environment.

  • 'Pathway steps' are identified as the steps (not necessarily sequential) or clinical activities in which it makes sense to record some information in the health record.
  • The Data Elements that are identified in the 'Description' include any and all information that we may wish to record for any of the Pathway steps. For example, the data we need to record when we plan to perform a Procedure or the information required to record that the Procedure was performed or abandoned, including the reason for abandonment (a rough sketch of this pattern follows this list).
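As a rough sketch of the idea (hypothetical structures and illustrative pathway step names, not the actual Procedure ACTION archetype definition), each provider appends an ACTION entry recording which pathway step has been reached, so the shared record accumulates the full trajectory of a single order:

```python
# Hypothetical sketch of ACTION-style pathway tracking for a single
# ordered procedure in a shared record.

shared_record = []

def record_action(pathway_step, provider, **data):
    """Append one ACTION entry for the ordered procedure."""
    entry = {"procedure": "Colonoscopy", "pathway_step": pathway_step,
             "provider": provider, **data}
    shared_record.append(entry)

# The same order, tracked across providers and over time:
record_action("Procedure planned", "GP clinic", date="2011-06-01")
record_action("Procedure scheduled", "Day surgery unit", date="2011-07-04")
record_action("Procedure postponed", "Day surgery unit",
              reason="patient unwell on the day")
record_action("Procedure performed", "Day surgery unit", date="2011-07-18",
              description="caecum reached, two polyps removed")

for entry in shared_record:
    print(entry["pathway_step"], "-", entry["provider"])
```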

[slideshare id=8563247&doc=201107instructionsactions-110711064154-phpapp02]

 

Updated: August 17, 2011

Unambiguous data: Positive presence; positive absence

The bulk of openEHR archetype modelling is focused on how to record the positive presence of data – for example the diagnosis of diabetes or asthma, or orders for long-term beta blocker medication. When it is not possible to determine data values – the nature of clinical medicine being not so exact a science – 'null flavours' are used to mark an 'incapacity to obtain data', especially for a mandatory data point. The current openEHR null flavours are:

  • no information - "No information provided; nothing can be inferred as to the reason why, including whether there might be a possible applicable value or not";
  • unknown – "A possible value exists but is not provided";
  • masked – "The value has not been provided due to privacy settings"; and
  • not applicable – "No valid value exists for this data item".

If there is no data recorded at all, nor any null flavour, no conclusion can safely be drawn from the lack of data; there is a void in our knowledge.

And then there is the need to record things that are noticeably not present. Some approaches subscribe to the notion of 'negation' – resulting in a somewhat awkward and potentially ambiguous statement that might go something along the lines of 'diagnosis of diabetes, not', potentially sitting closely alongside another selectable term of 'diagnosis of diabetes'. You can see that it is easy to get it wrong.

The openEHR preferred approach is to record the positive absence of data – an explicit and unambiguous statement of 'no history of diabetes' in an archetype with the explicit intent to record the absence of a diagnosis.
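A minimal sketch of the distinction, using hypothetical entries (the field and archetype names are illustrative; in real openEHR data these would be the Problem/Diagnosis archetype, a null flavour on a data point, and an Exclusion archetype respectively):

```python
# Three very different statements that are easy to conflate, sketched as
# hypothetical record entries.

# 1. Positive presence: the diagnosis has been made.
presence = {"archetype": "EVALUATION.problem_diagnosis",
            "Problem/Diagnosis name": "Diabetes mellitus"}

# 2. Incapacity to obtain data: the question was asked but no value is available.
null_flavour = {"archetype": "EVALUATION.problem_diagnosis",
                "Problem/Diagnosis name": None,
                "null_flavour": "unknown"}  # a possible value exists but is not provided

# 3. Positive absence: an explicit statement that there is no such diagnosis,
#    valid only at the moment it was recorded.
exclusion = {"archetype": "EVALUATION.exclusion",
             "Statement": "No history of diabetes",
             "date_recorded": "2011-07-01"}

# And no entry at all means exactly that: nothing can safely be inferred.
for record in (presence, null_flavour, exclusion):
    print(record)
```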

Specific absence of data needs to be recorded positively, where this is sensible:

  • To encourage unambiguous clinical records;
  • To assist clinical decision support; and sometimes (unfortunately)
  • For medico-legal purposes.

Examples of global positive exclusion statements include:

  • No known adverse reactions
  • No surgical history
  • No significant family history
  • Not currently taking any medicines

Specific positive exclusion statements include:

  • No known allergy to penicillin (or <insert drug/plant/food/venom or other substance here>);
  • No family history of bowel cancer (or <insert diagnosis here>);
  • Not currently taking immunosuppressants (or <insert medicine here>); or
  • No pacemaker in situ.

Keep in mind that exclusion statements are only meaningful and relevant at the decision-making moment at which they are recorded. They have a fleeting temporal validity, and then the question has to be asked again – "Are you allergic to anything?" or "Have you ever had any serious illness or operations?".

Some examples:

  • Recording an absence of an adverse reaction event to penicillin becomes immediately obsolete when the patient suffers an anaphylactic reaction to the penicillin subsequently administered; or
  • The past history might reveal no previous diagnosis of asthma, yet asthma might be the cause for today's wheeze; or
  • In a Discharge Summary – 'Not currently taking any medications' is a vital piece of information for the patient's usual GP, yet that drug-free state might only be current until the first follow-up visit.

So exclusions cannot be relied upon for any future decision-making. They are just 'in the moment' statements that need to be re-asked and regularly updated in records (both paper and electronic) as part of routine clinical practice.

So while they may be valuable, does that mean we should model every possible negative statement positively? Certainly not - it is neither desirable nor helpful. A pragmatic approach is to identify those situations where explicitly modelling exclusion statements provides some obvious value; preferably those that will have consistent re-use or may be utilised in knowledge-based activities such as Clinical Decision Support (CDS) systems.

Improved outcomes may result if CDS systems prompt for the recording of a positive statement of:

  • 'no known adverse reactions' prior to prescribing medicines; or
  • 'no history of asthma' prior to prescribing beta blockers.

Currently we have archetypes for recording the presence of an Adverse Reaction, a Problem/Diagnosis, Family History and Medicines. In addition we have a generic parent archetype for Exclusion (ie the positive absence) plus specialisations for Adverse Reaction, Medication, Family History and Problem/Diagnosis exclusions so far.  The generic Exclusion archetype can be used as a standardised way to record the more uncommon statements – 'Not (currently) pregnant' might be a candidate here. But if there are regular use cases identified, these too may warrant their own explicit archetypes to ensure that positive absence is recorded consistently and safely.
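
By way of illustration, here is a rough Python sketch - hypothetical field names, not the actual Exclusion archetype definition - of how a global and a specific positive exclusion statement might be captured, with the recording date kept front and centre because, as noted above, these statements are only valid at the moment they are made.

```python
# A rough sketch (invented structure, not an openEHR archetype definition)
# of positive exclusion statements. The recording date matters: the
# statement is only valid at the decision-making moment it was made.

from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class ExclusionStatement:
    statement: str                        # e.g. "No known adverse reactions"
    excluded_item: Optional[str] = None   # specific substance/diagnosis, if any
    recorded_on: date = date.today()
    recorded_by: str = ""


global_exclusion = ExclusionStatement(
    statement="Not currently taking any medicines",
    recorded_on=date(2011, 8, 17),
    recorded_by="Dr A Clinician",
)

specific_exclusion = ExclusionStatement(
    statement="No known allergy",
    excluded_item="penicillin",
    recorded_on=date(2011, 8, 17),
    recorded_by="Dr A Clinician",
)

# Downstream use (e.g. decision support) should always check how old the
# statement is rather than treating it as permanently true.
for exclusion in (global_exclusion, specific_exclusion):
    age_days = (date.today() - exclusion.recorded_on).days
    target = f" to {exclusion.excluded_item}" if exclusion.excluded_item else ""
    print(f"{exclusion.statement}{target} - recorded {age_days} days ago")
```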

Modelling pregnancy

How to go about using openEHR archetypes to model a shared pregnancy record?

Step 1: Analyse the existing resources

Identify each of the discrete clinical concepts that contribute to the whole record. 24 clinical concepts have been identified within this example Pregnancy Health Record.

Step 2: Map clinical concepts to archetypes

Map them to existing archetypes or determine which new ones we need. Determine the state of the existing archetypes – do we need to modify or enhance with additional requirements? Consider all possible use cases for each archetype – the end result will be better if we can be as inclusive as possible for all clinical scenarios.

The archetypes identified for this Pregnancy record are:

Clinical Concept | Comment | Corresponding archetype concept name (NOTE: linked to the actual archetype in a CKM instance)
Antenatal Plan | An overarching plan for the patient's antenatal care | Antenatal Plan <INSTRUCTION.antenatal_plan>
Information Provided | Details of each piece of health education information discussed or provided to the patient | Information Provided <ACTION.health_education>
Social Summary | | Social Summary <EVALUATION.social_summary>
Adverse Reaction List | The Adverse Reaction list is made up of multiple Adverse Reaction entries | Adverse Reaction <EVALUATION.adverse_reaction>
Family History List | The Family History list is made up of multiple Family History entries | Family History <EVALUATION.family_history>
Diagnosis; Problem/Diagnosis List; Recent Problem/Diagnosis List | Problem lists, whether current or past, are made up of multiple Problem/Diagnosis entries – the same data pattern will be used in all instances | Problem/Diagnosis <EVALUATION.problem_diagnosis>
Procedure list | The Procedure list is made up of multiple Procedure entries | Procedure <ACTION.procedure>
Smoking consumption | | Tobacco use <OBSERVATION.substance_use-tobacco>
Alcohol consumption | | Alcohol consumption <OBSERVATION.substance_use-alcohol>
Obstetric Summary | | Obstetric summary <EVALUATION.obstetric_summary>
Pregnancy Summary | An evolving summary of key data about a single pregnancy – either current or past | Pregnancy summary <EVALUATION.pregnancy_summary>
Examination Findings | | Examination Findings <OBSERVATION.exam>; Examination of the uterus <CLUSTER.exam-uterus>; Examination of the fetus <CLUSTER.exam-fetus>
Current Medication List | The Medication list is made up of multiple Medication instruction entries | Medication instruction <INSTRUCTION.medication>
Medication administered | | Medication action <ACTION.medication>
Height | | Height/Length <OBSERVATION.height>
Weight | | Body weight <OBSERVATION.body_weight>
Body Mass Index | | Body Mass Index <OBSERVATION.body_mass_index>
Blood Pressure | | Blood Pressure <OBSERVATION.blood_pressure>
Fetal Movements | | Fetal Movements <OBSERVATION.fetal_movements>
Fetal heart sounds | | Heart rate and rhythm <OBSERVATION.heart_rate>
Urinalysis; Pathology Test results | The pattern for urinalysis is identical to that for pathology test results, and in some cases urinalysis will be performed in the lab, so the same pattern will be used | Pathology test result <OBSERVATION.pathology_test>
Ultrasound results | | Imaging examination result <OBSERVATION.imaging_exam>

KEY: Pink archetype names indicate those that are early drafts. Green archetype names indicate those that are reasonably mature, through previous reviews within CKM. Red archetype names indicate those yet to be developed.

Step 3: CKM Review

Within CKM each archetype will undergo review rounds by a range of clinicians, terminologists, software engineers, informaticians and other interested experts. The purpose of each review round will be to fine-tune each archetype so that it meets the requirements for each stakeholder group.

CKM Videos:

Step 4: Create an openEHR template

Once community consensus is reached within CKM, the archetypes will be aggregated and constrained in a template to accurately meet the specific use-case requirements for this particular Pregnancy Health Record. Each bold, black heading in the following screenshot represents an archetype. The Pregnancy Summary archetype has been opened up to see the details. Black text indicates data elements that will be utilised in this use case; grey text indicates inactive data elements.

Other groups may use the same archetypes aggregated in a different order and constrained in a different way to represent their version of the Pregnancy Health Record. As the data in both Pregnancy Records is being recorded according to the same pattern, dictated by the underlying archetype, the information can be exchanged without ambiguity or loss of meaning.
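
To illustrate the idea of constraining shared archetypes for a particular use case, here is a toy Python sketch - invented element names, not openEHR template or ADL syntax - in which a template marks individual data elements from the shared archetypes as active or inactive, mirroring the black/grey distinction in the screenshot. Another group could activate a different subset without changing the underlying pattern.

```python
# An illustrative sketch (invented names, not openEHR template/ADL syntax)
# of aggregating archetypes into a use-case-specific template and switching
# individual data elements on (black text) or off (grey text).

pregnancy_health_record_template = {
    "EVALUATION.pregnancy_summary": {
        "Estimated due date": True,     # active in this use case
        "Number of fetuses": True,
        "Outcome of pregnancy": False,  # inactive for this antenatal record
    },
    "OBSERVATION.blood_pressure": {
        "Systolic": True,
        "Diastolic": True,
        "Mean arterial pressure": False,
    },
}

# A different group could reuse exactly the same archetypes but activate a
# different subset of elements; the underlying pattern - and therefore the
# meaning of exchanged data - stays the same.
for archetype, elements in pregnancy_health_record_template.items():
    active = [name for name, used in elements.items() if used]
    print(f"{archetype}: {', '.join(active)}")
```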

Quality indicators & the wisdom of crowds

Harnessing the 'wisdom of the crowd' has the potential to be more powerful than traditional quality processes for determining the quality of our clinical content in EHRs - we need to learn how best to tap into that wisdom...

Archetype quality I

Up until recently, clinical content models such as archetypes have been regarded as a novelty; watched from the sidelines with interest by many, but not regarded as mainstream. However, now that they are increasingly being adopted by jurisdictions and used in real systems, modellers need to change their approach to include processes, methodologies and quality criteria that ensure the models are robust, credible and fit for purpose. Some work has been done on identifying quality criteria for clinical models, but there is no doubt that this is still very much in its infancy:

  • There has been some slowly progressing work in ISO TC 215 - ISO 13972 Health Informatics: Detailed Clinical Models. Recently it has been split into two separate components, not yet publicly available:
    • Part I - Quality processes regarding detailed clinical model development, governance, publishing and maintenance; and
    • Part II - Quality attributes of detailed clinical models.

Most of the work on quality of clinical models has been based largely on theory, with few groups having practical experience in developing and managing collections of clinical models, other than in local implementations.

In 2007, Ocean Informatics participated in a significant pilot project. The recommendations were published in the NHS CFH Pilot Study 2007 Summary Report. My own analysis, conducted in December 2007, revealed that there were 691 archetypes within the NHS repository. Of these, 570 were archetypes for unique clinical concepts, with the remainder reflecting multiple versions of the same concept. In fact, for 90 unique concepts there were 207 archetypes that needed rationalisation – most of these had only two versions; however, one archetype was represented in five versions! We needed better processes!

Towards the end of 2007 a small team within Ocean commenced building an online tool, the Clinical Knowledge Manager, to:

  • function as a clinical knowledge repository for openEHR archetypes and templates and, later, terminology subsets;
  • manage the life-cycle of the registered artefacts, especially the archetype content – from draft, through team review, to published, deprecated and rejected – as well as terminology bindings and language translations; and
  • provide governance of the artefacts.

In July 2008 we started uploading archetypes to the openEHR CKM, including many of the best from the NHS pilot project. Over the following months we added archetypes and templates, recruited users, and started archetype reviews. All activity was voluntary – from both reviewers and editors. Progress has thus been slower than we would have liked, and somewhat episodic, but it provided early evidence that transparent, crowd-sourced verification of the archetypes was achievable.

In early 2010, Sweden's Clinical Knowledge Manager had its first archetypes uploaded.

In November 2010, a NEHTA instance of the CKM was launched, supporting Australia's development of Detailed Clinical Models for the national eHealth priorities. This is where most collaborative activity is occurring internationally at present.

In this context, I have pondered the issues around clinical knowledge governance for a number of years, and gradually our team has developed considerable insight into its requirements, solutions and thorny issues. To be perfectly honest, the more we delve into knowledge governance, the more complicated we realise it to be – the challenge and the journey continue; a lot is yet to be solved :)

It is relatively easy to identify the high-level processes in the development of clinical knowledge artefacts, each of which requires identification of quality criteria and measurable indicators to ensure that the final artefacts are fit for purpose and safe to use in our EHR systems. The process is similar for both archetypes and templates; plus the Requirements gathering and Analysis components are applicable to any single overarching project as well.

For archetypes:

The harder task is that, for each of these steps, there are multiple quality criteria to be determined, and for each criterion it will be necessary to assess and/or measure it through identifiable quality indicators.

Ideally a quality indicator is a measurement or fact about the clinical model. In some situations it will be necessary to include additional assessments manually performed by qualified experts.

If an indicator can be automatically derived from the Clinical Knowledge Manager (CKM), it ensures that up-to-date assessments of the models are instantly available as the models evolve (such as this Blood Pressure archetype example) and, more importantly, without reliance on manual human intervention. However, while assessments that must be performed by an expert human – for example, compliance with existing specifications or standards – add valuable depth and richness to the overall quality assessment, they also add a vulnerability: skilled human resources are needed not only to conduct the assessment but also to apply consistent methodologies while doing so, and this is much more difficult to sustain.

Assessment of whether the indicators actually satisfy the quality criteria should also, ideally, be as objective as possible; in reality it will more often be subjective and vary depending on the nature of the archetype concept itself. The process cannot be fully automated, nor can there be a single set of indicators or criteria that will determine the quality of every archetype. We need to ensure appropriate oversight of archetype development, ensuring that a quality process has actually been followed, and utilise quality indicators to determine whether the quality criteria have been met - on an archetype-by-archetype basis.
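
As a thought experiment - using made-up data rather than the CKM API - the sketch below shows the kind of indicator that could be derived automatically from an archetype's review history, in contrast to criteria such as standards compliance that still require expert human assessment.

```python
# A toy sketch (hypothetical data, not the CKM API) of quality indicators
# that could be derived automatically from an archetype's review history,
# alongside criteria that still need expert human assessment.

archetype_review_history = {
    "openEHR-EHR-OBSERVATION.blood_pressure.v1": {
        "review_rounds_completed": 4,
        "reviewers": {"clinician": 12, "informatician": 3, "engineer": 2},
        "translations": ["de", "sv", "pt-br"],
    }
}


def derived_indicators(history: dict) -> dict:
    """Indicators computable without human intervention."""
    return {
        "review rounds completed": history["review_rounds_completed"],
        "total reviewers": sum(history["reviewers"].values()),
        "stakeholder groups represented": len(history["reviewers"]),
        "translations available": len(history["translations"]),
    }


for archetype_id, history in archetype_review_history.items():
    print(archetype_id)
    for name, value in derived_indicators(history).items():
        print(f"  {name}: {value}")
    # Criteria such as compliance with external standards would still be
    # recorded as a manual expert assessment, not derived automatically.
```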

And it continues on...

Clinical modeling: academic vs practical

In a comment on the previous DCM post, Gordon Tomes sent a link to an interesting 2009 academic paper by Scheuermann, Ceusters & Smith - Toward an Ontological Treatment of Disease and Diagnosis [PDF]. There is a place for this 'pure', academic approach as a means to seek clarity in definitions, but a telling sentence in the paper for me is: "Thus we do not claim that ‘disease’ as here defined denotes what clinicians in every case refer to when they use the term ‘disease’. Rather, our definitions are designed to make clear that such clinical use is often ambiguous."

This is totally right, but despite the apparent ambiguity clinicians still manage :)

The approach for archetypes/DCMs is not to get tied up in these definitional knots unnecessarily, but to concentrate on consensus building around the structure of the data. Ontologies and definitions are key elements in designing and modelling archetypes, but the resulting models should not be based on academic theory alone; they should reflect existing clinical practice and processes. Our EHRs need to represent what clinicians actually need to do in order to provide care. Sometimes we need to be practical and pragmatic. For example, I once challenged a dissenter to build a Blood Pressure archetype based purely on SNOMED codes - the result was a nonsense, a model that he agreed was not usable by clinicians.

The approach to building the Problem/Diagnosis archetype has not been to try to differentiate these two terms pedantically. Many have tried to separate a 'Problem' from a 'Diagnosis' for years with little success - there is no point waiting for this to be resolved; it probably won't be. And even if we did make a theoretical decision on a definition for each, clinicians would still likely classify the way they always have, not the way the academics or standards-makers would like them to!

In the collaborative CKM reviews for the NEHTA Problem/Diagnosis archetype we have been able to observe that clinicians and other stakeholders have achieved some consensus around the structured data required to represent the concept of a problem, plus some extra optional data elements to represent formal diagnoses. After completing only four review rounds, the discussion is now focused on finessing the metadata and descriptions, not on debating the structure of the model.

Clinicians may still go round in circles arguing about what constitutes an actual problem vs a diagnosis - for example, some may classify heartburn as a problem, others as a diagnosis. No matter. By using a single model to represent both, we can ensure that however heartburn is labelled by any individual clinician, we can easily query and find the data in a health record; in addition, there is a single common model to be referenced by clinical decision support engines.
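
A toy Python sketch - plain dictionaries standing in for a real openEHR query, with invented element names - of why the single pattern helps: however a clinician labels the heartburn entry, one query over the shared Problem/Diagnosis structure finds it.

```python
# A simplified sketch (plain Python, standing in for a real openEHR query)
# of querying entries that all follow a single Problem/Diagnosis pattern,
# whether the clinician entered them as a 'problem' or a 'diagnosis'.

records = [
    # Entered by one clinician as a 'problem'...
    {"archetype": "EVALUATION.problem_diagnosis",
     "problem_diagnosis_name": "Heartburn"},
    # ...and by another as a formal 'diagnosis', using the same structure
    # with one of the optional diagnosis elements filled in.
    {"archetype": "EVALUATION.problem_diagnosis",
     "problem_diagnosis_name": "Heartburn",
     "date_of_diagnosis": "2011-01-05"},
]


def find_problem_or_diagnosis(entries, name_fragment):
    """One query serves both uses, because both follow the same pattern."""
    return [entry for entry in entries
            if entry["archetype"] == "EVALUATION.problem_diagnosis"
            and name_fragment.lower() in entry["problem_diagnosis_name"].lower()]


print(len(find_problem_or_diagnosis(records, "heartburn")))  # -> 2
```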

A small success, maybe.

Engaging Clinicians in Clinical Content (in Sarajevo)

Just browsing and found the link to our presentation for a paper the CKM team gave at the Medical Informatics Europe conference in Bosnia, 2009. Thought I'd share it here, in memory of an amazing conference and location:

MIE09: Engaging Clinicians in Clinical Content [PDF]

Our presentation:

[slideshare id=1958920&doc=engagingcliniciansinclinicalcontent-090906091020-phpapp02]

And the reference to Herding Cats is explained (a little) in the embedded video - actually an advertisement for EDS:

[youtube http://www.youtube.com/watch?v=Pk7yqlTMvp8&w=425&h=349]

I arrived after a 37-hour flight - Melbourne to London to Vienna to Sarajevo - to join my Ocean CKM team, Ian McNicoll and Sebastian Garde. We presented our paper and also ran an openEHR workshop.

The conference was held at the Holiday Inn in Sarajevo - the hotel in which all the international journalists were holed up during the war, on 'Sniper Alley'.

Despite it being 14 years after the war had ended, raw emotions were palpable many times during the conference. That a pan-European conference was being held in his beloved Sarajevo, under his auspices, was overwhelming for the Professor of Informatics, and he was often seen in tears!

From my hotel room window in the centre of the city I could see 6 cemeteries.

Walking through the inner city we came across many cemeteries: thousands of marble gravestones, the majority representing young men aged between 18 and 26.

Bullet holes were still very obvious on the walls of buildings.

Yet the city was vibrant, alive and welcoming.

I won't ever forget this visit.

Some photos of the experience: