Archetype quality I

Until recently, clinical content models such as archetypes have been regarded as a novelty: watched from the sidelines with interest by many, but not regarded as mainstream. However, now that they are increasingly being adopted by jurisdictions and used in real systems, modellers need to change their approach to include processes, methodologies and quality criteria that ensure the models are robust, credible and fit for purpose. Some work has been done on identifying quality criteria for clinical models, but there is no doubt that establishing quality criteria for clinical content models is still very much in its infancy:

  • There has been some slowly progressing work in ISO TC 215 - ISO 13972 Health Informatics: Detailed Clinical Models. Recently it has been split into two separate components, not yet publicly available:
    • Part 1: Quality processes regarding detailed clinical model development, governance, publishing and maintenance; and
    • Part 2: Quality attributes of detailed clinical models.

Most of the work on quality of clinical models has been based largely on theory, with few groups having practical experience in developing and managing collections of clinical models, other than in local implementations.

In 2007, Ocean Informatics participated in a significant pilot project. The recommendations were published in the NHS CFH Pilot Study 2007 Summary Report. My own analysis, conducted in December 2007, revealed that there were 691 archetypes within the NHS repository. Of these, 570 were archetypes for unique clinical concepts, with the remainder reflecting multiple versions of the same concept. In fact, for 90 unique concepts there were 207 archetypes that needed rationalisation – most of these had only two versions, but one archetype was represented in five versions! We needed better processes!

Towards the end of 2007 a small team within Ocean commenced building an online tool, the Clinical Knowledge Manager (CKM), to:

  • function as a clinical knowledge repository for openEHR archetypes and templates and, later, terminology subsets;
  • manage the life-cycle of registered artefacts, especially the archetype content – from draft, through team review, to published, deprecated or rejected – as well as terminology bindings and language translations; and
  • provide governance of the artefacts.

In July 2008 we started uploading archetypes to the openEHR CKM, including many of the best from the NHS pilot project. Over the following months we added archetypes and templates, recruited users, and started archetype reviews. All activity was voluntary – both from reviewers and editors. Progress has thus been slower than we would have liked and somewhat episodic, but it provided early evidence that a transparent, crowd-sourced verification of archetypes was achievable.

In early 2010, Sweden's Clinical Knowledge Manager had its first archetypes uploaded.

In November 2010, a NEHTA instance of the CKM was launched, supporting Australia's development of Detailed Clinical Models for the national eHealth priorities. This is where most collaborative activity is occurring internationally at present.

In this context, I have pondered the issues around clinical knowledge governance for a number of years, and gradually our team has developed considerable insight into clinical knowledge governance – the requirements, the solutions and the thorny issues. To be perfectly honest, the more we delve into knowledge governance, the more complicated we realise it to be – the challenge and the journey continue; a lot is yet to be solved :)

It is relatively easy to identify the high-level processes in the development of clinical knowledge artefacts, each of which requires identification of quality criteria and measurable indicators to ensure that the final artefacts are fit for purpose and safe to use in our EHR systems. The process is similar for both archetypes and templates, and the requirements-gathering and analysis components are equally applicable to any single overarching project.

For archetypes:

The harder task is that for each of these steps, there are multiple quality criteria that need to be determined, and for each criterion it will be necessary to be able to assess and/or measure them through identifiable quality indicators.

Ideally a quality indicator is a measurement or fact about the clinical model. In some situations it will be necessary to include additional assessments manually performed by qualified experts.

If an indicator can be automatically derived from the Clinical Knowledge Manager (CKM), up-to-date assessments of the models are instantly available as the models evolve (as in the Blood Pressure archetype example) and, more importantly, without reliance on manual human intervention. By contrast, assessments that do need to be made by an expert human – for example, compliance with existing specifications or standards – add valuable depth and richness to the overall quality assessment, but they also add a vulnerability: they depend on skilled human resources not only to conduct the assessment but to apply consistent methodologies while doing so, and these will be much more difficult to sustain.

Assessment of whether the indicators actually satisfy the quality criteria should also, ideally, be as objective as possible; in reality it will more often be subjective and will vary depending on the nature of the archetype concept itself. The process cannot be fully automated, nor can there be a single set of indicators or criteria that will determine the quality of every archetype. We need to ensure appropriate oversight of archetype development, ensuring that a quality process has actually been followed, and use quality indicators to determine whether the quality criteria have been met – on an archetype-by-archetype basis.
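As a rough illustration of what an automatically derived indicator could look like, here is a minimal sketch in Python. It assumes a simple dictionary export of an archetype's metadata; the field names, the toy Blood Pressure record and the indicators themselves are illustrative assumptions, not part of any real CKM API.

```python
# Minimal sketch: automatically derivable quality indicators for an archetype,
# assuming a simple dictionary export of its metadata (field names are hypothetical).

def quality_indicators(archetype: dict) -> dict:
    """Return measurable facts that could feed a quality assessment."""
    elements = archetype.get("elements", [])
    described = [e for e in elements if e.get("description")]
    return {
        "has_purpose": bool(archetype.get("purpose")),
        "element_count": len(elements),
        "described_element_ratio": (len(described) / len(elements)) if elements else 0.0,
        "translation_count": len(archetype.get("translations", [])),
        "completed_review_rounds": archetype.get("review_rounds", 0),
    }

# Example: a toy Blood Pressure-like archetype record.
bp = {
    "purpose": "To record the systemic arterial blood pressure of an individual.",
    "elements": [
        {"name": "Systolic", "description": "Peak systemic arterial blood pressure."},
        {"name": "Diastolic", "description": "Minimum systemic arterial blood pressure."},
        {"name": "Cuff size", "description": ""},
    ],
    "translations": ["de", "pt-br"],
    "review_rounds": 4,
}

print(quality_indicators(bp))
```

Indicators like these are simply measurable facts about a model; deciding whether they actually satisfy a quality criterion still requires the human judgement discussed above.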

And it continues on...

Clinical modeling: academic vs practical

In a comment on the previous DCM post, Gordon Tomes sent a link to an interesting 2009 academic paper by Scheuermann, Ceusters & Smith - Toward an Ontological Treatment of Disease and Diagnosis [PDF]. There is a place for this 'pure', academic approach as a means to seek clarity in definitions, but a telling sentence in the paper for me is: "Thus we do not claim that ‘disease’ as here defined denotes what clinicians in every case refer to when they use the term ‘disease’. Rather, our definitions are designed to make clear that such clinical use is often ambiguous."

This is totally right, but despite the apparent ambiguity clinicians still manage :)

The approach for archetypes/DCMs is not to get tied up in these definitional knots unnecessarily but to concentrate on consensus building around the structure of the data. Ontologies and definitions are key elements in designing and modelling archetypes, but the resulting models should not be based on academic theory alone; they should reflect existing clinical practice and processes. Our EHRs need to represent what clinicians actually need to do in order to provide care. Sometimes we need to be practical and pragmatic. For example, I once challenged a dissenter to build a Blood Pressure archetype based purely on SNOMED codes - the result was a nonsense, a model that he agreed was not usable by clinicians.

The approach to building the Problem/Diagnosis archetype has not been to try to differentiate these two terms pedantically. Many have tried to separate a 'Problem' from a 'Diagnosis' for years with little success - there is no point waiting for this to be resolved, because it probably won't be. And even if we did make a theoretical decision on a definition for each, clinicians would still likely classify the way they always have, not the way the academics or standards-makers would like them to!

In the collaborative CKM reviews of the NEHTA Problem/Diagnosis archetype we have observed that clinicians and other stakeholders have achieved some consensus around the structured data required to represent the concept of a problem, plus some extra optional data elements to represent formal diagnoses. After only four review rounds, the discussion is now focused on finessing the metadata and descriptions, not on debating the structure of the model.

Clinicians may still go round in circles arguing about what constitutes an actual problem versus a diagnosis - for example, some may classify heartburn as a problem, others as a diagnosis. No matter. By using a single model to represent both, we can ensure that however heartburn is labelled by an individual clinician, we can easily query and find the data in a health record; in addition, there is a single common model to be referenced by clinical decision support engines.
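To make the querying point concrete, here is a minimal sketch in Python, using invented record structures rather than a real openEHR API or query language: the query keys off the shared archetype identifier, not off whether an individual clinician labelled the entry a 'problem' or a 'diagnosis'.

```python
# Minimal sketch: querying a health record for Problem/Diagnosis entries.
# The record structure and archetype id below are illustrative assumptions.

record = [
    {"archetype_id": "openEHR-EHR-EVALUATION.problem_diagnosis.v1",
     "name": "Heartburn", "clinician_label": "problem"},
    {"archetype_id": "openEHR-EHR-EVALUATION.problem_diagnosis.v1",
     "name": "Gastro-oesophageal reflux disease", "clinician_label": "diagnosis"},
    {"archetype_id": "openEHR-EHR-OBSERVATION.blood_pressure.v1",
     "name": "Blood pressure"},
]

def find_problems_and_diagnoses(entries):
    """One model, one query - regardless of how each clinician labelled the entry."""
    return [e for e in entries
            if e["archetype_id"].startswith("openEHR-EHR-EVALUATION.problem_diagnosis")]

for entry in find_problems_and_diagnoses(record):
    print(entry["name"], "-", entry.get("clinician_label", "unlabelled"))
```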

A small success, maybe.

openEHR: interoperability or systems?

Thomas Beale (CTO of Ocean Informatics and chair of the Architecture Review Board of the openEHR Foundation) posted these two paragraphs as part of the background for his recent Woland's Cat post - The Null Flavour debate - part I. It is an important statement that I don't want to get lost amongst other discussion, so I've reposted it here:

An initial comment I will make is that there is a notion that openEHR is ‘about defining systems’ whereas HL7 ‘is about interoperability’. This is incorrect. openEHR is primarily about solving the information interoperability problem in health, and it addresses all information, regardless of whether it is inside a system or in a message. (It does define some reference model semantics specific to the notion of ‘storing information’, mainly around versioning and auditing, but this has nothing to do with the main interoperability emphasis.)

To see that openEHR is about generalised interoperability, all that is needed is to consider a lab archetype such as Lipid studies in the Clinical Knowledge Manager. This archetype defines a possible structure of a Lipids lab test result, in terms of basic information model primitives (Entry, History, Cluster, Element etc). In the openEHR approach, we use this same model as the formal definition of this kind of information, whether it is in a message travelling ‘between systems’, in a database, or on the screen within a ‘system’. This is one of the great benefits of openEHR: messages are not done differently from everything else. Neither is interoperability of data in messages between systems different from that of data between applications or other parts of a ‘system’.
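A minimal sketch of the idea in that quote, using plain Python rather than the openEHR reference model itself: a single shared content definition is used both to validate what a system stores and to serialise what it sends, so the message is not modelled separately from the record. All names and structures below are illustrative assumptions, not real ADL or a real openEHR API.

```python
import json

# Minimal sketch: one content definition reused for storage and for exchange.

LIPID_STUDIES = {
    "total_cholesterol": {"units": "mmol/l", "min": 0.0},
    "ldl_cholesterol":   {"units": "mmol/l", "min": 0.0},
    "hdl_cholesterol":   {"units": "mmol/l", "min": 0.0},
    "triglycerides":     {"units": "mmol/l", "min": 0.0},
}

def validate(result: dict) -> dict:
    """Check a result against the shared definition before storing it."""
    for name, value in result.items():
        spec = LIPID_STUDIES[name]                  # unknown elements raise KeyError
        assert value >= spec["min"], f"{name} out of range"
    return result

def to_message(result: dict) -> str:
    """Serialise the same validated structure for exchange between systems."""
    return json.dumps({"archetype": "lipid_studies", "data": validate(result)})

stored = validate({"total_cholesterol": 5.2, "hdl_cholesterol": 1.3})
print(to_message(stored))
```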

Engaging Clinicians in Clinical Content (in Sarajevo)

Just browsing and found the link to our presentation for a paper the CKM team gave at the Medical Informatics Europe conference in Bosnia, 2009. Thought I'd share it here, in memory of an amazing conference and location:

MIE09: Engaging Clinicians in Clinical Content [PDF]


And the reference to Herding Cats is explained (a little) in the embedded video - actually an advertisement for EDS:

http://www.youtube.com/watch?v=Pk7yqlTMvp8

I arrived after a 37-hour flight (Melbourne - London - Vienna - Sarajevo) to join my Ocean CKM team, Ian McNicoll and Sebastian Garde. We presented our paper and also ran an openEHR workshop.

The conference was held at the Holiday Inn in Sarajevo - the hotel in which all the international journalists were holed up during the war, on 'Sniper Alley'.

Despite it being 14 years since the war had ended, raw emotions were palpable many times during the conference. That a pan-European conference was being held in his beloved Sarajevo, under his auspices, was overwhelming for the Professor of Informatics, and he was often seen in tears!

From my hotel room window in the centre of the city I could see 6 cemeteries.

Walking through the inner city we came across many cemeteries - thousands of marble gravestones, the majority for young men aged between 18 and 26.

Bullet holes were still very obvious on the walls of buildings.

Yet the city was vibrant, alive and welcoming.

I won't ever forget this visit.


DCMs – clarifying the confusion

Detailed clinical models, commonly abbreviated to DCMs, have certainly become a buzz term in the health IT community in recent years. Many people are talking about them but, unfortunately, they are often referring to different things. The level of confusion is at least as large as the hype. I sincerely hope that this post helps to lift the veil of confusion just a little...

Anatomy of a Problem... a Diagnosis...

Problem or Diagnosis? Does it really matter?

Representation of the concept of a Problem or Diagnosis is a cornerstone of an electronic health record. Yet to date it seems to have been harder than expected to achieve consensus. Why is this so?

Games for Life?

"Anyone who sees a hurricane coming should warn others. I see a hurricane coming.

Over the next generation or two, ever larger numbers of people, hundreds of millions, will become immersed in virtual worlds and online games. While we are playing, things we used to do on the outside, in "reality," won't be happening anymore, or won't be happening in the same way. You can't pull millions of person-hours out of a society without creating an atmospheric-level event.

If it happens in a generation, I think the twenty-first century will see a social cataclysm larger than that caused by cars, radios, and TV, combined... The exodus of these people from the real world, from our normal daily life, will create a change in social climate that makes global warming look like a tempest in a teacup."

EDWARD CASTRONOVA, Exodus to the Virtual World (paraphrased in "Reality is Broken")

How do games work? Why are humans so drawn to games? What can they do for us in our real lives?

These are questions posed in just the first chapter of Jane McGonigal's recently published 'Reality Is Broken: Why Games Make Us Better and How They Can Change the World'.

On page 3...

"Games developers know better than anyone else how to inspire extreme effort and reward hard work. They know how to facilitate cooperation and collaboration at previously unimaginable scales. And they are continuously innovating new ways to motivate players to stick with harder challenges, for longer, and in much bigger groups. These crucial twenty-first-century skills can help all of us find new ways to make a deep and lasting impact on the world around us.

Game design isn't just a technological craft. It's a twenty-first-century way of thinking and leading, and gameplay isn't just a pastime. It's a twenty-first-century way of working together to accomplish real change."

And

"If we take everything game developers have learned about optimising human experience and organizing collaborative communities and apply it to real life... I foresee games that reduce our stress at work and dramatically increase our career satisfaction. I foresee games that fix our educational systems. I foresee games that treat depression, obesity, anxiety, and attention deficit disorder. I foresee games that help the elderly feel engaged and socially connected. I foresee games that raise rates of democratic participation. I foresee games that tackle global-scale problems like climate change and poverty. In short, I foresee games that augment our most essential human capabilities - to be happy, resilient, creative - and empower us to change the world in meaningful ways... Such games are already coming into existence."

"We need to build hybrid industries and unconventional partnerships, so that game researchers and game designers and game developers can work with engineers and architects and policy makers and executives of all kinds to harness the power of games.

Finally but most importantly, we all need to develop our core game competencies so we can take an active role in changing our lives and enabling the future."

I've observed family and friends of all ages engrossed (?obsessed) with game playing for as long as I can remember. I've even experienced it myself. Confession time: I was so engrossed in playing a game that I forgot to pick my kids up from school - they've never let me forget and that was over 15 years ago! However, it was a very powerful experience - absorption in a strategy game removing all sense of time or responsibility from my mind, complete focus on an alternate reality. I don't doubt the potential power of games...

For some time I've been pondering how to harness the gaming phenomenon to create positive outcomes and, in particular, its potential to improve health. I have been observing efforts such as Games for Health with interest. Now this book raises the possibility of the application of gaming to any/all facets of our lives. Where will it end? Is this a good direction?

Amusingly, my youngest son has proposed a new method to promote a fairer distribution of chores in our house - using Chore Wars to gain experience points, treasure etc as rewards for completing 'adventures' (aka chores). That's a great idea if all siblings are prepared to compete to win, but unfortunately for him his brothers quietly smile and are very happy to lose this particular game :) Close, but no cigar!

Anatomy of an Adverse Reaction

Discussion about how to represent some of the commonest clinical concepts in an electronic health record or message has been raging for years. There has been no clear consensus. One of those tricky ones - so ubiquitous that everyone wants to have an opinion - is how to create a computable specification for an adverse reaction. In the past few years I have been involved in much research and many discussions with colleagues around the world about how to represent an adverse reaction in a detailed clinical model. I'd like to share some of these learnings...

Making sense of 'Severity'

'Severity' is a pretty simple clinical concept, isn't it? I thought so too until I first sat down to create a single, re-usable archetype to represent 'Severity'. I soon discovered that I had significantly underestimated it - the challenge was greater than appeared at first glance.

The archetype development process is usually relatively straightforward - a brain dump of all related information into a mind map, followed by rearranging the mind map until sensible patterns emerge. They usually do, but not this time.

However, the more I investigated and researched, the more I could see that clinicians express severity in a variety of ways, sometimes mixing and 'munging' various concepts together in such a way that humans can easily make sense of it, but it is much harder to represent in a way that is both clinically sensible and computable.

The simple trio of 'Mild', 'Moderate' and 'Severe' is commonly used as a gradation to express the severity of a condition, injury or adverse event. Occasionally these are defined for a given condition or injury, but most often they are subjective, and there is a real concern that one clinician's 'mild' might be another's 'moderate'. Some also include intermediate terms, such as 'Mild-to-moderate' and 'Moderate-to-severe', but one has to wonder whether these subtler distinctions make the judgement even more unreliable, especially if we need to exchange information between systems.

Others add 'Trivial' – is this less than 'Mild'? By how much? Is there a true gradation here? And what of 'Extreme'? Still others add 'Life-threatening' and 'Death' – but are these really expressing severity? Or are they expressing clinical impact or even an outcome?

When exploring severity I found examples of severity expressed by clinicians in many ways. This list is by no means exhaustive:

  • Pain – expressed as intensity, often including a visual analogue scale as a means to record it, i.e. rated from 0-10
  • Burns – describing the percentage of body involved e.g. >80% burns
  • Burns – describing the depth of burn by degree e.g. first degree
  • Perineal tears – describing the extent and damage by degree e.g. second degree tear
  • Facial tic – expressed using frequency e.g. occasional through to unrelenting
  • Cancer – expressed as a grade, clinical or pathological
  • Rash – extent or percentage of a limb or torso covered
  • Minor or major
  • Mild, disabling, life-threatening
  • Relative severity – better, worse etc
  • Functional impact – ordinary activity, slight limitation, marked limitation, inability to perform any physical activity
  • Short- or long-term persistence
  • Acute or chronic

In practice, clinicians can work interchangeably with any of these expressions and make reasonable clinical sense of them. The challenge when creating archetypes, and other computable models, is to ensure that clinicians can express what they need in a health record, exchange that information safely with other clinicians, and allow for knowledge-based activities such as decision support.

In addition, there also needs to be a clear distinction between 'severity' and other, related qualifiers that provide additional context about 'seriousness' of the condition, injury or adverse reaction. These sometimes get thrown into the mix as well, including:

  • Clinical impact (or significance);
  • Clinical treatment required; and
  • Outcomes.

Clinical impact/significance might include:

  • None – no clinical effect observed
  • Insignificant – little noticeable clinical effect observed
  • Significant – obvious clinical effect observed
  • Life-threatening – life-threatening effect observed
  • Death – the individual died

Immediate clinical treatment required might include:

  • No treatment
  • Required clinician consultation
  • Required hospitalisation
  • Required Intensive Care Unit

Outcomes might include:

  • Recovered/resolved
  • Recovering/resolving
  • Not recovered/resolved
  • Fatal

Then there are those who recovered, or whose condition resolved, but with other sequelae - such as congenital anomalies in an unborn child...

Severity ain't simple.

Over time, after repeatedly returning to it while modelling various archetypes including Adverse Reaction and Problem/Diagnosis, I've come to the conclusion that it is not useful or helpful to model 'severity' as a single, re-usable archetype. It works better as a single qualifier element within each archetype – then we can bind it to terminology value sets that suit that specific archetype concept and the use-case context.
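As a sketch of that design choice (illustrative Python only, not a real archetype or terminology binding): the same 'Severity' element appears in each model, but each archetype binds it to its own value set, suited to that concept and use case.

```python
# Minimal sketch: 'Severity' as a qualifier element bound per archetype,
# rather than a single shared severity archetype. Value sets are illustrative.

SEVERITY_VALUE_SETS = {
    "adverse_reaction": ["Mild", "Moderate", "Severe", "Life-threatening"],
    "perineal_tear":    ["First degree", "Second degree", "Third degree", "Fourth degree"],
    "burn":             ["Superficial", "Partial thickness", "Full thickness"],
}

def record_severity(archetype: str, value: str) -> dict:
    """Accept a severity value only if it belongs to that archetype's own value set."""
    allowed = SEVERITY_VALUE_SETS[archetype]
    if value not in allowed:
        raise ValueError(f"'{value}' is not in the {archetype} severity value set {allowed}")
    return {"archetype": archetype, "element": "Severity", "value": value}

print(record_severity("perineal_tear", "Second degree"))
```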

When a clinical knowledge pattern is easily identified, creating an archetype is easy - the archetype almost writes itself! So over time I've learned not to try to force the modelling. Some archetypes 'work'; some, like this one, just don't.

Anatomy of an archetype

With this blog I want to establish a simple baseline statement or overview about openEHR archetypes, aggregated from other posts and publications – a reference point, if you like – from which we can journey further and in more detail into the issues around clinical modelling using specific archetypes. Archetypes are a strong basis for data liquidity: create the archetype once, agree it through peer review, and re-use it when and where required – the foundation for a 'universal health record'.

Definition:

Formal definition:

An archetype is a computable expression of a domain content model in the form of structured constraint statements, based on a reference (information) model. openEHR archetypes are based on the openEHR reference model. Archetypes are all expressed in the same formalism. In general, they are defined for wide re-use; however, they can be specialised to include local particularities. They can accommodate any number of natural languages and terminologies. [Archetype Definitions and Principles]

Clinician's 'lay' definition:

An archetype is a structured, computable specification for one single, discrete clinical concept – much simpler!
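To illustrate the formal definition above – constraint statements expressed against a reference model – here is a deliberately simplified sketch in Python. Real archetypes are written in ADL against the openEHR reference model; the classes and names below are assumptions for illustration only.

```python
# Deliberately simplified sketch of 'an archetype is constraints on a reference model'.
# Real archetypes are written in ADL against the openEHR RM; this is illustrative only.

from dataclasses import dataclass

@dataclass
class Element:            # stand-in for a reference-model ELEMENT
    name: str
    value: float
    units: str

@dataclass
class ElementConstraint:  # stand-in for an archetype constraint on that ELEMENT
    name: str
    units: str
    lower: float
    upper: float

    def accepts(self, element: Element) -> bool:
        return (element.name == self.name
                and element.units == self.units
                and self.lower <= element.value <= self.upper)

systolic_constraint = ElementConstraint("Systolic", units="mm[Hg]", lower=0, upper=1000)
print(systolic_constraint.accepts(Element("Systolic", 120, "mm[Hg]")))   # True
print(systolic_constraint.accepts(Element("Systolic", 120, "kPa")))      # False
```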

Each archetype is a rich health data specification, defined and agreed by clinicians themselves to ensure that each model is 'fit for clinical purpose'. Collectively, these archetypes can create an electronic health record 'lingua franca', or an example of the PCAST report's 'Universal Exchange Language' (although archetypes are not limited only to the exchange of health information).

Archetypes can be used to model a wide range of clinical concepts.

Purpose

Archetypes are:

  • Potential 'agents for change' – allowing knowledge-driven EHRs as an orthogonal approach to EHR development.
  • The basis for a coherent and consistent approach for the whole continuum of clinical care and related activities:
    • Recording, storing and querying health information in electronic health records;
    • Exchanging health information - the broader the agreement & governance, the greater the potential for interoperability;
    • Data aggregation;
    • Knowledge-based activities; and
    • Comparative data analysis.

Using archetypes requires:

  • A change of mindset – no longer silos of data; no longer message and document driven.
  • Upfront planning & coordination in development of archetypes, but not necessarily more time
  • CLINICIAN LEADERSHIP & ENGAGEMENT
    • To ensure the quality of clinical data in our EHRs
    • To warrant the clinical data is safe & 'fit for purpose'

Design

Each archetype is designed:

  • As a maximal data set – effectively everything one can think of about a clinical concept, in all situations, by the consumer and any and all providers. This design intent is pragmatically constrained to be sensible in some situations; however, good archetype design will always ensure that the full model can be achieved through nesting of additional archetypes to create the maximal dataset/universal use-case ideal.
  • For the 'universal use-case' – for re-use in any and all scenarios, from use at home, to primary care, community care, hospital care, secondary use and for research.

Each archetype needs to represent the variety of data required for all aspects of clinical care and related activities, including:

  • The ability to capture both free text and structured data;
  • Normal statements such as "Nil significant" or "NAD";
  • Graphable data;
  • Images – e.g. a homunculus, or a surgical diagram;
  • Multimedia, including photos, video, audio or an ECG waveform;
  • Questionnaires, checklists etc.

Think about how the clinician may record the data: different clinicians will need differing approaches to recording their health information and the models need to allow for this clinical diversity. Some will prefer to use more free text and others more structure; some need more detail than others; we need to capture current requirements for recording and exchange while keeping in mind what might be best practice in the near and distant future.

The best designers, without a doubt, have a clinical background. I have seen clinicians and non-clinicians given identical clinical content, and the archetypes produced are surprisingly different. Clinician modellers definitely model their domain better than those who are not familiar with clinical practice. They are perhaps better able to envision the fractal nature of medicine and take that into account when creating archetypes, especially in the complex areas which require nesting of archetypes within each other to capture the depth and richness required in the models.

Integration with terminologies such as SNOMED, ICD or LOINC is strategic and important – the structure of an archetype is not enough to represent clinical information adequately, and similarly terminology alone is inadequate. However, the combination of terminology within an archetype structure, by naming elements or providing value sets, is semantically very powerful. There is still considerable work to be done to determine how to represent the information overlap – that which can be expressed either in the archetype structure itself or via terminology. This remains a 'grey zone' and is a pressing area for collaboration and research.
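A minimal sketch of what 'terminology within an archetype structure' can mean in practice: binding an element name to a concept code and constraining its value to a small coded value set. This is illustrative Python, not a real binding syntax, and the codes should be treated as placeholders.

```python
# Minimal sketch: combining structure (an archetype element) with terminology
# (a name binding plus a value set). Codes and identifiers are placeholders only.

element = {
    "name": "Severity",
    # hypothetical binding of the element name to a terminology concept
    "name_binding": {"terminology": "SNOMED-CT", "code": "246112005"},
    # hypothetical internal value set, each value bound to a code
    "value_set": {
        "Mild":     {"terminology": "SNOMED-CT", "code": "255604002"},
        "Moderate": {"terminology": "SNOMED-CT", "code": "6736007"},
        "Severe":   {"terminology": "SNOMED-CT", "code": "24484000"},
    },
}

def code_for(value: str) -> str:
    """Resolve a recorded value to its bound terminology code."""
    return element["value_set"][value]["code"]

print(element["name_binding"]["code"], code_for("Moderate"))
```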

Archetype classes – supporting clinicians' processes

There are four main categories of archetypes that are useful to understand – each corresponding to classes in the openEHR Reference Model.

1. Compositions – which correspond to commonly used clinical documents, such as 'antenatal visit' or 'care plan'.

2. Sections – these are effectively used to assist with human navigation within EHRs and correspond to document headings, for example 'antenatal examination' or 'summary'.

3.  Entries – these are the most common and are fundamental building blocks of EHRs. There are four main types of Entry archetypes:

  • Observations – recording measurable or observable data e.g. blood pressure, symptoms or weight;
  • Evaluations – recording clinically interpreted findings e.g. adverse event, diagnosis or assessment of risk;
  • Instructions – recording the initiation of a workflow process, such as a medication order or referral;
  • Actions – recording clinical activities e.g. procedure or medication administration. Actions complement the instruction and can record the ensuing state of the instruction, such as 'completed' or 'cancelled'.

4. Clusters – re-usable fragments of structured data that are designed to be nested within Entry archetypes, for example to represent an anatomical location or a device.

Lifecycle & Governance

The Clinical Knowledge Manager (CKM) – www.openEHR.org/knowledge – is an international, online clinical knowledge repository under the auspices of the openEHR Foundation that provides:

  • Access to a library of cohesive archetypes and related knowledge artefacts;
  • Peer review, life cycle management & publication process via a Web 2.0 approach for clinical content, terminology binding, terminology reference set binding and translations;
  • Clinical knowledge governance, underpinned by a digital asset management system, providing:
    • Provenance, audit trails, validation checking
    • Release sets for implementation

Archetypes are freely available under a Creative Commons license.

Use

The need for EHRs to be able to represent clinical diversity is seriously underestimated. Existing EHR systems corral clinicians into particular ways of recording, but for safety we need more diverse ways for clinicians to record what they need for patient care, and to ensure that what is exchanged is appropriate for the patient and their individual circumstances. A 'one size fits all' document approach to emergency summaries or messages can be potentially unsafe. The elderly, pregnancy, chronic disease and paediatrics are extremely common examples where specific additional requirements are needed in summary data sets.

Templates are computable specifications for a specific clinical scenario or use-case. The openEHR paradigm takes the governed and agreed archetypes from CKM and creates clinically useful specifications that can be implemented in systems by:

  • Aggregating the necessary archetypes together; and
  • Constraining the maximal dataset archetypes so that only relevant data elements are active.

Examples include all clinical content required for a specific Message, Document, Clinical consultation or Report e.g. a histopathology report, a referral, an antenatal visit or a discharge summary.
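Here is a rough sketch of those two steps – aggregate, then constrain – using plain Python structures rather than real openEHR templates; the archetype and element names are illustrative assumptions. The governed archetypes stay maximal; the template is where the use-case-specific narrowing happens.

```python
# Minimal sketch: a template aggregates governed archetypes and constrains
# their maximal data sets for one use case. Names below are illustrative.

ARCHETYPE_LIBRARY = {
    "blood_pressure":    ["Systolic", "Diastolic", "Cuff size", "Position", "Exertion"],
    "body_weight":       ["Weight", "State of dress", "Comment"],
    "problem_diagnosis": ["Problem/Diagnosis name", "Date of onset", "Severity", "Comment"],
}

def build_template(name: str, selection: dict) -> dict:
    """Aggregate archetypes and keep only the elements relevant to this use case."""
    template = {"name": name, "content": {}}
    for archetype_id, wanted in selection.items():
        available = ARCHETYPE_LIBRARY[archetype_id]
        template["content"][archetype_id] = [e for e in available if e in wanted]
    return template

antenatal_visit = build_template("Antenatal visit", {
    "blood_pressure": ["Systolic", "Diastolic", "Position"],
    "body_weight":    ["Weight"],
})
print(antenatal_visit)
```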

In this way we can, in colloquial terms, 'have our cake and eat it too': CKM ensures tight clinical knowledge governance, while templates allow for the expression of clinical diversity.

Benefits

Archetypes provide a 'glide path' to knowledge-level interoperability of health information. They can:

  • Bootstrap new application development – a source of agreed clinical content, avoid re-inventing the wheel for each vendor, organisation or project;
  • Provide a forward 'roadmap' for existing applications – supporting gradual transition from proprietary silos to common data representations, perhaps via mapping to common messages as an interim step;
  • Support data integration – the means to migrate valuable silos of legacy data to a common, re-usable representation;
  • Enable data aggregation & comparative analysis – the basis for ensuring 'apples' are 'apples', not 'oranges';
  • Simplify messaging – keep message content consistent with EHR content;
  • Enable knowledge-based activities eg Clinical Decision Support Systems – consistent & coherent; create once, re-use in all systems.

Power drills, Cardiologists and collaborative consumption

This week I watched Rachel Botsman's TEDxSydney talk: Collaborative Consumption. Rachel's premise is that we're "wired to share", and I particularly liked her illustration at the 10 minute, 30 second mark...

ASSUMPTION #1: Most people own a power drill.

ASSUMPTION #2: Most power drills are used for a total of 12-13 minutes in their entire lifetime (seems not unreasonable).

NEED: We actually need a hole, not a drill!

CONCLUSION: Either rent a drill from someone else, or rent yours to everybody else.

WHAT IS THE END GOAL?: Share the resources better!


While visiting Brazil last year, I learned that in São Paulo, if you are referred to a Cardiologist you don't get to choose which one - you just get sent to one. That doesn't sit well with me, especially as a clinician; I'm pretty careful about whom I entrust with my care, or my family's care.

Yet apparently the waiting time is down to trivial time frames. People are actually getting treated. That is significant!

Apparently resources are being used better, simply because of a central booking system and an algorithm matching clinician with patient and location.

When you start to add in some of the benefits from eHealth such as potential shared EHRs, this starts to make more sense. I will still struggle with the lack of choice, but if patients are being seen more efficiently... then maybe it is a good, or even better, thing.

Stop for a moment and consider:

If we want health reform...

If we want more efficient use of resources for our health $$$...

... then we need to think of how to better use Health IT to support the notion of collaborative health consumption - both of resources & information. How can we better match available resources with need? How can we enable consumer choice at the same time?

There's a tension there - I can foresee many issues, but also opportunities.

I have more questions than answers.

Very interested in your opinion.

Is an incremental approach to EHRs enough?

Attending the International Council meetings on the first day of the recent Sydney HL7 meeting, the resounding theme was that of many countries focussed on messages, documents and terminology as the solution for our health IT future, our EHRs, and health interoperability. Nineteen countries presented and 18 repeated this theme. The nineteenth HL7 affiliate, the Netherlands, stood out in splendid isolation by declaring an additional focus on the development of clinical content.

Incremental EHRs?

This common message/document/terminology approach to EHRs is largely for historical reasons - safe, solid and incremental innovation; building on what has been proven successful before. It is not unique to health IT. It is not unique to HL7 either - it was just very obvious on the day.

In fact you can see this same phenomenon in many places in everyday life - the classic example being the manufacturing of disposable shavers. First a one-blade razor, then two blades, then three... But how long can this go on for? Is seven blades a reasonable thing? Eight? Will infinitely more blades make a difference to the shaving outcome or experience? When will blade makers stop and rethink their approach to innovation?

Certainly messages, documents and terminology are approaches that often commenced as relatively isolated islands of work, had some successes, progressed, and are now gradually being drawn together to create opportunities for some interoperability of health information. And don't get me wrong, there are absolutely some successes occurring. However, three questions linger in my mind about this approach:

  1. Is it enough?
  2. Is it sustainable?
  3. Will it achieve interoperable systems?

We all hear the rhetoric that if we can interconnect financial systems, then we can do it for health. We've tried this incremental approach for over 30 years and made significant progress but we definitely haven't cracked it yet.

Messages

Paul Roemer used a diagram in a recent blog post to illustrate his view of messaging exchanges. I think the image is powerful, conveying both the complexity and the chaos that an n-to-n messaging dependency will likely entail. The HIE approach might help, but ultimately a national approach to messaging will result in multiple images like this trying to connect to each other.
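The scale problem is easy to quantify: with point-to-point messaging every pair of systems needs its own negotiated interface, so the number of interfaces grows roughly with the square of the number of systems. A quick illustration (the system counts are arbitrary):

```python
# Point-to-point interfaces needed for n systems to exchange with each other:
# n * (n - 1) / 2 bidirectional links (double it if each direction is negotiated separately).

def point_to_point_interfaces(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 20, 100):
    print(f"{n} systems -> {point_to_point_interfaces(n)} pairwise interfaces")
# 5 systems -> 10, 20 systems -> 190, 100 systems -> 4950
```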

This is further complicated by messages often taking up to a year to negotiate between sender and receiver, or potentially longer to pass through the national or international standards processes; even then, the resulting 'standard' is often subjected to individual tweaks and updates at implementation in real-world systems where the 'one-size-fits-all' message doesn't meet the end users' requirements.

In Australia the 'standard' pathology HL7 v2.x message is anything but standard, with each clinical system vendor having to support multiple slightly different versions of the 'standard'. The big question is whether this is sustainable into the future.

In the US, the clinical content payload for the Direct project is yet to be determined and defined - "The project focuses on the technical standards and services necessary to securely transport content from point A to point B, and does not specify the actual content exchanged." This rings alarm bells for me.

I've been told that at one relatively large hospital system in the US, they require 30 permanent staff just to maintain their messages...! Whoa!!!

Messages and message networks try to be a simple incremental solution to interoperability. But are they really that simple? Are they sustainable? Locally, yes. Regionally, yes probably. Nationally? Internationally???

<My head hurts>

Documents

CDA documents featured prominently on the HL7 agenda, with the current emphasis on simple documents; increasingly we will see the welcome inclusion of more detailed, structured clinical content. Currently the content is expressed as HL7 v3 templates, but how will that content be standardised so as to support semantic interoperability?

Werner Ceusters describes semantic interoperability as:

Two information systems are semantically interoperable if and only if each can carry out the tasks for which it was designed using data and information taken from the other as seamlessly as using its own data and information.

Without working towards standardising clinical content we run the risk of our EHRs being little more than a filing cabinet full of unstructured documents that are human readable but not computer processable.

Terminology

We all need terminology, no matter which approach. That is an absolute given.

However the stand-out question that impacts all approaches is how to best 'harness' the terminology...

Interestingly the openEHR Board and the IHTSDO Management Board (SNOMED CT) have been talking about joining forces. An announcement on the openEHR site, in December 2010, states:

"At its October meeting in Toronto, the General Assembly of the IHTSDO received and discussed a proposal, submitted by its Management Board, to support, develop and maintain the IP in openEHR, within a broader framework of IHTSDO governance for clinical content of the electronic health record.

The openEHR Foundation Board has now heard from the IHTSDO Management Board, saying that, whilst the objective of the proposal was considered by the GA to be within the scope of the organisation and that it represented a pressing issue for their governments, it was unable to reach consensus that going forward with openEHR in this way is the right choice, at this time."

This suggests that the IHTSDO Management Board identified significant value in combining the structure of openEHR archetypes with SNOMED, enough to propose a stronger relationship. It will be interesting to see if these discussions continue to progress.

My view - this would be a game-changer. A common approach to using archetypes and SNOMED together is a potentially very powerful semantic combination.

An orthogonal approach - knowledge-driven EHRs

By contrast, the orthogonal approach taken by ISO 13606 and openEHR starts with computable, standardised clinical data definitions at the core - representing the clinical content within an electronic health record. The data definitions, comprising archetypes plus terminology, are the key. Messages and documents are derived from these content specifications and will therefore be internally consistent with the EHR data from which they were generated. If received into an archetype-enabled system they can be incorporated, as described by Werner's definition above, as though generated in the receiving system - i.e. semantic interoperability, with no data transformation required. The openEHR communities of clinicians, informaticians and engineers are collaborating to agree and standardise these archetypes as a common starting point.

Working from standard content definitions will potentially make the development of documents and messages orders of magnitude simpler. If archetypes have been agreed and published via the CKM collaborative process, then engineers will be able to utilise these as building blocks for the creation of messages and documents for specific use cases, and for a multitude of other technical outputs.

The way forward?

Returning to the multi-blade EHR idea...

When will we stop, regroup and assess the merit of continuing as we are? Who will draw a line in the sand?

Difficult to answer.

Maybe never, maybe no-one.

openEHR/ISO 13606 may not be the right or final answer, but it does provide an alternative and orthogonal approach that has merit and is worth consideration.

Hopefully some of the outcomes and proposed discussions from the recent HL7 meeting in Sydney will also contribute to clarifying a way forward.

Perhaps, even yet, we will devise a truly innovative approach to solving the difficulties of developing EHRs.

Adventures of a clinician in HL7!

I've just survived my first HL7 meeting, although amused colleagues tell me that I may not fully recover! ‘Clinician newbie’ to HL7 though I was, for my sins I am becoming increasingly drawn into work within other standards organisations, and my clinical work is no longer with patients but in working with clinicians to develop clinical content models for use in electronic health records. I have some experience of the world of health informatics.

Held relatively nearby in Sydney, it was too good a chance to miss attending an Australian HL7 meeting, and it was an ‘interesting’ experience. The meeting was certainly very casual with a plethora of geeky T-shirts and pony-tails – definitely not the norm at the ISO meetings I’m more familiar with. Some defied this stereotype and still wore their business suits, despite jet-lag and it being a weekend – there are definitely certain individuals that I’ve met over the years who I can only believe must paint the fence in a suit... I particularly remember a certain medical registrar that I worked with many years ago, who always turned up to a resuscitation in Emergency in the middle of the night with a tie perfectly knotted – go figure! But I digress...

I’m told that as the meeting was held outside the US, there was a different flavour of attendee – certainly many from Australia and New Zealand took the opportunity to attend for the first time. I gather many ‘regulars’ didn’t attend, and so many of the clinically-related working groups did not meet at all, which was rather disappointing. I spent time in the Patient Care working group and attended the Clinical Interoperability Council. Unfortunately I’m still rather clueless about the remit of the CIC – one for clarification in the future perhaps – but clinician-driven approaches to EHR development are a critical way forward, in my opinion.

There was certainly an emphasis on education at this meeting, and it appeared to be very successful with many first-time attendees getting involved, including myself. I attended the Introductory and Advanced tutorials on CDA.

Some highlights:

  • Without a doubt the absolute highlight was non-work related - the evening ‘networking reception’ held during a sunset cruise on Sydney Harbour. Rather spectacularly, the organising committee somehow arranged for some rather large yachts to race breathtakingly around us just before sunset. Brilliant!
  • Sharing a beer, or three, with Keith Boone. I was chuffed when he blogged that he was coming to meet me! And I’m pleased to report that I seemed to have some impact on him after showing some of our collaborative work happening on the openEHR CKM.
  • A refreshing open-mindedness towards our work in openEHR. I particularly like the symmetry – I attended Bob Dolin’s Advanced CDA tutorial, and Bob attended our openEHR sessions! We do have potential to learn from each other.

The outcomes:

I've definitely been encouraged by some of the HL7 meeting outcomes with respect to openEHR. There was definitely a different attitude towards exploring openEHR/HL7 collaboration:

  • A DCM feasibility demonstration project has been proposed - with input from the Patient Care, Models & Methodology and Templates groups.
    • Start with archetypes as the base
    • Establish adornments that map these to the V3 Ontology (Structured Vocabulary and RIM)
    • Create tools that then consume these to produce useful HL7-V3 artifacts (templates, or such)
  • An afternoon was spent discussing openEHR & RIMBAA - focussing on the commonalities between openEHR and HL7 RIMBAA implementation issues
  • The opportunity to provide tutorials on openEHR within a formal HL7 meeting – previous attitudes have been more confrontational ('which approach will prevail?') than collaborative ('how can we learn from each other?'). The introductory session in the morning focused on background and clinical knowledge management; the afternoon had a range of speakers with expertise in applying openEHR/ISO 13606 in the HL7 environment.

For the future:

  • I'd love to have a chance to engage with the Clinical Interoperability Council - to explore how we can collaborate across technical approaches so that at least as clinicians we can ensure that the clinical content in our desktop EHRs is consistently represented, high quality and 'fit for purpose'. The DCM feasibility project outcome will heavily influence if and when this could productively occur.
  • As clinicians we have to ensure our access and input to these technical processes, otherwise we won't end up with systems that support us to provide care to our patients. And I challenge the standards bodies to 'demystify' the technical - to remove the technical barriers that prevent grassroots clinician input and restrict participation to those very few who can bridge the world of software engineering with clinical practice.

Other posts from the meeting:

  • Rene Spronk: Ringholm - HL7 and openEHR are cooperating (finally)
  • Keith Boone (@motorcycle_guy):
    • Triplets
      • @motorcycle_guy: @omowizard An archetype, a DCM and a template walk into an #HL7WGM event, stroll up to the bar ... and the bartender says...
      • @omowizard: @motorcycle_guy ...identical triplets I presume! #HL7WGM
    • Convergence - regarding templates, detailed clinical models and archetypes

Don’t re-invent the (clinical content) wheel...

It was with great interest that I read about the recommendation for a universal exchange language in the recently released US report to the President: REALIZING THE FULL POTENTIAL OF HEALTH INFORMATION TECHNOLOGY TO IMPROVE HEALTHCARE FOR AMERICANS: THE PATH FORWARD. I had asked the Direct project about the existence of a national plan for standardising clinical content only recently... It appeared that here was a plan after all.

So, to the report. The approach and benefits proposed started well...

The best way to achieve a national health IT ecosystem is to ensure that all electronic health systems can exchange data in a universal exchange language. The systems themselves could be designed in any manner desired — they could accommodate legacy systems that prevail or new recordkeeping systems and formats. The only requirement would be that the systems be able to send and receive data in the universal exchange language. (p41)

I have previously blogged about a universal health record underpinned by an application independent library of clinical content definitions, so the intent and benefits are well aligned with my preferred approach.

But then alarm bells started to ring....

Because of its multiple advantages, we advocate a universal exchange mechanism for health IT that is based on tagged data elements in an extensible markup language. If there were another equally good solution, it should also be considered; we have collectively been unable to think of one. (p43)

Issue #1: Isn't it more appropriate for step one to identify the need for standardised clinical content as a policy, rather than specify the format up front? Isn't that really the domain of health informatics experts as part of a subsequent work plan? I feel like we've skipped a couple of steps in the decision-making process. And are they really advocating the creation of this metadata-tagged XML from a zero starting point?

Issue #2: The last 9 words of that paragraph, "...we have collectively been unable to think of one." I'm glad that they are still open to equally good solutions being considered as indeed there are many ways that individuals, groups and organisations are exploring how to standardise clinical content definitions as the basis for a universal exchange mechanism.

In ISO TC 215, the International Standards Organisation's Technical Committee for Health Informatics, there is a new work item, known as ISO 13972 - Quality criteria for detailed clinical models, which has been evolving for at least two years although it has yet to attain committee draft status. This work item is targeting a new international standard setting out quality criteria for the development of detailed clinical models - all clinical models, pick your flavour! In the world of international standards it has been recognised for years that, with the plethora of different approaches to developing clinical models for EHRs, there is a need for criteria to support quality in their development. This work is being led by modellers from the Netherlands, with experts participating from the Australian, Danish, German, Swedish, US and Canadian standards organisations. Creating clinical content is definitely not a new field of endeavour by the time it enters the international standards arena.

So, I am extremely surprised that this expert PCAST group have not been able to 'collectively think' of an existing alternative.

In my last blog - Clinical Knowledge Governance in a Web 2.0 world – I pointed to a number of approaches to standardised clinical content to support health information exchange.

1. In the US – including, but by no means limited to:

  • the HL7 standards organisation - where my UK colleague, Charlie McKay, informs me that there are more than 20 different approaches to clinical content development. Keith Boone (@motorcycle_guy) has posted his response to the PCAST report from an HL7 point of view - The Language of HealthIT;
  • Stan Huff's group at Intermountain Health in Utah have had extensive experience in defining standardised clinical content across all of Intermountain's systems – they are leading experts in this domain; and
  • I understand Don Mon and his team from AHIMA have also been working in this area.

2. In Europe, and Australia:

In addition, a few more points...

Firstly, the focus of the PCAST report is still only on data exchange, not on ensuring a sound foundation for a person-centric electronic health record. I'll say it again... get the data right, and then the data will be able to be re-used, to multitask, to be liquid, flowing to where it needs to be. It will become the solid foundation on which to build lifelong health records, simpler health information exchange, data integration and aggregation, research, reporting and knowledge-based activities. By focusing on exchange alone, you will hopefully be able to exchange well - and the rest will be considerably more uncertain.

Secondly, the proposed variant of XML is described as a 'straightforward' and 'superior' solution (p44), with the assumption that it will be scalable, protected by encryption, and that data element access services will be enough to support the health information exchange required. By contrast, HL7, ISO/CEN 13606 and openEHR have taken decades to develop and refine underlying reference models to ensure that there is an unambiguous, consistent and secure way to represent personal health information – so you know who created the data, who is the subject of care, what the data means, what access rules apply, and so on. In the openEHR environment, the specification authors developed the Archetype Definition Language (ADL) – now part of the ISO 13606 standard – for this purpose, because alternatives such as plain XML were not robust enough to represent health information. A 'straightforward' XML approach has a strong possibility of failure without a reference model underpinning it.
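To illustrate the gap in a deliberately simplified way (invented tags and fields, not any real PCAST, CDA or openEHR format): a bare tagged data element carries a value, while a reference-model-based representation also carries the contextual semantics listed above, such as who recorded the data, about whom, and when.

```python
# Deliberately simplified contrast (invented structures, not a real standard):
# a bare tagged element versus the same value wrapped in reference-model context.

bare_element = "<systolic unit='mm[Hg]'>120</systolic>"

rm_wrapped_entry = {
    "subject_of_care": "patient-123",          # who the data is about
    "composer": "dr-jones",                    # who recorded it
    "time_committed": "2011-02-01T10:30:00+10:00",
    "version": 1,                              # versioning/audit context
    "archetype_id": "blood_pressure",          # what the structure means
    "data": {"Systolic": {"magnitude": 120, "units": "mm[Hg]"}},
}

# Both carry the number 120; only one tells a receiving system whose reading it is,
# who is accountable for it, and which agreed model defines its meaning.
print(bare_element)
print(rm_wrapped_entry["composer"], rm_wrapped_entry["data"]["Systolic"]["magnitude"])
```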

And finally, there is the area of clinical knowledge governance itself. Health is dynamic, complex and diverse. The work required to represent healthcare as computable clinical content definitions or specifications is huge – don't underestimate the sheer volume of work that will be required. It is not realistic to expect a 'rapid mapping' of existing proprietary data structures into tagged data elements. Who will decide the clinical content in the models? If there are over 7000 clinical vendors in the US, which will be 'the source' or sources? Which are 'correct' or 'authoritative'? What methodology will be used to create the models? What level of granularity for each clinical element? How will they be aggregated together to represent clinical documents or events, and constrained to be useful for the clinical purpose? I have a million more questions...

Once the information models are defined, there will be a need for them to be validated before they can become the basis for a standardised or national clinical content library – suitable for consumers, clinicians, organisations, vendors, researchers and jurisdictions. A requirement will be recognised for life-cycle management and publication of these models, roadmaps for legacy data to migrate towards, and harmonise with, the new national health information 'source of truth', plus ongoing maintenance and governance.

Eric Browne stated in his recent blog, Recasting e-Health in the USA:

The work in Sweden, the UK, Singapore and even Australia, based on openEHR or ISO 13606 archetypes (i.e. implementable renditions of Detailed Clinical Models) is far more advanced and promising than that offered by the PCAST approach.

openEHR, which is my interest, has an approach to defining, agreeing and governing clinical content models for electronic health records, known as archetypes. It has taken more than 18 years to develop the openEHR technical specifications, and the last 10 years to achieve its current approach and position in terms of clinical modelling. It is gaining traction, albeit with a modest volunteer community, especially now that it has a collaborative portal, known as the Clinical Knowledge Manager, to support sharing of models, review of clinical content, translation, terminology binding, and model governance.

Standardising health information definitions for health records or exchange is not a trivial task. Learn from what has already been achieved – all shapes, flavours and doctrines. Whatever you do, don't reinvent the wheel and create yet another universal language!

Clinical Knowledge Governance in a Web2.0 world

Establishing and maintaining the quality of clinical knowledge is clearly the domain of the expert clinicians themselves. This is a broadly accepted principle for management and governance of traditional clinical knowledge artefacts. However, this assumption needs re-evaluation when we need to establish the quality, safety and ‘fitness for purpose’ of the computable clinical knowledge artefacts that populate Electronic Health Record (EHR) systems. Clinical knowledge has traditionally been created and shared through formal publication and peer-review processes adjudicated by committees of clinical experts. Those expert committees have been appointed through a credentialing process and have had jurisdiction and oversight over the entire publishable content – ‘the buck stops here’. Before the rise of the internet, face-to-face meetings were where most of the committee work was done, and the process was most often slow and expensive but delivered good-quality publications. The opportunity cost to each participating clinician has been high, with recurring interruptions to their clinical activities. Revision of those publications at a later date repeats this process, taking considerable time, money and resources.

Certainly in recent times, there have been more electronic tools to support these processes – email, teleconferences and videoconferences have improved the logistics of the process, but essentially the process remains unchanged.

Given the increasing traction of electronic health records, there is a parallel movement to develop and share computable clinical content definitions that can be created, published and implemented by: multiple clinical disciplines; generalists and specialists; primary, secondary and tertiary care organisations; population health planning; clinical researchers; and knowledge-enabled systems such as clinical decision support applications. They need to be language independent and translatable, in order to transport health information across national boundaries.

These kinds of computable clinical models need input from many experts, clinicians and others, to ensure that they are not only clinically appropriate but also support safe data usage in our EHRs. These models are increasingly being created with ambitious goals – to create once and then re-use many times. In this case, the scope of the models needs to include the requirements of the full breadth of clinical professions and specialties. Clinicians remain key to their development and publication, but they also require input from:

  • Other domain experts – non-clinicians who will want or need to use these same models for non-clinical purposes such as secondary data use;
  • Informaticians – who understand how these models will form the basis for recording health information, exchange between systems, reporting, data aggregation and knowledge-based activities;
  • Terminologists – to ensure that the models will integrate with appropriate terminology value sets;
  • Technicians – who will advise on the technical impacts of these models in systems; and
  • Translators – who will ensure that the clinical information is faithfully transformed from one language to another.

Examples of these computable clinical content models are many and varied. There are open source and proprietary models of many different flavours and philosophies – archetypes, templates, detailed clinical models etc. In recent years there have been increasing attempts to broaden the input to the creation of these models and even to start to standardise them – regionally, nationally and even internationally. In this new paradigm, the traditional approaches to clinical content development, management and governance are no longer sufficient.

When the full breadth, depth and dynamic nature of clinical knowledge is considered, it is not feasible to appoint an overarching committee or board capable of providing final ‘sign off’ on the clinical ‘correctness’ of any one model. Each clinical knowledge model will require input from varying groups of expert clinicians, terminologists, informaticians and technicians, depending on the clinical knowledge artefact under review. We need to find innovative approaches to online, asynchronous collaboration among a wide range of individuals from diverse backgrounds, expertise and geographical locations to ensure these models are suitable for use in clinical systems.

Traditional standards bodies, such as ISO, CEN or HL7 have well defined and fixed processes in place for managing the lifecycle of technical standards through a formal balloting process with registered member bodies. These are definitely not suitable for managing and governing an evolving and dynamic clinical content specification library.

There has been some early work by QREC on establishing abstract archetype quality criteria and, more recently, ISO TC 215 Working Group 1 has created a new work item, ISO 13972, to define “Quality criteria for detailed clinical models”. However, neither of these is able to establish the quality of archetype instances for real-world use.

I believe that HL7 is working to establish a Template Repository. As I understand it, it will operate as an indexing service to templates that will be stored on distributed servers. Others may be able to provide more details.

Other work is no doubt occurring, of which I am not aware. And of course, each clinical system has to establish the clinical content that it will use in its own proprietary information model. In the US alone, with thousands of clinical software vendors, this means that we have thousands of different computable versions of essentially identical clinical content, but none of it interchangeable without mappings or transformation – what a huge waste of resources! We need to change this blinkered way of thinking.

The openEHR Clinical Knowledge Manager (CKM) is the only online clinical knowledge resource, to my knowledge, which is supporting collaboration by clinicians, other domain experts, informaticians, technicians and translators to achieve consensus about quality and safety in clinical content models – in this instance, openEHR archetypes.  I am directly involved in the development of this tool, and am active as an Editor facilitating the review process of the archetypes – I have described it in previous blog posts.

While CKM is one of the early Web2.0 approaches to collaborating about clinical content models, I am sure there will be more over time. I have spoken to a number of Knowledge Management experts, and to my surprise no-one has yet been able to point me to similar tools or resources for establishing quality within a Web2.0 environment. Are we really such pioneers? Surely there are similar approaches in other knowledge domains?

No matter. There is no doubt that we are only in the early stages of a transformation in clinical knowledge governance and we have a lot to learn about how to establish quality criteria in a Web2.0 environment. I’ll post some thoughts in my next post...

The Swedish vs US approaches: health information, not just communication

National approaches to eHealth program development have been varied, and unfortunately we can all probably identify more failed attempts than successes. Just recently we have seen a spectacular regrouping within the UK NHS in the face of cost blow-outs and underperformance - a stellar example of how a project commonly described as the biggest computing exercise in the world, outside the military, has hit the proverbial brick wall. My question is not so much "What is the 'right' approach?" but more importantly "What is a sustainable approach, so that our valuable health information will persist at least as long as we do, and hopefully as long as our children need it?" Brian Ahier (@ahier) triggered some thoughts when he sent me a link to the new O'Reilly Radar article, Healthcare communication gets an upgrade which he co-authored with Rich Elmore and David C. Kibbe. It is an informative and succinct post that explains the US Direct Project (formerly NHIN Direct) in words that actually make sense to this non-technical clinician! No doubt it is good infrastructural work that will be very valuable.

However, given my day-to-day work with archetypes and standardising clinical content in Europe and Australia, the 'elephant in the room' for me in this article is: what is the clinical content that is to be exchanged? I assume we are talking about structured clinical documents such as CDA, and similar. That is great for short- to medium-term gains but, in the longer timeframe, as we want to share more data, and more complex data, and as we also expect to be able to compute upon that data rather than just read it... what then? How do we make sure that the data is safe and exchangeable between disparate systems? What is the long-term plan? Is a structured document enough? I think absolutely not.

So "What is the plan", I asked today of the authors and the Direct Project. I watch with interest for a response. Is there a strategy for content standardisation?

My understanding of the current situation with content development in the US?

  • Stan Huff's team has done fantastic work at Intermountain Health - a very sophisticated approach, but only used within their group;
  • UK's Charlie McKay tells me that there are currently at least 20 different approaches to clinical content development within HL7 alone that he can identify - a rather unfortunate situation for a standards body;
  • Think of all the clinical application developers in the US alone - more than 7000 vendors I'm told. Even if it is only hundreds, imagine the resources poured into developing the same clinical content in different ways for each separate application's proprietary information model, each reinventing the wheel and only interoperable by accident - what a waste of resources;
  • and then a plethora of others, including Tolven etc.

I have not been able to identify a cohesive approach to defining clinical content such that it is safe to be directly exchanged between clinical systems - am I wrong?

By contrast, only last week there was a series of emails on the openEHR Clinical discussion email list, offering a number of links to Sweden's recent, very public meeting, at which experts from throughout Europe were invited to hear about the Swedish national eHealth program and plans and to give feedback to their Swedish colleagues. Sweden is taking a totally orthogonal approach: the priority is identifying clinical/business processes and getting the data right.

Tony Shannon, Chair of the openEHR Clinical Review Board, triggered the discussion with this blog post - Sweden's noble eHealth strategy. Further emails elucidated links to all of the Swedish presentations from the Lund meeting, held on November 2, 2010, which I have included below.

However it was Sam Heard's email in that thread which summed up my concerns about the aforesaid 'elephant' and ensuring that health information is recorded consistently and safely. I particularly like the way he has described the problem, and his suggested solution. With his permission I post it here (with my emphasis)...

I was not able to get to Lund unfortunately - but it is clear there is some good work going on. In Inger Wejerfelt's introduction it was good to see the statement:

"Swedish national decision: Standards with focus on the information and not the communication only."

This is a critical shift and one that needs more attention. I cannot help but think we are stuck in a world where everyone thinks that health records can be whatever anyone dreams up and then they will interoperate. It is like the first few years of word processors and we are agreeing on what words are and paragraph breaks. We still have our own character representation and we are agreeing on sending paragraphs to each other to share. It is hard work.

Companies have made a lot of progress, Vista is open source and has a lot in it and GE's collaboration with Intermountain has the benefit of Stan Huff's insight. But health records are too important to do in a closed way or tied to a particular technology. Peter Fleming, head of NEHTA, has said "The Best of Breed is not an option" meaning we are not looking for good software that everyone can use. That is why I work in the openEHR community; so we can get on with this together and create a record that actually delivers what health care needs while allowing maximum diversity in applications.

A risk we might fall into is trying to do everything at once - this is not possible as 'everything' grows quickly from the previous everything. At this stage in developments the range of new possibilities transforms every year or less. So we need to decide on the first piece and implement it widely. For me that has always been the health record - concentrating on what has to be recorded. It will not do everything that is required but it will keep a record of it. Then we can then get smarter and do more.

The openEHR ACTION class embodies this divide between the model of care (workflow) and a model of recording. The care pathway steps are not actually the steps but rather the things that a clinician might record in following an instruction (or just doing it). People I am working with want to consider the workflow and transitions but the fact is that these all happen in the real world and do not get recorded consistently. When something _is_ recorded it is essential that we capture the state of the (possibly virtual) order. This means we have an idea of what is outstanding, ongoing etc.

We need more Swedens, more countries willing to think about the health record as possibly the most valuable resource. We need more people working on the shared logical specifications based on shared implementations.

...

Cheers, Sam
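To illustrate the recording-versus-workflow divide Sam describes, here is a minimal, hypothetical Python sketch. It is deliberately simplified and is not the openEHR reference model itself: the class names, state names and field values below are my own shorthand for the idea that what gets recorded is the careflow step taken plus the resulting state of the (possibly virtual) order.

```python
from dataclasses import dataclass
from enum import Enum


class OrderState(Enum):
    """Illustrative subset of order lifecycle states (names are my own shorthand)."""
    PLANNED = "planned"
    ACTIVE = "active"
    SUSPENDED = "suspended"
    COMPLETED = "completed"
    ABORTED = "aborted"


@dataclass
class RecordedAction:
    """What gets recorded when a clinician acts on an instruction or order."""
    careflow_step: str          # the clinically meaningful step that was recorded
    current_state: OrderState   # the resulting state of the (possibly virtual) order
    time: str                   # when the act was recorded (ISO 8601 string for simplicity)
    description: dict           # structured detail of what was actually done


# The workflow itself happens in the real world; the record captures the step
# and the order state, so we always know what is outstanding, ongoing or done.
dose_given = RecordedAction(
    careflow_step="medication dose administered",   # invented example step
    current_state=OrderState.ACTIVE,                # the overall order is still active
    time="2010-11-03T09:30:00+01:00",
    description={"medication": "amoxicillin", "dose": "500 mg", "route": "oral"},
)
print(dose_given.current_state.value)  # -> "active"
```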

In Europe CEN 13606, now ISO 13606, has been mandated as the standard for EHR extract. A significant part of this standard is about the standardised definition of clinical content using archetypes. openEHR can be thought of as a superset of the 13606 standard, focused on the whole EHR, not just the extract.

In particular:

  • the UK, Singapore, Danish and Australian national eHealth programs are gaining momentum in determining EHR clinical content definitions;
  • the international openEHR community is gaining traction - clinicians defining clinical content through archetypes, and using the openEHR Clinical Knowledge Manager as a library and for life-cycle management/peer review of archetypes, templates and terminology reference sets. Archetypes have been translated from English to Japanese, Chinese, Russian, Farsi, Dutch, German, Portuguese & Slovenian. They are also being created in other languages & translated into English. We are sharing clinical content models across international borders!
  • applications based on shared archetypes are being built & implemented in Netherlands, UK, Australia, Brazil, Slovenia & the US;
  • professional clinical colleges are starting to drive archetype development in Australia & US to ensure that the EHRs they use are safe & 'fit for purpose'; and
  • Sweden's national approach is focused clearly on identification of the clinical business processes and defining clinical content through openEHR archetypes.

The Swedish approach is well documented and publicly available. The following presentations were given by key figures within the Swedish eHealth program at the international meeting held recently in Lund on November 3, 2010 (courtesy of Rikard Lövström):

  1. V-TIM - Inger Wejerfelt, Chief Health Informatics Officer, CeHis. A presentation about V-TIM, the applied information model that provides a framework for the “content and context”, described from a clinical perspective and a result of national and regional projects.
  2. What are we trying to achieve? - Nils Schönström, MD PhD, Senior Advisor CeHis
  3. The Swedish eHealth approach - Dr Karl-Henrik Lundell
  4. The Swedish eHealth approach - Dr Håkan Nordgren, MD, Head of National e-Health architecture board, CeHis.
  5. openEHR in Sweden and beyond - Thomas Beale.
  6. SNOMED CT - concept model attributes, archetype integration and terminology binding - Jessica Rosenälv
  7. The reference archetypes - Helene Broberg
  8. Interdisciplinary Terminology for Health Care and Social Care in Sweden - Bengt Kron.
  9. Business driven archetype development - Jessica Rosenälv

The US approach gets lots of airplay because of the sheer volume of eHealth in the US in every dimension. However, in my humble view, it has significant limitations when it is so strongly focussed on the messaging and communication, at the expense of the health information itself. In a few years when the messaging paradigm is no longer sustainable, what then?

Addressing the issue of standardising EHR content will make exchange of health information simpler by orders of magnitude - data can be drawn directly from the EHR at one end and incorporated directly into the receiving EHR at the other, ready to be computed upon without further interpretation. This is a foundation requirement for the safe exchange of shared health information records, for data aggregation, for secondary use of data and for knowledge-based activities such as Clinical Decision Support.

Messaging hubs alone will not sustain the EHR revolution! Consider the other approaches that are also proving successful, especially that of the Scandinavians who are widely regarded as being the world leaders in this eHealth environment - they haven't got it perfect yet, but it is worth watching them very, very closely...

Records, the universe and everything

Here is a surprising gem that I found this morning - "Records, the universe and everything" is part of a presentation entitled "Where is THE medical record?" given by David Markwell to the Primary Health Care Specialist Group of the British Computer Society back in September 1996.  Enjoy...

Carrie Byte sifted through the envelopes on her desk. She had already searched the practice computer without finding anything of interest. Her face was a picture of tired resignation. She was about to see Arthur Dent. She had never met him before, but her senior partner had once told her something about him. She could not remember what it was, but she did recall it was unusual. Without the record she would be at a serious disadvantage.

She lifted the phone and dialled.

"Hello, George. Mr Dent's record doesn't seem to be here."

George apologised and said he would look for it. As Carrie hung up the phone, there was a knock at the door and, almost immediately, in walked a nondescript man.

"Hello, I'm Arthur. I expect that you are Dr Byte."

Carrie agreed that she was, asked what she could do for him, and apologised that his record was missing. He smiled and said that he had been away for a few years.

"Your record is probably with the FHSA", she suggested.

"Perhaps the Vogons mislaid it when they destroyed the planet", he countered.

"What do you mean?" she asked.

"I admit the FHSA is more likely, but never dismiss the improbable."

"Good advice for a doctor - but tell me what I can do for you today", Carrie said, trying to regain control.

The smell of coffee percolated through the door and she wanted to get this consultation finished. After another knock at the door George entered, carrying three thick Lloyd George envelopes.

"I found it, Doctor."

Arthur noticed that Carrie looked more downcast by the size of the records than she had been by their absence.

"Bit of a trilogy, isn't it, Doctor?" he quipped.

She smiled and he continued "Don't panic! I only want to know the results of my last hospital visit. Your partner, Morris Oxford, referred me for an opinion."

Ten minutes later she had waded through letters, printouts from other practice computers and reams of notes and established that the hospital letter was missing. It took ten more minutes for George to find it in Dr Oxford's in-tray.

As Arthur left the room he turned and said "Just one more thing, Doctor." Her heart dropped.

"I have a friend who has really good ideas about piles of paper. He'd help you find the right information more easily and have time for a coffee break."

"Oh yes", she replied sceptically.

"Yes", he said emphatically. "You must meet him. One o'clock in the Frog and Sparrow, and bring a towel."

He spoke with such authority that she felt obliged to join Arthur and his friend at the appointed time. Arthur introduced Mr Prefect who frowned as he greeted her and said "No towel. You've forgotten the towel. Oh well, I suppose it doesn't matter."

He took from his pocket a small box with the words "Don't panic!" written on it. "This", he said, "is what Arthur was telling you about."

By the end of lunch break she was convinced. A week later her practice had installed the Instant Transfer Computerised Health Hyper Record. At the heart of this system, known as the ITCH-Hyper Record, was the concept of the improbably distributed record. Few people understood it, but that didn't seem to matter. It promised an end to 'missing record' misery and it really worked. Within three months, every practice, every hospital and every community unit in the country had installed the ITCH-Hyper Record. Clinical information entered anywhere was seamlessly shared by everyone, subject to necessary and appropriate security constraints. The age of information sharing had arrived, dreams of patients outshining paper became reality, the wood was seen, the trees were spared and crocks of gold appeared under every rainbow.

Six months later Carrie started to worry. She was looking at a record which showed a high blood sugar result two days previously. She switched screens to check the advice on her decision support system. She returned to the record and the blood sugar was normal. The first time it happened she assumed she had misread the screen. The next day it happened to another patient's record, and then another. She queried it with the lab. They said there had been some errors in a batch of tests but insisted they were corrected immediately. Then why, she wondered, had she seen the erroneous results nearly a week later? Soon the medical press were full of stories of similar incidents. Ford Prefect explained by saying "The old record is cached and not refreshed until it is re-accessed. It's just a glitch: we'll fix it."

The next problems Carrie noticed were intermittent gaps in records. These were accompanied by ubiquitous "Don't panic!" messages accompanied by icons depicting sporting events. Enquiries to the support desk were met with the patronising response and brief explanation that the tennis racket meant a 'service fault', the rugby ball a 'line out' and the golf club 'open links'. These errors became more common and it was inevitable that before long the system acquired the less flattering nickname "The Glitch".

Just over a year after Carrie's meeting in the Frog and Sparrow, she was the defendant in the first court case in which two incompatible versions of a patient's ITCH-Hyper Record were presented as evidence. Both carried all the marks of authentication as the genuine record for the same patient. The judge asked for explanations. Ford and Arthur attempted to explain about caches, probability, post-dating, modification, repudiation and the importance of towels for galactic hitchhikers. The judge was unimpressed and asked a simple question: "Where is THE medical record for this patient?"

Ford asked for clarification and the judge obliged. "Where are the original records of the clinical information recorded by each of the health care professionals involved in the care of this patient?"

Ford regarded the judge as a child who wanted to open a television to look for the people inside. He made a contemptuous remark and was silenced.

Arthur came to the rescue. "It's like this, I think. It's sort of everywhere. Well, everywhere in the cyberspace defined by thirty thousand clinical systems in the UK. That is, it could be anywhere, but nobody knows where, and of course, nobody really needs to." The judge was silent for half a minute. During those thirty seconds several dolphins spontaneously appeared, unnoticed, in the courtroom. Then the judge spoke very slowly and deliberately: "I need to know and I need to know now. Otherwise, how can I determine which of these pieces of so-called evidence is genuine?" He paused, waiting for a response. As the silence continued, Ford and Arthur vanished, having hitched a lift on a passing Dolphinian space trawler. The silence resumed. The dolphins smiled and vanished, still unnoticed.

Then the judge summed up. "It is claimed that the medical record is everywhere. That it is distributed across what has been called cyberspace. Yet, since we find that in two places it differs, from a legal perspective there are at least two versions. Neither of these can be located and nor can they be separated or distinguished from one another, except by the factual differences in the statements they contain. It follows that I am unable to determine the legal status of either. Therefore, for the purposes of this court, I rule that this patient's medical record is located nowhere. Since something that is nowhere does not exist, I further rule that it is inadmissible in this court. This being the case, can anyone present any evidence that will help this court to reach a judgment?" There was another pause while the judge and others noticed the absence of the expert witnesses. He smiled and added: "I presume that Messrs. Ford and Dent are now distributed in a manner similar to their records. Is there a shred of evidence to assist us with this case?"

Carrie passed forward a page torn from her diary on which she had made a few rough scribbles. Her counsel rose. "If it please your honour, I have here a piece of paper written on by Dr Byte at the time. It appears to corroborate her account." The judge, having studied the paper, passed it to the plaintiff's legal team and asked if they offered any further evidence. After a hasty conversation with their client, they indicated that no more evidence would be offered and the case was dropped.

The profession was not always the beneficiary of this legal precedent. Patients who made their own rough notes immediately after consultations won otherwise absurd cases against doctors unable to cite admissible medical records. Within a few weeks most GPs were reopening their filing cabinets, brushing the dust off their pens and relearning elaborate hieroglyphics. The ancient art of doctors' writing was reborn. Years later, when Carrie retired, she cleared out her desk and found a perfect glass replica of a 3.5" computer disk. She didn't know where it came from, but it reminded her of the heady days of her youth when she really believed paperless practice had arrived.

Interesting to see a summary of the conference agenda and links from the proceedings still online and active.

Can clinicians agree?

There have been a number of robust discussions in recent weeks around the claim that clinicians can achieve consensus on computable definitions for clinical content. All discussions have been with MDs, some with substantial experience in various international standards organisations. All have been extremely sceptical...

The first challenge: "What is a heart attack?" - and the response: "5 clinicians, 6 answers" - and they are probably very accurate!

The second: "What is an issue vs problem vs diagnosis?" - I'm told that this has been an unresolved issue in HL7 for more than 5 years!

And the third, from an obstetrician: "The midwives want all this rubbish that we don't" - perhaps an unfortunate way of expressing the absolutely correct need for different clinicians to have screens presented that are relevant to the patient and the immediate clinical task at hand. Different clinicians have different needs.

However these are all essentially HUMAN PROBLEMS! Issues about communication, synonyms, value sets, screen display/layout. IT will not solve these issues - as always, the clinicians need to work them out amongst themselves. So where CAN we achieve consensus?

Within the openEHR Clinical Knowledge Manager environment we are gaining some traction in achieving clinician agreement to the structure required to define clinical concepts as archetypes – the ‘first principles’ of clinical concepts, if you like. The approach is inclusive of everyone's needs and requirements, rather than requiring an arbitrary decision on the minimum data set or a priority data set - so we aim, as best we can, for a maximum data set.

For example, the framework to express a Diagnosis is largely not contentious (an illustrative sketch follows the two lists below):

  • Diagnosis name
  • Status
  • Date of initial onset
  • Age at initial onset
  • Severity
  • Clinical description
  • Anatomical location
  • Aetiology
  • Occurrences
  • Exacerbations
  • Related problems
  • Date of Resolution
  • Age at resolution
  • Diagnostic criteria

And for a Symptom:

  • Symptom name
  • Description
  • Character
  • Duration
  • Variation
  • Severity
  • Current intensity
  • Precipitating factors
  • Modifying factors
  • Course
  • Aetiology
  • Occurrences
  • Previous episodes
  • Associated symptoms

This notion of achieving clinician (and other domain expert) consensus and standardisation of clinical concepts is a major focus of the openEHR archetype work.

Bear in mind that while agreement can be achieved on the clinical content structure, this is only the first step in ensuring that clinicians are able to enter, retrieve and exchange meaningful clinical data.

So if we can achieve consensus around these archetypes, do clinicians then have to agree on a standard Discharge Summary or Antenatal Record or Report? The answer: only if it is useful to do so. Clinical diversity can be allowed if the archetype pool is stable & governed/managed. Tightly governed at international or national level, the archetypes effectively become the common 'lingua franca' that enables sharing of health information.

Template creation is the next openEHR layer - these aggregate the archetypes together to represent the requirements for a specific clinical activity and then constrain them down from their fully inclusive state to something that is 'fit for use' for a given clinician in a given clinical situation. Terminology subsets are also integrated in appropriate places into the archetypes (sometimes) and templates (usually) to round out the expressiveness needed by computable clinical models in clinical care - neither the structure nor the terminology can do this in isolation.
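Continuing the hypothetical Python sketch above, the layering can be pictured like this: a template names the archetype it draws on, whitelists the subset of elements needed for one clinical activity, and points at the terminology subset to be bound in. All names here are invented for illustration; real openEHR templates have their own formalism and tooling.

```python
# Hypothetical template for an imagined emergency department discharge summary:
# it reuses the maximal Diagnosis structure but constrains it to four elements
# and binds the diagnosis name to a (hypothetical) terminology subset.
ED_DISCHARGE_DIAGNOSIS_TEMPLATE = {
    "archetype": "Diagnosis",
    "allowed_elements": ["diagnosis_name", "status", "severity", "clinical_description"],
    "terminology_bindings": {"diagnosis_name": "SNOMED CT problem-list subset (hypothetical)"},
}


def constrain(record: dict, template: dict) -> dict:
    """Keep only the elements this template allows for the clinical activity at hand."""
    return {k: v for k, v in record.items() if k in template["allowed_elements"]}


# e.g. constrain(vars(asthma), ED_DISCHARGE_DIAGNOSIS_TEMPLATE) would drop the
# unused elements while leaving the governed archetype definition untouched.
```

The design point is that the governed, maximal archetype never changes; only the template layer, local to a clinical activity, decides what subset is presented and recorded.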

Once a critical mass of these archetypes is published we will be able to support a breadth of clinical diversity across eHealth projects - the need for rigid, inflexible messages and documents to support health information exchange will be largely overcome. Clinicians will only need to agree on these types of messages or documents where there is a measurable benefit in doing so, and at all other times they can focus on ensuring that they express their archetype-based clinical records as flexibly as they need to for patient care. Sure, the human factors remain unresolved - but not even the most perfect EHR will solve these issues!

So can clinicians agree? Yes! It is happening in many areas related to data structure, creating a solid framework for our electronic health records. Archetypes are the tightly managed building blocks; templates enable safe and flexible expression of clinical and patient records - allowing the best of both worlds, both governance AND clinical diversity to flourish... and the clinicians are finally able to actively participate in shaping their EHRs.

Clinician-led eHealth records - a knowledge-enabled approach

My presentation given to W.H.O. in Geneva last week... [slideshare id=5492832&doc=20101008whoclinician-ledehealthrecords-101019133425-phpapp02]

An orthogonal question...

Just askin'. Just curious... What single eHealth activity, process or solution now available could:

  • Ensure that EHR data is safe and ‘fit for clinical purpose’;
  • Support data integration, data aggregation & comparative analysis;
  • Simplify and support messaging and data exchange;
  • Enable co-ordinated knowledge-based activities; and
  • Provide a clear transition path for existing EHR applications towards common data representations.

...now there's a list that covers a broad range of eHealth, including many of our current, collective headaches, doesn't it!

The main thrust of the question is one that doesn't get asked very often, as it is orthogonal to our more common application- and messaging-driven approaches. It focuses on the most important part of any eHealth activity, yet it remains largely ignored - the quality and re-use of health information. Liquid data. Shareable data.

My opinion is that we need a clinical knowledge repository of common and agreed data definitions - that much should be clear from my other posts.

What other alternatives do you think can provide a solution in this knowledge space? How will we fill these needs?