health informatics

Preserving health data integrity

How valuable do we really think health data is? How seriously do we take our responsibility to preserve the integrity of our health data? Probably not nearly as much as we should.

Consider the current situation of most clinicians or organisations when purchasing a clinical EHR system. What do they look for? Many possible answers are obvious, but there is one question that I suspect very few are asking. How many consider what data they will be able to export and convert to another format, preserving the current data integrity, at the end of the typical 5-10 year life span of the application? Am I wrong if I suggest it is not many at all?

Despite all the effort that we clinicians put into entering detailed data to create a quality health record, we don't often seem to consider the "What next?" scenario. How much, and precisely what, data will we be able to safely extract, export, transfer or convert into the next, inevitable, clinical system? Ironically, we are simultaneously well aware that clinical systems have a limited technical life span.

Any and all of the health data in a health record is an incredibly valuable asset to the holder, to the patient (if these are not the same entities) and to those downstream with whom we may share it in the future - in terms of $$ invested; manpower used to capture, store, classify, update and maintain it; and, not least, the future value that comes from appropriate and safe clinical decisions being made on the basis of existing EHR data whose integrity is intact.

Yet we don't seem to consider it much... yet. However, as more clinicians are creating increasing amounts of isolated pockets of health data, we should be thinking about it very hard.

Every time we change systems we put our health data at risk - risk of absolute data loss and risk of possible corruption during the conversion. The integrity of health data cannot be guaranteed each time it is ported into a new system because current methods always require some kind of intervention - mapping, transformations, tweaking, 'cleaning', etc. Small errors can creep in with each data manipulation and, over time, these can compromise the safety and value of our health data. On principle we know that the data should not be manipulated, but being limited by our traditional approach to siloed EHR applications, we have previously had little choice.

We need to change our approach and preserve the integrity of our health data at all costs. After all, it is the only reason why we record any facts or activity in an electronic health record - so we can use the data for direct patient care; share & exchange the data; aggregate and analyse the data; and use the data as the basis for clinical decision support.

We should not be focused on the application alone.

Apps will come and go, but we want our health data to persist - accurate and safe for clinical use - beyond the life span of any one clinical software application.

I've said this before, but it's worth saying many times over:

It's. all. about. the. data.

One of the key benefits of the openEHR paradigm is that the data specifications (the archetypes) are defined independently of any specific clinical system or application; are based on an open EHR architecture specification; and are publicly available in repositories such as the Clinical Knowledge Manager. It means that any data that is captured according to the archetype specification is directly usable by any and all archetype-compliant systems. Plus the data is no longer hard-wired into a proprietary application, so it becomes orders of magnitude easier to accurately share or transfer health data than it has ever been before.
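To make this concrete, here is a deliberately simplified Python sketch - purely illustrative, and not openEHR tooling or real archetype syntax - of the idea of a shared, application-independent content definition that any consuming application can validate against; the blood pressure fields and rules are assumptions for the example only:

```python
# Illustrative only: a hypothetical, drastically simplified "archetype-like" specification,
# defined once and published independently of any particular clinical system.
BLOOD_PRESSURE_SPEC = {
    "systolic":  {"type": float, "units": "mm[Hg]", "range": (0, 1000), "required": True},
    "diastolic": {"type": float, "units": "mm[Hg]", "range": (0, 1000), "required": True},
    "position":  {"type": str, "allowed": {"sitting", "standing", "lying"}, "required": False},
}

def validate(entry: dict, spec: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry conforms to the shared spec."""
    problems = []
    for field, rules in spec.items():
        if field not in entry:
            if rules.get("required"):
                problems.append(f"missing required field: {field}")
            continue
        value = entry[field]
        if not isinstance(value, rules["type"]):
            problems.append(f"{field}: expected {rules['type'].__name__}")
            continue
        if "range" in rules and not (rules["range"][0] <= value <= rules["range"][1]):
            problems.append(f"{field}: {value} outside {rules['range']}")
        if "allowed" in rules and value not in rules["allowed"]:
            problems.append(f"{field}: '{value}' not one of {sorted(rules['allowed'])}")
    return problems

# Any 'archetype-compliant' application validates against the same shared definition,
# so conforming data needs no per-system mapping or conversion.
print(validate({"systolic": 120.0, "diastolic": 80.0, "position": "sitting"}, BLOOD_PRESSURE_SPEC))  # []
```

The point is not the code itself but the separation: the definition lives outside every application, and each application merely conforms to it.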

Clinical system vendors that don't directly embrace the archetype technology may still be 'archetype-aware', and can choose to use the archetype specifications as a means to understand the meaning of existing archetyped data and integrate it appropriately into their systems. Similarly, they can map from their non-openEHR systems to the archetype specifications as a standardised method for data export and exchange.

The openEHR paradigm enables potential for archetype-compliant systems to share the same archetyped data repository - along the lines of an Apple platform 'plug & play' approach, with applications being added, removed or updated to suit the needs of the end-users, while the data persists intact. No more data conversions needed.

Adapted from Martin van der Meer, 2009

Now that's good news for our health data.

CIMI progress...

Just spreading the news... The Clinical Information Modelling Initiative met again recently and the minutes are now available from the early but rapidly evolving CIMI wiki site - http://www.cimiwiki.org

Intro from the latest CIMI minutes:

CIMI held its 5th group meeting in San Antonio from January 12 – 14, 2012. Over 35 people attended in person with an additional 5 participants attending via WebEx.

At this meeting, the group:

  • Established the criteria for membership and the process for adding members to the CIMI group
  • Authorised an interim executive committee
  • Determined a tentative schedule of meetings for 2012
  • Moved forward with the definition of the modeling framework
  • Formalized two task forces to begin the modeling work so that example models can be presented at the next meeting
  • Recognized the formation of a Glossary Group (lead to be announced)
  • Agreed to plans for utilizing existing tools to rapidly develop and test a candidate reference model and to create a small group of example CIMI models that build on the reference model work

Full Minutes are here

Are we there yet?

No, but we are definitely moving in the right direction... Conversations are happening that were uncommon generally, and downright rare in the US, only 18 months ago. I've been rabbiting on for some time about the need for a 'universal health record' - an application-independent core of shared and standardised health information into which a variety of 'enlightened' applications can 'plug & play' - thus breaking down the hold of the proprietary, 'not invented here' clinical applications with which we battle almost everywhere today.

So it was pleasing to see Margalit Gur-Arie's recent blog post on Arguments for a Universal Health Record. While I'm not convinced about the reality of a single database (see my comments at the end of Margalit's post), I wholeheartedly endorse the principle of having a single approach to defining the data - this is a very powerful concept, and one that may well become a pivotal enabler of health IT innovation.

In addition, Kevin Coonan has started blogging in recent days - see his Summary of DCMs regarding principles of Detailed Clinical Models (aka DCMs). Now I know that Kevin's vision for an implementable HL7 DCM is totally different to the openEHR DCMs (=archetypes) that I work with. But we do agree on the basic principles and attributes of these models that he has outlined in his blog post - it is quite a good summary, please read it.

Now these two bloggers are US-based - and this is significant because in the US there has been a huge emphasis on connecting between systems and exchange of document-based health information up until recent times. I view their postings as indicative of a growing trend toward the realisation that standardisation of clinical content is a necessary component for a successful health IT ecosystem in the (medium-longterm, sooner the better) future.

Note that "Detailed Clinical Models", is the current buzz phrase for any kind of model that might be standardised and shared but is also used very specifically for the HL7 DCMs currently in the midst of an interminable ballot process and the Australian national program's DCMs, which are actually openEHR archetypes being used as part of their initial specification process. "Detailed Clinical Models" is being used in many conversations rather blithely and with many not fully understanding the issues. On one hand it is positively raising awareness of our need to standardise content and on the other hand, it is confusing the issue as there are so many approaches. See my previous post about DCMs - clarifying the confusion.

It is worth flagging that there has been considerable (and, I would also venture to say, rather premature) effort put in by a few to formalise principles for DCMs in the draft ISO 13972 standard (Quality Requirements and Methodology for Detailed Clinical Models), currently out for ballot. My problem with this ISO work is that the DCM environment is relatively immature - there are many possible candidates with as many different approaches. It is also important to make clear that having multiple DCMs compliant with the generic principles outlined in an ISO standard may mean that our published silos of "DCM made by formalism X" and "DCM made by formalism Y" models are of higher quality, but it definitely will not solve our interoperability issues. For that you need a common reference model underpinning the models or, alternatively, a primary reference model with known and validated transformations between clinical model formalisms.

The more recent evolution of the CIMI group is really important in this current environment. It largely shares the principles that Kevin, openEHR and ISO 13972 espouse - creation of standardised and shareable clinical content models, bound sensibly to terminology, as the basis for interoperability. These CIMI models will be computable and human readable; they will be based on a single Reference Model (yet to be finalised) and common data types (also yet to be finalised), and will utilise the openEHR Archetype Definition Language (ADL) 1.5 as the initial formalism. Transformation of the resulting clinical models to other formalisms will be a priority, to make sure that all systems can consume these models in the future. All will be managed in a governed repository, likely under the auspices of some kind of executive group, with expert teams providing practical oversight and management of the models and their content.

Watch for news of the CIMI group. It has an influential initial core membership that embraces multiple national eHealth programs and standards bodies, plus all the key players with clinical modelling expertise - bringing all the heavy lifters in the clinical modelling environment into the same room to thrash out a common approach to semantic interoperability. They met for 3 days recently, prior to the HL7 meeting in San Antonio. The intent (and challenge) is to get all of this diverse group singing from the same hymn book! I believe they are about to launch a public website to allow for transparency, which has not been easy in these earliest days. I will post it here as soon as it is available.

Maybe the planets are finally aligning...!

I have observed a significant change in the mindsets, conversations and expectations in this clinical modelling environment over the past 5 years, and especially in the past 18 months. I am encouraged.

And my final 2c worth: in my view, the CIMI experience should inform the ISO DCM draft standard, rather than progressing the draft document based on largely academic assumptions about clinician engagement, repository requirements and model governance - there is so much we still need to learn before we lock it into a standard. I fear that we have put the cart before the horse.

To HIMSS12... or bust!

This blog, and hopefully some others following, will be about my thinking and considerations as I man an exhibition booth at the huge HIMSS12 conference for the first time next month… Well, we’ve committed. We’re bringing some of the key Ocean offerings all across the ocean to HIMSS12 in Las Vegas next month. If it was just another conference, I wouldn’t be writing about it. But this is a seriously daunting prospect for me. I’ve presented papers, organised workshops, and run conference booths in many places over the years – in Sarajevo, Göteborg, Stockholm, Capetown, Singapore, London, Brisbane, Sydney, Melbourne – but this is sooooooo different!

The equivalent conference here in Australia would gather 600-800 delegates and maybe 40-50 exhibition booths. Most European conferences seem to be a similar size, although admittedly these probably have a more academic emphasis rather than such a strong commercial bent, which might explain some of the size difference. By comparison, last year’s HIMSS conference had 31,500 attendees and over 1000 exhibition booths – no incorrect zeros here - just mega huge!

I can’t even begin to imagine how one can accommodate so many people in one location. I have never even visited HIMSS before – we are relying heavily on second hand reports. You may start to understand my ‘deer in headlights’ sensation as we plan our first approach to the US market in this way.

Ocean's profile is much higher elsewhere internationally. Our activity in the London-based openEHR Foundation and our products/consulting skills have a reasonable profile in Australia and throughout much of Europe, and awareness is growing in Brazil as the first major region in South America. In many ways the US is one of the last places for openEHR to make a significant impression – there are some pockets of understanding, but uptake has been limited because our approach is clearly orthogonal to the major commercial drivers in the US at present. However, we are observing that this is slowly changing... hence our decision to run the gauntlet!

openEHR’s key objective is creation of a shareable, lifelong health record - the concept of an application-independent, multilingual, universal health record. The specification is founded upon the notion of a health record as a collection of actual health information, in contrast to the common idea that a health record is an application-focused EHR or EMR. In the openEHR environment the emphasis is on the capture, storage, exchange and re-use of application-independent data based on shared definitions of clinical content – the archetypes and templates, bound to terminology. In openEHR we call them archetypes; in ISO, similar constructs are referred to as DCMs; and, most recently, there are the new models proposed by the CIMI initiative. It’s still all about the data!

So, we’re planning to showcase two products that have been designed and built to contribute to an openEHR-based health record - the Clinical Knowledge Manager (CKM), as the collective resource for the standardised clinical content, and OceanEHR, which provides the technical and medico-legal foundation for any openEHR-based health record – the EHR repository, health application platform and terminology services. In addition, we’ll be demonstrating Multiprac – an infection control system that uses the openEHR models and is built upon the OceanEHR foundation. So Multiprac is one of the first of a new generation of health record applications which share common clinical content.

This will be an interesting experience, as neither is probably the sort of product typical attendees will be looking for when visiting the HIMSS exhibition. So therein lies one of our major challenges – how to get in touch with the right market segment… on a budget!

We are seeking to engage with like-minded individuals or organisations who prioritise the health data itself and, in particular, those seeking to use shared and clinically verified definitions of data as a common means to:

  • record and exchange health information;
  • simplify aggregation of data and comparative analysis; and
  • support knowledge-based activities.

These will likely be national health IT programs; jurisdictions; research institutions; secondary users of data; EHR application developers; and of course the clinicians who would like to participate in the archetype development process.

So far I have in my arsenal:

  • The usual on-site marketing approach:
    • a booth - 13342
    • company and product-related material on the HIMSS Online Buyers Guide; and
    • marketing material – we have some plans for a simple flyer, with a mildly Australian flavour;
  • Leverage our website, of course;
  • Developing a Twitter plan for @oceaninfo specifically, with activity in my @omowizard account to support it, and anticipating some support from @openEHR – this will be a new strategy for me;
  • And I’m working on development of a vaguely ‘secret weapon’ – well, hopefully my idea will add a little ‘viral’ something to the mix.

So all in all, this will definitely be a learning exercise of exponential proportions.

To those of you who have done this before, I’m very keen to receive any insight or advice at this point. What suggestions do you have to assist a small non-US based company with non-mainstream products make an impact at HIMSS?

Clinical Knowledge Repository requirements

I've been hearing quite a lot of discussion recently about Clinical Knowledge Repositories and governance. Everyone has different ideas - ranging from sharing models via a simple subversion folder through to a purpose-built application managing governance of combinations of versioned knowledge assets (information models, terminology reference sets, derived artefacts, supporting documentation etc) in various states of publication. It depends what you want to achieve, I guess. In openEHR it became clear very quickly that we need the latter in order to provide a central resource with governance of cohesive release sets of assets and packages suitable for organisations and vendors to implement.

In our experience it is relatively simple to develop a repository with asset provenance and user management. What is somewhat harder is adding processes of collaboration and validation for these knowledge assets - this requires the development of review and editorial processes and, ideally, transparency and accountability on the part of those managing the knowledge artefacts.

The most difficult scenario is meeting the requirements for practical implementation, where governance of configurable groups of various assets is required. In openEHR we have identified the need for cohesive release sets of archetypes, templates and terminology reference sets. This can be very complicated when each of the artefacts is in a different state of publication and multiple versions are in use in 'on the ground' implementations. Add to this the need for parallel iso-semantic and/or derived models, supporting documents, and derived outputs in various stages of publication, and you can see how quickly chaos can take over.
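As a rough illustration of the release-set problem (not the actual CKM data model - the asset kinds, publication states and names below are assumptions for the example), a governed release can only bundle assets that are actually fit to release:

```python
# Hypothetical sketch: assets of different kinds, each with its own version and
# publication state, bundled into a named, governed release set.
from dataclasses import dataclass

@dataclass(frozen=True)
class Asset:
    asset_id: str   # e.g. "openEHR-EHR-OBSERVATION.blood_pressure"
    kind: str       # "archetype" | "template" | "termset"
    version: str    # e.g. "v1.1.0"
    state: str      # "draft" | "team_review" | "published" | "deprecated"

def build_release_set(name: str, assets: list[Asset], allowed=("published",)) -> dict:
    """Bundle a cohesive, versioned set of assets, rejecting anything not yet fit to release."""
    not_ready = [a for a in assets if a.state not in allowed]
    if not_ready:
        raise ValueError(f"cannot build release '{name}': " +
                         ", ".join(f"{a.asset_id} {a.version} is {a.state}" for a in not_ready))
    return {"release": name, "contents": [(a.kind, a.asset_id, a.version) for a in assets]}

release = build_release_set("antenatal-1.0", [
    Asset("openEHR-EHR-OBSERVATION.blood_pressure", "archetype", "v1.1.0", "published"),
    Asset("antenatal_visit", "template", "v0.9.2", "published"),
    Asset("pregnancy_findings_refset", "termset", "v2.0.0", "published"),
])
```

Even this toy version hints at the real difficulty: the moment iso-semantic variants, derived outputs and multiple in-use versions are added, the rules for what may be bundled together become far less obvious.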

So, what does the Clinical Knowledge Manager do?

  1. CKM is an online application based on a digital asset management system to ensure that the models are easily accessed and managed within a strong governance framework.
  2. Focus:
    1. Accessible resource - creation of a searchable library or repository of clinical knowledge assets - in practice, a ‘one stop shop’ for EHR clinical content
    2. Collaboration Portal - for community involvement, and to ensure clinical models that are ‘fit for clinical use’
    3. Maintenance and governance of all clinical knowledge and related resources
  3. Processes to ensure:
    1. Asset management
      1. uploading, display, and distribution/downloading of all assets
      2. collaborative review of primary* assets  to validate appropriateness for clinical use
        1. content
        2. translation
        3. terminology binding
      3. publication life cycle and versioning of primary assets
      4. primary asset provenance, differential and change log
      5. automatic generation of secondary**/derived assets or, alternatively, upload and versioning when auto generation is not possible
      6. upload of associated***/related assets
      7. development of versioned release sets of primary assets for distribution
      8. identify related assets
      9. quality assessment of primary assets
      10. primary asset comparison/differentials including compatibility with existing data
      11. threaded discussion forum
      12. flexible search functionality
      13. coordinate Editorial activity
      14. share notification of assets to others eg via email, twitter etc
    2. User management
    3. Technical management
    4. Reporting
      1. Assets
      2. Users
      3. Editorial activity support

In the current openEHR CKM, the assets, as classified above, are:

  1. *Primary assets:
    1. Archetypes
    2. Templates
    3. Terminology Reference Set
  2. **Secondary assets:
    1. Mindmaps
    2. XML transforms
    3. plus ability to add transforms to many other formalisms, including CDA
  3. ***Associated assets:
    1. Design documents
    2. References
    3. Implementation guides
    4. Sample data
    5. Operational templates
    6. plus ability to add others as identified

While CKM is currently openEHR-focused - management of the openEHR artefacts was the original reason for its development - with some work the same repository management, collaboration/validation and governance principles and processes identified above could be applied to any knowledge asset, including all flavours of detailed clinical models and other clinical knowledge assets being developed by CIMI, HL7 etc. Yes, CKM is currently a proprietary product, but only because that was the only way to progress the work at the time - business models can always potentially be changed :)

It will be interesting to see how thinking progresses in the CIMI group, and others who are going down this path - such as the HL7 templates registry and the OHT proposed Heart project.

We can keep re-inventing the wheel, take the 'not invented here' point of view or we can explore models to collaborate and enhance work already done.

Why the buzz about CIMI?

With the recent public statement from the Clinical Information Modelling Initiative (CIMI) my cynical heart feels a little flutter of excitement. Maybe, just maybe, we are on the brink of a significant disruption in eHealth. Personally I have found the concept of standardising clinical content to be compelling, hence my choice to become involved in the development of archetypes. During my openEHR journey over the past 5 or so years it has been very interesting to watch the changing attitudes internationally - from curiosity and 'odd one out' through to "well, maybe there's something in this after all".

And now we have the CIMI announcement...

So what has been achieved? What should we celebrate and why?

At worst, we have had a line drawn in the sand: a prominent group of thought leaders in the international health informatics domain have gathered and, through a somewhat feisty process, recognised that a collaborative approach to the development of a single logical clinical content representation (the CIMI core reference model) is a desirable basis for interoperability across formalisms. Despite most of the participants having significant investment in, and loyalty to, their own current methodology and flavor of clinical models, they have cast aside the usual 'not invented here' shackle and identified a common approach to an initial modelling formalism from which other models will be derived or developed. Whether any common clinical content models are eventually built or not, the naming of ADL 1.5 and the openEHR constraint model as the initial formalism is a significant recognition of the longstanding work of the openEHR Foundation team - the early specifications emerged nearly 20 years ago.

At its idealistic best, it potentially opens up a new chapter for health informatics, one that deviates from the relatively safe path of incremental innovation that we have followed for so many years - the reliance on messages/documents/hubs to enable us to exchange health information. There is an opportunity to take a divergent path, a potentially transformational innovation, where the focus is on the data itself, and the message/document/EHR becomes more simply just the receptacle or vehicle for the data. It could give us a very real opportunity to store lifelong health information; simplify data exchange (whether by messages or documents), aggregation, querying and analysis; and support knowledge-based activities such as decision support - all because we will (hopefully) have non-proprietary, common, agreed and fully defined models of clinical content and known transformations between each formalism.

Progress during the next few months will be telling. In January 2012, immediately before the next HL7 meeting in San Antonio, the group will gather again to discuss next steps.

There is a very real risk that despite best intentions all of this will fade away to nothing. The list of participating organisations, including high profile standards organisations and national eHealth programs, is a veritable Who's Who of international health IT royalty, so they will all come with their own (organisational and individual) work experience, existing modelling resources, hope, enthusiasm, cynicism, political agendas, bias and alliances. It could be enough to sink the work of this fledgling group.

But many are battle-weary, having been trudging down this eHealth path for a long time - some now gradually realising that the glacial incremental innovation is not delivering the long-term sustainable answers required for creating 21st Century EHRs as they had once hoped. So maybe this could be the trigger to make CIMI fly!

I think that CIMI is a very bright spark on the health IT horizon. Let's hope that with the right management and governance it can be agilely nurtured into a major positive force for change. And in the future, when its governance is mature and processes robust, we can integrate CIMI into the formal standards processes.

Best of luck, CIMI. We're watching!

CIMI - initial public statement

The following public statement has been released by the Clinical Information Modelling Initiative today:

Public release

The Clinical Information Modeling Initiative is an international collaboration that is dedicated to providing a common format for detailed specifications for the representation of health information content so that semantically interoperable information may be created and shared in health records, messages and documents. CIMI has been holding meetings in various locations around the world since July, 2011. All funding and resources for these meetings have been provided by the participants. At its most recent meeting in London, 29 November - 1 December 2011, the group agreed on the following principles and approach.

Principles

  1. CIMI specifications will be freely available to all. The initial use cases will focus on the requirements of organisations involved in providing, funding, monitoring or governing healthcare and to providers of healthcare IT and healthcare IT standards as well as to national eHealth programs, professional organisations, health providers and clinical system developers.
  2. CIMI is committed to making these specifications available in a number of formats, beginning with the Archetype Definition Language (ADL) from the openEHR Foundation (ISO 13606.2) and the Unified Modeling Language (UML) from the Object Management Group (OMG) with the intent that the users of these specifications can convert them into their local formats.
  3. CIMI is committed to transparency in its work product and process.

Approach

  • ADL 1.5 will be the initial formalism for representing clinical models in the repository.
    • CIMI will use the openEHR constraint model (Archetype Object Model:AOM).
    • Modifications will be required and will be delivered by CIMI members on a frequent basis.
  • A set of UML stereotypes, XMI specifications and transformations will be concurrently developed using UML 2.0 and OCL as the constraint language.
  • A Work Plan for how the AOM and target reference models will be maintained and updated will be developed and approved by the end of January 2012.
    •  Lessons learned from the development and implementation of the HL7 Clinical Statement Pattern and HL7 RIM as well as from the Entry models of 13606, openEHR and the SMART (Substitutable Medical Apps, Reusable Technologies) initiative will inform baseline inputs into this process.
  • A plan for establishing a repository to maintain these models will continue to be developed by the group at its meeting in January.

Representatives from the following organizations participated in the construction of this statement of principles and plan:

Further Information:

In the future CIMI will provide information publicly on the Internet. For immediate further information, contact Stan Huff (stan.huff@imail.org)

CIMI & beyond...

The Clinical Information Modelling Initiative (#CIMI) is currently meeting in London. It comprises a significant group of healthcare IT stakeholders and was formed some months ago as an initiative of Dr Stan Huff. After a number of face-to-face meetings and email list exchanges, the intent is that at the end of this 3-day meeting there will be an agreed decision on a common clinical content modelling formalism/methodology for our Electronic Health Records. For background, from Sam Heard’s email to the openEHR email list on November 2, 2011:

The main topic I want to address is the international initiative to develop a standardised clinical modelling methodology. This has some IHTSDO secretarial support and is led by Dr Stan Huff of Intermountain Healthcare, a former HL7 Chairperson and co-founder of LOINC, who has been advocating a model-based approach for many years. The current approach at Intermountain has been influenced by openEHR and uses a two-level modelling approach. Stan has established a leadership group through trust and reputation, which includes a variety of agencies who have been working in the area and national eHealth programs or major initiatives who are interested in consuming the models. It has grown out of an HL7 Fresh Look initiative and is currently known as the Clinical Information Modelling Initiative (CIMI).

The group has committed to determining a single formalism for clinical modelling and ADL and openEHR are on the list of alternatives which is as follows:

  • Archetype Object Model/ADL 1.5 openEHR
  • CEN/ISO 13606 AOM ADL 1.4
  • UML 2.x + OCL + healthcare extensions
  • OWL 2.0 + healthcare profiles and extensions
  • MIF 2 + tools HL7 RIM – static model designer

Proponents of the five different approaches have been presenting to members of the group, who have a variety of experience in these matters. Fourteen organisations will cast a vote on the formalism to use including openEHR, Singapore, UK NHS, Results 4 Care, HL7, Canada Infoway, 13606 Association, Tolven, CDISC, GE/Intermountain, US Departments, CDISC, SMArt and Mitre.

At the preliminary vote, held recently on November 20, the two most popular options were openEHR ADL 1.5 and UML.

Today CIMI will vote on a proposal for either ADL 1.5 or UML to be adopted as the initial common formalism for use, and determine a road map for coordinated development of semantically interoperable clinical models into the future. The potential impact of this is huge and exciting. It could be a disruptive change in health IT.

We hold our collective breath!

Australia's PCEHR Challenge

Are you planning to participate when the Personally Controlled Electronic Health Record (PCEHR) goes live in July 2012?

I've certainly been pondering what would make me interested to try it, in the first instance, and then to keep using it in an ongoing manner – in my roles as both a Clinician and as a Consumer.

In my previous blog post we can see the PCEHR positioned part way between a fully independent Personal Health Record and the Clinician's Electronic Health Record – a hybrid product bridging both domains and requiring an approach that cleverly manages the shared responsibility and mixed governance model.

The scope of the proposed PCEHR is outlined in the Concept of Operations document. A recent presentation [PDF] given at the HIC2011 conference provides a summary of expected functionality.

Recently we've all seen many analyses of the demise of Google Health, and over the past decade or longer we've been able to watch the difficulties faced by the many Personal Health Records that have come and gone. What have we learned? What are the takeaway messages? How can we leverage this momentum in a way that is positive for both consumers and clinicians and potentially transform the delivery of healthcare?

Let's make some general assumptions:

  • Core functionality is likely to remain largely as described in the ConOps document;
  • Privacy, Security and Authorisation will be dealt with appropriately; and
  • Internet skills of users will be taken into account in terms of clever and intelligent design and workflow.

In a previous post, I suggested that there are 3 additional concepts that need serious consideration if we are to be successful in the development of person-centric health records, such as the PCEHR.

  • Health is personal
  • Health is social
  • Need for liquid data

I've been pondering further and have modified these slightly. I want to explore how we can make health data personally valuable, socially-connected and dynamic or liquid.

Adding value; making it personal

Take a look at any clinician's EHR and the actual data in itself is pretty static and uninteresting – proprietary data structures, HL7v2 messages, CDA documents; facts, evidence, assessments, plans; a sequence of temporal entries which could prove to be overwhelming when gathered over a lifetime. However it provides the necessary foundation on which innovative approaches could make that static data become dynamic - to be re-used, leveraged for other purposes, to flow!

Data starts to come to life when we use it creatively to add value or to personalise it, rather than just presenting the raw facts and figures or the interminable lists. The key is to identify the 'hot buttons' for any and all users, clinician or consumer alike – those things that provide some value or personalisation benefit that is not otherwise available.

For the clinician to be engaged, they need to be able to identify at least symmetrical, and preferably increased, value from any effort that they contribute to the creation and maintenance of a shared electronic health record (EHR). This might be recognised in many ways including, but not limited to:

  • Leveraging the data itself – tools to put the data to use
    • the creation of views of the data that will better support their provision of care to their patients, such as health summaries (both automatic and curated) and up-to-date Medication, Problem and Adverse Reaction lists;
    • access to clinical decision support;
    • improved safety from generation of alerts; and
    • Secure exchange of patient information.
  • Increased opportunity for engagement
    • online consultations;
    • care coordination; and
    • answering questions via email or a shared portal.
  • Generate increased income from
    • creation of reminders for follow-up visits
    • online interactions eg repeat prescriptions, eReferrals and online consultations
  • Data available for quality improvement and clinical audit
  • Data available for research & population health

Yet this is clearly not without its unique challenges - for example, JAHIMA's The Problems with Problem Lists! What is being proposed is not trivial at all!

Consider what will engage a patient to take more than a second look at their health record. Google Health creator Adam Bosworth, in the short excerpt from the TechCrunch video on why Google Health failed, stated that people don't just want some place to store their health information – they want something more. And Ross Koppel noted in his blog post, Google gave up on electronic personal health records, but we shouldn't, that "…while knowledge may be power, it isn't willpower." The challenge we face is how to add 'the something more'; to encourage the development of willpower.

Segmenting consumers is one approach to identifying 'hot buttons' that might trigger them to opt in and use the PCEHR:

  1. Well & Healthy – this is a tricky group to engage, as they often see no reason to address any aspect of their health and are not motivated to change behaviour or seek treatment. Some may be interested in preventive health or in tracking fitness or diet;
  2. Worried Well – those who consider themselves at risk of illness and want to ensure that they are (or will be) OK, often tracking aspects of their health to be alert for issues or problems – some call them hypochondriacs;
  3. Chronically Ill – those living with an ongoing health condition or disability; and
  4. Acutely Ill – including recent life-changing diagnosis or event

Each group will have specific needs that need to be explored to ensure that, beyond simply registering, there is a persuasive reason for them to return and actively engage with the PCEHR content or value-added features.

While the 'hot buttons' for consumers might vary, there are common principles that could underpin any data made available and value-added services provided, such that consumers can use, and make sense of, their health information. Examples include:

  • Minimal data entry required of the consumer – manual entry will be a considerable barrier to entry;
  • Aggregation of information from multiple sources - creating a hub of health information in one place;
  • Information is interpreted, wherever possible to make it personal and relevant to the consumer
    • ...this is what it means for you...
    • ...this is what you should consider next...;
  • Information provided in non-medical or non-technical language, using consumer vocabulary whenever possible;
  • Provision of context for the information – for example, information about the pathology test sitting alongside the actual pathology test result; and
  • Comparison of the consumer's data alongside data from equivalent peers, for example the same age and sex, where this is safe and appropriate.
  • Consideration of the 'Games for Health' movement and the effort to leverage 'gamification' in non-game applications as a mechanism for getting users (especially Consumers) to register, participate and most importantly, support their ongoing engagement.

Socially-connected data?

Hmmm. Traditionally we tend to gravitate to the idea of privacy at all costs when it comes to health information, however what we really aspire to is making the right information available to the right person at the right time – the safe and secure facilitation of health information communication. That this might extend into the non-clinical space in some circumstances is challenging and somewhat contentious, especially in these relatively early days, but it is conceivable that our traditional, rather paternalistic health paradigm is about to undergo a considerable shakeup.

The area of Telehealth is certainly starting to gain traction among healthcare providers as an electronic alternative to transporting patients long distances for specialist care. Similarly there appears to be rising interest in online consultations and interactions between consumers and their healthcare providers, including the re-ordering of prescriptions, the requesting of repeat referrals and the ability to ask questions via secure messaging. Clearly there are issues and constraints around these activities, but it is very likely that the incidence of consumer-clinician online interactions will increase in the near future as consumers recognise the convenience and demand rises.

A person-centric health record that allows shared access from both providers and consumers can act as the hub for these communications. In addition, integration with the Clinician's information system can support the ability for notification to the consumer of availability of diagnostic test results, reminders, tests due, prescriptions due etc.

The social support imperative in health programs such as Quit Smoking and Weight Loss has been well recognised for many years, and is a common strategy used in clinical practice. In recent times this is expanding beyond family and friends to the broader community via automated sharing of measurements to social networks such as Twitter. In addition we can consider some early wins in the non-medical sphere – for example, the online PatientsLikeMe community – where consumers are engaging with other consumers with similar issues and concerns. There is also some interesting research evolving about the value that is coming from consumer-to-consumer health advice – Managing the Personal Side of Health: How Patient Expertise Differs from Expertise of Clinicians. Contagion Health has developed the Get Up and Move program, where consumers issue challenges to each other to encourage increased activity and promote exercise, facilitated by an online website and Twitter.

A recent Pew Research Center report, Peer-to-peer Healthcare, stated: "If you enable an environment in which people can share, they will!" And Adam Bosworth made a clear statement that one reason Google Health failed was because it 'wasn't social' – that it was effectively an isolated silo of a consumer's health information. I suspect that we will see this social and connected aspect of healthcare evolve and become more prominent in coming years, as we understand the risks, benefits and potential.

Liquid or Dynamic Data?

One of the most interesting comments about the demise of Google Health was: "Google Health confronted a tower of healthcare information Babel. If health information could have been shared, Google would still be offering that service." That's a very strong statement – not only that the actual data itself was a significant issue, and more specifically, that it was not shareable.

So that requires us to ask how we can make data shareable. "Gimme my damn data" and "Let the data flow", some say - but the result is usually access to their own data in various proprietary formats but no opportunity to bring it together into one cohesive whole which can be used and leveraged to support their healthcare. I've blogged extensively already about the benefits of standardised, computable clinical content patterns known as archetypes – see What on earth is openEHR and Glide Path to Interoperability. To me this approach is simply common sense, and with the rising interest in development of Detailed Clinical Models, perhaps this view is coming closer to reality.

Until we create a lingua franca of health information we can't expect to be able to share health information accurately and unambiguously. We need agreed common clinical content definitions to support sharing of health data such that it can flow, be re-used and be used for additional purposes such as sharing information between applications – making the data dynamic.

As we work out how to approach this shared personal electronic record space, our options will be severely limited unless we have a coherent approach to the data structure and data entry. We need to:

  • Develop common data formats and rules for use that will support the flow of health information
  • Promote automatic entry and integration of data from multiple source systems – primary care EHR, Laboratory systems, government resources and, increasingly, consumer PHRs. Most will tolerate some manual entry of data if they receive some value in return, but don't underestimate the barrier to entry for both clinicians and consumers if they have to enter the same data in multiple places.
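As a purely hypothetical sketch of that automatic integration - the sources and field names are invented for illustration - entries that already share a common format can simply be merged into one hub record, with no re-keying:

```python
# Illustrative only: three source systems emitting entries in one agreed format.
from itertools import chain

gp_system = [
    {"source": "GP EHR", "date": "2012-02-01", "type": "medication", "name": "metformin"},
]
lab_system = [
    {"source": "Laboratory", "date": "2012-02-03", "type": "result", "name": "HbA1c", "value": 7.1},
]
consumer_phr = [
    {"source": "Consumer PHR", "date": "2012-02-05", "type": "measurement", "name": "weight", "value": 82.0},
]

def aggregate(*feeds):
    """Merge entries that already share a common format into one time-ordered hub record."""
    return sorted(chain(*feeds), key=lambda entry: entry["date"])

hub_record = aggregate(gp_system, lab_system, consumer_phr)
```

The hard part, of course, is not the merge but agreeing on the common format in the first place - which is exactly the role of shared clinical content definitions.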

To participate, or not?

The core content of the PCEHR as per the ConOps is very clinician-oriented at present, clearly intended for the ultimate benefit of the patient, but focused on information that will be useful to share where multiple clinicians are participating in care or for use in an emergency. The nominated Primary Care clinician will be tasked with the upload and maintenance of the Shared Health Summary. Event Summaries can be uploaded (hopefully this will be automated earlier rather than later). PBS, MBS, ACIR and Organ Donor information will be integrated from Medicare sources.

All of this data will no doubt be extremely useful in some circumstances, but in itself it may not be compelling enough to encourage frequent use by clinicians, nor provide them with the symmetrical value they will need if they are to maintain the Health Summary records. From an academic point of view the proposed ConOps is ticking all the correct boxes; however, the big question is: will that be enough to motivate clinicians to participate and then, more importantly, to stay involved?

Consumer engagement as described in the ConOps is fairly limited – the ability to manually enter 'key information' about allergies and adverse reactions or medications, the location of an advance care directive, and notes about health information that they wish to share. This is hardly enough to capture the imagination of many consumers. There is a real likelihood that those who do opt in may only use it once or twice, including manually entering some of their data, but find that there are no compelling reasons for them to return to their PCEHR record nor to encourage/demand that their Clinician participate.

We have an opportunity to turn the PCEHR into a dynamic person-centred health information hub – one that can be leveraged by clinicians and consumers for the benefit of the consumer's health and wellbeing in the most wholistic sense. In order to do so, the PCEHR needs to seriously consider prioritising an approach that ensures personalised and/or value-added data; health information that can be shared and communicated clinically and socially; and data that is dynamic and liquid.

At all costs we must avoid:

  • a dry, impersonal tool that is ignored or infrequently used by any user, or used by only one type, rather than both consumers and clinicians in partnership;
  • barriers to entry for both consumers and clinicians;
  • a closed, inward-looking tool;
  • a silo of isolated health information operating as yet another 'tower of healthcare information Babel'.

This is our challenge!

Anatomy of a Procedure

ACTION archetypes are one of the harder concepts to grasp in the openEHR clinical modelling world. ACTIONs appear to promise the world - allowing for the recording of all activities from planning, scheduling, postponing, cancelling and suspending, through to carrying out and completing an activity. While they may appear cumbersome at first, they are surprisingly elegant and fit for purpose... once you can twist your brain around them.

Consider ACTION archetypes as a walrus - awkward on land, but graceful in the water - well maybe the analogy is being stretched a little, but you get my drift.

Whatever you do, don't underestimate the power of these little clinical models. ACTIONs are designed to enable sophisticated tracking of activities being carried out in a distributed environment - perfect for managing care pathways between multiple providers in disparate locations - not an easy task.

Take a look at this slide show - my attempt to provide a concrete example of how a Procedure ACTION archetype will support documentation about how an INSTRUCTION or order for a Procedure (any procedure, potentially) is carried out in a shared electronic health record environment.

  • 'Pathway steps' are identified as the steps (not necessarily sequential) or clinical activities in which it makes sense to record some information in the health record.
  • The Data Elements that are identified in the 'Description' include any and all information that we may wish to record for any of the Pathway steps. For example, the data we need to record when we plan to perform a Procedure or the information required to record that the Procedure was performed or abandoned, including the reason for abandonment.
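To illustrate the idea in code (a toy sketch only, not the openEHR ACTION specification - the step names and fields are assumptions), the following fragment records pathway steps for a single procedure as they occur, each carrying whatever 'Description' data is relevant at that point:

```python
# Illustrative sketch: one shared record of a procedure accumulating pathway steps
# over time, potentially appended to by different providers in different locations.
from datetime import datetime

PATHWAY_STEPS = {"planned", "scheduled", "postponed", "cancelled",
                 "suspended", "performed", "abandoned"}

class ProcedureAction:
    def __init__(self, procedure_name: str):
        self.procedure_name = procedure_name
        self.steps: list[dict] = []  # time-ordered pathway entries

    def record(self, step: str, performer: str, **description):
        """Record a pathway step plus any description data elements relevant to that step."""
        if step not in PATHWAY_STEPS:
            raise ValueError(f"unknown pathway step: {step}")
        self.steps.append({"time": datetime.utcnow(), "step": step,
                           "performer": performer, **description})

# Different providers append to the same shared record as the order progresses:
colonoscopy = ProcedureAction("Colonoscopy")
colonoscopy.record("planned", "Dr A (GP)", reason="screening")
colonoscopy.record("scheduled", "Endoscopy clinic", date="2012-03-01")
colonoscopy.record("abandoned", "Dr B (proceduralist)", reason="inadequate bowel preparation")
```

Because every step, and not just the completed procedure, is recorded against the same order, the distributed care pathway can be reconstructed from the data alone.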

[slideshare id=8563247&doc=201107instructionsactions-110711064154-phpapp02]

 

Updated: August 17, 2011

Archetype Quality II

In my previous post, I proposed a high-level diagram representing the archetype development processes. Pondering on this process further, I am generally happy with the high-level steps identified, although the implementation step should really be outside the scope of determining quality criteria for the archetype itself; thus four identified processes remain:

  • Requirements gathering & analysis;
  • Design & build;
  • Collaboration & verification; and
  • Publication, maintenance & distribution.

The next logical steps are to:

  1. Identify all of the sub-processes involved, including the full set of activities required to create, produce, maintain and distribute the final archetype, as well as the people, materials, tools and procedures;
  2. Identify all of the points within these sub-processes at which we can define quality criteria; and
  3. Identify the individual quality indicators with which we can measure or assess each of the quality criteria.

In addition, I'm increasingly of the opinion that we should utilise some of the ideas outlined by Kalra et al to assist us to be systematic in our identification of the quality criteria and indicators. Kalra et al proposed high level quality criteria (or statements that demonstrate ideal practice) in the following categories:

  • Business Requirements
  • Clinical Requirements
  • Technical Requirements
  • Information Governance Requirements
  • Repository Requirements

I've modified these criteria categories to:

  • Design requirements/methodology - identification of quality criteria related to the design of an archetype where it impacts on any or all of these four processes. Examples include: inclusive design (a maximal data set for the universal use case as the ideal, except where over-inclusiveness makes the model impractical, confusing or unsafe); minimisation of overlap with other models; minimisation of duplication of data elements in multiple archetypes; direct terminology bindings and use of terminology reference sets; precision and detail; granularity; inclusion of other archetype fragments; mandatory vs optional elements; metadata; and authorship and references/evidence.
  • Business requirements - identification of quality criteria ensuring that the archetypes support the required activities for data storage, exchange, querying, comparative analysis, secondary use, and knowledge-based activities such as decision support for any or all of these four processes.
  • Stakeholder requirements - identification of quality criteria ensuring that stakeholder requirements are represented in the final archetypes. The term stakeholder is deliberately used here, rather than only referring to clinicians, because while the archetype methodology enables clinicians to actively participate, they are not the only domain experts who should be contributing to the model. The quality criteria will possibly be focused predominantly on the archetype content, but the archetypes will not only be used to support direct clinical care but also a range of other purposes - for example, to support distributed care, clinical decision support, data aggregation and reporting, comparative data analysis, population health planning etc. All potential stakeholders who will either use the data created by the archetypes or implement them should be included in the stakeholder category.
  • Technical requirements - identification of quality criteria ensuring that the archetypes are technically 'fit for purpose'. Examples include ensuring that they conform to the openEHR reference model specifications; represent existing standards specifications or data sets, as appropriate; and technical attributes such as data constraints, occurrence/cardinality and null flavors are represented accurately.
  • Information governance requirements (including repository activity) - identification of quality criteria to ensure strong governance of each archetype through its life-cycle and maturity within a repository, and processes to support the publication and distribution of meaningful sets of archetypes. Examples include archetype version management; identification of relationships, including dependencies, to other archetypes or knowledge artifacts such as terminology subsets; currency of the archetype; endorsement or certification by professional colleges or jurisdictions; implemented use in real systems; implementation guidance; identification of gaps/overlaps in coverage across sets of archetypes; design documentation; sample data; transformations; and overall quality and safety assessment.

[Note: I have specifically excluded establishing quality criteria for the Clinical Knowledge Repository itself - this should be a separate process which determines governance policy and process for a range of clinical knowledge artifacts within a repository application - including archetypes, templates, terminology subsets, transformation to other semantically equivalent artifacts and inclusion of models from other modeling approaches.]

Each of these categories of requirements will need to be considered in each of the four processes of archetype development. A candidate framework for consideration might therefore be represented in the following table:

Process | Design requirements/methodology | Business requirements | Stakeholder requirements | Technical requirements | Information governance requirements
Requirements gathering & analysis | √ | √ | X | |
Design & build | √ | √ | X | |
Collaboration & verification | √ | √ | √ | √ | √
Publication, maintenance & distribution | X | X | √ | √ | √

Quality criteria and the indicators that will be used to measure or assess them should be identified for each area in the table represented by a tick (√).

Some simple examples of criteria that can be applied to the 'Collaboration & Verification' process have been outlined in a previous post, demonstrating practical indicators to measure and assess criteria related to the collaborative review process, evidence basis and 'fitness for purpose'.

Reference: Kalra D, Tapuria A, Freriks G, Mennerat F, Devlies J (2008). Management and maintenance policies for EHR interoperability resources [36 pages] (Q-REC Project IST 027370 3.3). The European Commission: Brussels. (Last accessed May 28, 2011)

Quality indicators & the wisdom of crowds

Harnessing the 'wisdom of the crowd' has the potential to be more powerful than traditional quality processes for determining the quality of the clinical content in our EHRs - we need to learn how best to tap into that wisdom...

Archetype quality I

Up until recently, clinical content models such as archetypes have been regarded as a novelty: watched from the sidelines with interest by many, but not regarded as mainstream. However, now that they are increasingly being adopted by jurisdictions and used in real systems, modellers need to change their approach to include processes, methodologies and quality criteria that ensure the models are robust, credible and fit for purpose. There has been some work done on identifying quality criteria for clinical models, but there is no doubt that establishing quality criteria for clinical content models is still very much in its infancy:

  • There has been some slowly progressing work in ISO TC 215 - ISO 13972 Health Informatics: Detailed Clinical Models. Recently it has been split into two separate components, not yet publicly available:
    • Part I - Quality processes regarding detailed clinical model development, governance, publishing and maintenance; and
    • Part II - Quality attributes of detailed clinical models

Most of the work on quality of clinical models has been based largely on theory, with few groups having practical experience in developing and managing collections of clinical models, other than in local implementations.

In 2007, Ocean Informatics participated in a significant pilot project. The recommendations were published in the NHS CFH Pilot Study 2007 Summary Report. My own analysis, conducted in December 2007, revealed that there were 691 archetypes within the NHS repository. Of these, 570 were archetypes for unique clinical concepts, with the remainder reflecting multiple versions of the same concept. In fact, for 90 unique concepts there were 207 archetypes that needed rationalisation – most of these had only two versions; however, one archetype was represented in five versions! We needed better processes!

Towards the end of 2007 a small team within Ocean commenced building an online tool, the Clinical Knowledge Manager to:

  • function as a clinical knowledge repository for openEHR archetypes and templates and, later, terminology subsets;
  • manage the life-cycle of registered artefacts, especially the archetype content – from draft, through team review, to published, deprecated or rejected (sketched in the code below) – as well as terminology bindings and language translations; and
  • provide governance of the artefacts.
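To make that life-cycle concrete, here is a minimal sketch of it as a simple state machine. This is purely illustrative - the states and allowed transitions are my own simplification, not CKM's actual implementation:

```python
# A simplified sketch of the artefact life-cycle described above.
# The states and allowed transitions are illustrative only, NOT CKM's implementation.
from enum import Enum


class LifecycleState(Enum):
    DRAFT = "draft"
    TEAM_REVIEW = "team review"
    PUBLISHED = "published"
    DEPRECATED = "deprecated"
    REJECTED = "rejected"


# Illustrative transitions between states.
TRANSITIONS = {
    LifecycleState.DRAFT: {LifecycleState.TEAM_REVIEW, LifecycleState.REJECTED},
    LifecycleState.TEAM_REVIEW: {LifecycleState.DRAFT, LifecycleState.PUBLISHED,
                                 LifecycleState.REJECTED},
    LifecycleState.PUBLISHED: {LifecycleState.DEPRECATED},
    LifecycleState.DEPRECATED: set(),
    LifecycleState.REJECTED: set(),
}


def move(current: LifecycleState, target: LifecycleState) -> LifecycleState:
    """Return the new state, or raise if the transition is not allowed."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Cannot move from {current.value} to {target.value}")
    return target


# An archetype going from draft, through team review, to published.
state = LifecycleState.DRAFT
state = move(state, LifecycleState.TEAM_REVIEW)
state = move(state, LifecycleState.PUBLISHED)
print(state.value)  # -> published
```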

In July 2008 we started uploading archetypes to the openEHR CKM, including many of the best from the NHS pilot project. Over the following months we added archetypes and templates, recruited users, and started archetype reviews. All activity was voluntary – both from reviewers and editors. Progress has thus been slower than we would have liked, and somewhat episodic, but it provided early evidence that a transparent, crowd-sourced verification of archetypes was achievable.

In early 2010, Sweden's Clinical Knowledge Manager had its first archetypes uploaded.

In November 2010, a NEHTA instance of the CKM was launched, supporting Australia's development of Detailed Clinical Models for the national eHealth priorities. This is where most collaborative activity is occurring internationally at present.

In this context, I have pondered the issues around clinical knowledge governance for a number of years, and gradually our team has developed considerable insight into the requirements, the solutions and the thorny issues. To be perfectly honest, the more we delve into knowledge governance, the more complicated we realise it to be – the challenge and the journey continue; a lot is yet to be solved :)

It is relatively easy to identify the high-level processes in the development of clinical knowledge artefacts, each of which requires identification of quality criteria and measurable indicators to ensure that the final artefacts are fit for purpose and safe to use in our EHR systems. The process is similar for both archetypes and templates, and the Requirements gathering & Analysis components are equally applicable to any single overarching project.

For archetypes:

The harder task is that, for each of these steps, there are multiple quality criteria to be determined, and each criterion needs to be able to be assessed and/or measured through identifiable quality indicators.

Ideally a quality indicator is a measurement or fact about the clinical model. In some situations it will be necessary to include additional assessments manually performed by qualified experts.

If an indicator can be automatically derived from the Clinical Knowledge Manager (CKM), up-to-date assessments of the models are instantly available as the models evolve (such as this Blood Pressure archetype example) and, more importantly, without reliance on manual human intervention. Assessments that do need to be made by a human expert – for example, compliance with existing specifications or standards – add valuable depth and richness to the overall quality assessment, but they also add a vulnerability: skilled human resources are needed not only to conduct the assessment but to apply consistent methodologies while doing so, and this will be much more difficult to sustain.
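As a toy illustration of what an automatically derived indicator could look like, the sketch below computes a few objective measures from review metadata. The field names are hypothetical - CKM does not expose exactly this structure - but the point is that these numbers can be recomputed at any moment, with no human in the loop:

```python
# A toy, automatically derived set of quality indicators. The field names are
# hypothetical - CKM does not expose exactly this structure - but each value
# can be recalculated at any time as the model evolves.
from dataclasses import dataclass


@dataclass
class ReviewMetadata:
    review_rounds_completed: int
    distinct_reviewers: int
    elements_total: int
    elements_with_descriptions: int


def indicators(meta: ReviewMetadata) -> dict:
    """Derive simple, objective indicators from review metadata."""
    return {
        "review_rounds_completed": meta.review_rounds_completed,
        "distinct_reviewers": meta.distinct_reviewers,
        "documentation_coverage": meta.elements_with_descriptions / meta.elements_total,
    }


# e.g. an archetype that has had four review rounds with 30 distinct reviewers
print(indicators(ReviewMetadata(4, 30, 25, 25)))
```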

Assessment of whether the indicators actually satisfy the quality criteria should also, ideally, be as objective as possible; in reality it will more often be subjective and will vary depending on the nature of the archetype concept itself. The process cannot be fully automated, nor can there be a single set of indicators or criteria that will determine the quality of every archetype. We need to ensure appropriate oversight of archetype development – confirming that a quality process has actually been followed – and use the quality indicators to determine whether the quality criteria have been met, on an archetype-by-archetype basis.

And it continues on...

Clinical modeling: academic vs practical

In a comment on the previous DCM post, Gordon Tomes sent a link to an interesting 2009 academic paper by Scheuermann, Ceusters & Smith - Toward an Ontological Treatment of Disease and Diagnosis [PDF]. There is a place for this 'pure', academic approach as a means to seek clarity in definitions, but a telling sentence in the paper for me is: "Thus we do not claim that ‘disease’ as here defined denotes what clinicians in every case refer to when they use the term ‘disease’. Rather, our definitions are designed to make clear that such clinical use is often ambiguous."

This is totally right, but despite the apparent ambiguity clinicians still manage :)

The approach for archetypes/DCMs is not to get tied up in these definitional knots unnecessarily, but to concentrate on consensus building around the structure of the data. Ontologies and definitions are key inputs when designing and modelling archetypes, but the resulting models should be based not on academic theory but on existing clinical practice and processes. Our EHRs need to represent what clinicians actually need to do in order to provide care. Sometimes we need to be practical and pragmatic. For example, I once challenged a dissenter to build a Blood Pressure archetype purely from SNOMED codes - the result was a nonsense, a model that he agreed was not usable by clinicians.

The approach to building the Problem/Diagnosis archetype has not been to try to differentiate these two terms pedantically. Many have tried to separate a 'Problem' from a 'Diagnosis' for years, with little success - there is no point waiting for this to be resolved, because it probably won't be. And even if we did make a theoretical decision on a definition for each, clinicians would still likely classify the way they always have, not the way the academics or standards-makers would like them to!

In the collaborative CKM reviews of the NEHTA Problem/Diagnosis archetype we have observed clinicians and other stakeholders achieve some consensus around the structured data required to represent a problem, plus some extra optional data elements to represent a formal diagnosis. After only four review rounds, the discussion is now focused on finessing the metadata and descriptions, not on debating the structure of the model.

Clinicians may still go round in circles arguing about what constitutes an actual problem vs a diagnosis - for example, some may classify heartburn as a problem, others as a diagnosis. No matter. By using a single model to represent both, we can ensure that however heartburn is labelled by an individual clinician, we can easily query and find the data in a health record; in addition, there is a single common model to be referenced by clinical decision support engines.
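For instance, this is roughly what such a query could look like using openEHR's Archetype Query Language, wrapped in a little Python. The server URL, archetype version and element paths are illustrative only - check them against your own system and the published archetype before relying on them:

```python
# A rough sketch of querying for every Problem/Diagnosis entry, however it was
# labelled by the clinician. Server URL, archetype version and element paths
# are illustrative - verify them against your own openEHR server before use.
import requests

AQL = """
SELECT e/data[at0001]/items[at0002]/value/value AS problem_or_diagnosis_name
FROM EHR ehr
CONTAINS EVALUATION e [openEHR-EHR-EVALUATION.problem_diagnosis.v1]
"""

response = requests.post(
    "https://openehr.example.org/rest/openehr/v1/query/aql",  # hypothetical endpoint
    json={"q": AQL},
    timeout=30,
)
response.raise_for_status()
for row in response.json().get("rows", []):
    print(row)
```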

A small success, maybe.

openEHR: interoperability or systems?

Thomas Beale (CTO of Ocean Informatics and chair of the Architecture Review Board of the openEHR Foundation) posted these two paragraphs as part of the background for his recent Woland's Cat post - The Null Flavour debate - part I. It is an important statement that I don't want to get lost amongst other discussion, so I've reposted it here:

An initial comment I will make is that there is a notion that openEHR is ‘about defining systems’ whereas HL7 ‘is about interoperability’. This is incorrect. openEHR is primarily about solving the information interoperability problem in health, and it addresses all information, regardless of whether it is inside a system or in a message. (It does define some reference model semantics specific to the notion of ‘storing information’, mainly around versioning and auditing, but this has nothing to do with the main interoperability emphasis.)

To see that openEHR is about generalised interoperability, all that is needed is to consider a lab archetype such as Lipid studies in the Clinical Knowledge Manager. This archetype defines a possible structure of a Lipids lab test result, in terms of basic information model primitives (Entry, History, Cluster, Element etc). In the openEHR approach, we use this same model as the formal definition of this kind of information, whether it is in a message travelling ‘between systems’, in a database or on the screen within a ‘system’. This is one of the great benefits of openEHR: messages are not done differently from everything else. Neither is interoperability of data in messages between systems different from that of data between applications or other parts of a ‘system’.

Engaging Clinicians in Clinical Content (in Sarajevo)

Just browsing, I found the link to our presentation for a paper the CKM team gave at the Medical Informatics Europe conference in Bosnia in 2009. Thought I'd share it here, in memory of an amazing conference and location:

MIE09: Engaging Clinicians in Clinical Content [PDF]

Our presentation:

[slideshare id=1958920&doc=engagingcliniciansinclinicalcontent-090906091020-phpapp02]

And the reference to Herding Cats is explained (a little) in the embedded video - actually an advertisement for EDS:

[youtube http://www.youtube.com/watch?v=Pk7yqlTMvp8&w=425&h=349]

I arrived after a 37-hour flight - Melbourne - London - Vienna - Sarajevo - to join my Ocean CKM team, Ian McNicoll and Sebastian Garde. We presented our paper and also ran an openEHR workshop.

The conference was held at the Holiday Inn in Sarajevo - the hotel in which all the international journalists were holed up during the war, on 'Sniper Alley'.

Despite it being 14 years after the war had ended, raw emotions were palpable many times during the conference. That a pan-European conference was being held in his beloved Sarajevo, under his auspices, was overwhelming for the Professor of Informatics, and he was often seen in tears!

From my hotel room window in the centre of the city I could see 6 cemeteries.

Walking through the inner city we came across many cemeteries - thousands of marble gravestones, the majority for young men aged between 18 and 26.

Bullet holes were still very obvious on the walls of buildings.

Yet the city was vibrant, alive and welcoming.

I won't ever forget this visit.

Some photos of the experience:

Anatomy of a Problem... a Diagnosis...

Problem or Diagnosis? Does it really matter?

Representation of the concept of a Problem or Diagnosis is a cornerstone of an electronic health record. Yet to date it seems to have been harder than expected to achieve consensus. Why is this so?

Anatomy of an Adverse Reaction

Discussion about how to represent some of the commonest clinical concepts in an electronic health record or message has been raging for years. There has been no clear consensus. One of those tricky ones - so ubiquitous that everyone wants to have an opinion - is how to create a computable specification for an adverse reaction. In the past few years I have been involved in much research and many discussions with colleagues around the world about how to represent an adverse reaction in a detailed clinical model. I'd like to share some of these learnings...

Making sense of 'Severity'

'Severity' is a pretty simple clinical concept, isn't it? I thought so too, until I first sat down to create a single, re-usable archetype to represent 'Severity'. I soon discovered that I had significantly underestimated it - the challenge was greater than it appeared at first glance.

The archetype development process is usually relatively straightforward - a brain dump of all related information into a mind map, followed by rearranging the mind map until sensible patterns emerge. They usually do, but not this time.

However, the more I investigated and researched, the more I could see that clinicians express severity in a variety of ways, sometimes mixing and 'munging' various concepts together in a way that humans can easily make sense of, but that is much harder to represent in a form that is both clinically sensible and computable.

The simple trio of 'Mild', 'Moderate' and 'Severe' is commonly used as a gradation to express the severity of a condition, injury or adverse event. Occasionally these terms are defined for a given condition or injury; most often, however, they are subjective, and we should be concerned that one clinician's 'mild' might be another's 'moderate'. Some also include intermediate terms, such as 'Mild-to-moderate' and 'Moderate-to-severe', but these subtler distinctions arguably make the judgement even less reliable, especially if we need to exchange information between systems.

Others add 'Trivial' – is this less than 'Mild'? By how much? Is there a true gradation here? And what of 'Extreme'? Still others add 'Life-threatening' and 'Death' – but are these really expressing severity? Or are they expressing clinical impact or even an outcome?

When exploring severity I found examples of severity expressed by clinicians in many ways. This list is by no means exhaustive:

  • Pain – expressed as intensity, often including a visual analogue scale as a means to record it, i.e. rated from 0-10
  • Burns – describing percentage of body involved e.g. >80% burns
  • Burns – describing the depth of burn by degree e.g. first degree
  • Perineal tears – describe the extent and damage by degree – e.g. second degree tear
  • Facial tic – expressed using frequency e.g. occasional through to unrelenting
  • Cancer – expressed as a grade, clinical or pathological
  • Rash – extent or percentage of a limb or torso covered.
  • Minor or major
  • Mild, disabling, life-threatening
  • Relative severity – better, worse etc
  • Functional impact – ordinary activity, slight limitation, marked limitation, inability to perform any physical activity
  • Short or long term persistence
  • Acute or chronic

In practice, clinicians can work interchangeably with any of these expressions and make reasonable clinical sense of them. The challenge when creating archetypes, and other computable models, is to ensure that clinicians can express what they need in a health record, exchange that information safely with other clinicians, and allow for knowledge-based activities such as decision support.

There also needs to be a clear distinction between 'severity' and other, related qualifiers that provide additional context about the 'seriousness' of the condition, injury or adverse reaction. These sometimes get thrown into the mix as well, including:

  • Clinical impact (or significance);
  • Clinical treatment required; and
  • Outcomes.

Clinical impact/significance might include:

  • None - No clinical effect observed
  • Insignificant - Little noticeable clinical effect observed.
  • Significant - Obvious clinical effect observed
  • Life-threatening - Life-threatening effect observed
  • Death – Individual died.

Immediate clinical treatment required might include:

  • No treatment
  • Required clinician consultation
  • Required hospitalisation
  • Required Intensive Care Unit

Outcomes might include:

  • Recovered/resolved
  • Recovering/resolving
  • Not recovered/resolved
  • Fatal

Then there are those who recovered, or whose condition resolved, but with other sequelae - such as congenital anomalies in an unborn child...

Severity ain't simple.

Over time, having returned to it repeatedly while modelling various archetypes, including Adverse Reaction and Problem/Diagnosis, I've come to the conclusion that it is not useful or helpful to model 'severity' as a single, re-usable archetype. It works better as a single qualifier element within each archetype – we can then bind it to terminology value sets that are appropriate for that specific archetype concept and for the use-case context.
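A rough sketch of what that looks like in practice: one 'Severity' element per archetype, each bound to its own value set. The archetype ids and value sets below are illustrative only, not published bindings:

```python
# 'Severity' as a single qualifier element within each archetype, bound to a
# value set chosen for that concept and use case. Archetype ids and value sets
# are illustrative only, not published terminology bindings.
SEVERITY_BINDINGS = {
    "openEHR-EHR-EVALUATION.problem_diagnosis.v1": ["Mild", "Moderate", "Severe"],
    "openEHR-EHR-EVALUATION.adverse_reaction.v1": ["Mild", "Moderate", "Severe", "Life-threatening"],
    "local.perineal_tear.v0": ["First degree", "Second degree", "Third degree", "Fourth degree"],
}


def allowed_severity(archetype_id: str) -> list[str]:
    """Return the value set bound to the Severity element for this archetype."""
    return SEVERITY_BINDINGS.get(archetype_id, ["Mild", "Moderate", "Severe"])


print(allowed_severity("openEHR-EHR-EVALUATION.adverse_reaction.v1"))
```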

When a clinical knowledge pattern is easily identified, creating an archetype is easy - the archetype almost writes itself! So over time I've learned not to try to force the modelling. Some archetypes 'work'; some, like this one, just don't.

Is an incremental approach to EHRs enough?

Attending the International Council meetings on the first day of the recent Sydney HL7 meeting, I heard a resounding theme: many countries focused on messages, documents and terminology as the solution for our health IT future, our EHRs, and health interoperability. 19 countries presented and 18 repeated this theme. The nineteenth HL7 affiliate, the Netherlands, stood out in splendid isolation by declaring an additional focus on the development of clinical content.

Incremental EHRs?

This common message/document/terminology approach to EHRs exists largely for historical reasons - safe, solid, incremental innovation, building on what has proven successful before. It is not unique to health IT. It is not unique to HL7 either - it was just very obvious on the day.

In fact you can see this same phenomenon in many places in everyday life - the classic example being the manufacturing of disposable shavers. First a one-blade razor, then two blades, then three... But how long can this go on for? Is seven blades a reasonable thing? Eight? Will infinitely more blades make a difference to the shaving outcome or experience? When will blade makers stop and rethink their approach to innovation?

Certainly messages, documents and terminology are approaches that often commenced as relatively isolated islands of work, had some successes, progressed, and are now gradually being drawn together to create opportunities for some interoperability of health information. And don't get me wrong, there are absolutely some successes occurring. However, three questions linger in my mind about this approach:

  1. Is it enough?
  2. Is it sustainable?
  3. Will it achieve interoperable systems?

We all hear the rhetoric that if we can interconnect financial systems, then we can do it for health. We've tried this incremental approach for over 30 years and made significant progress but we definitely haven't cracked it yet.

Messages

Paul Roemer used a diagram in his blog post to illustrate his view of messaging exchanges. I think the image is powerful, conveying both the complexity and the chaos that an n-to-n messaging dependency will likely entail. The HIE approach might help, but ultimately a national approach to messaging will result in multiple images like this trying to connect to each other.
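The arithmetic behind that chaos is worth spelling out. A back-of-the-envelope calculation, assuming every pair of systems needs its own negotiated point-to-point interface versus every system mapping once to a shared set of content definitions:

```python
# With n systems exchanging point-to-point messages, the number of distinct
# interfaces grows roughly quadratically; with a single shared set of content
# definitions, each system only needs one mapping.
def point_to_point(n: int) -> int:
    return n * (n - 1) // 2   # every pair needs its own negotiated interface


def shared_model(n: int) -> int:
    return n                  # each system maps once to the common definitions


for n in (5, 20, 100):
    print(f"{n} systems: {point_to_point(n)} point-to-point interfaces vs {shared_model(n)} mappings")
# 5 systems: 10 vs 5; 20 systems: 190 vs 20; 100 systems: 4950 vs 100
```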

This is further complicated by messages often taking up to a year to negotiate between sender and receiver, or potentially longer to pass through national or international standards processes; and even then the resulting 'standard' is often subjected to individual tweaks and updates when implemented in real-world systems, wherever the 'one-size-fits-all' message doesn't meet the end-user's requirements.

In Australia the 'standard' pathology HL7 v2.x message is anything but standard, with each clinical system vendor having to support multiple, slightly different versions of the 'standard'. The big question is whether this is sustainable into the future.

In the US, the clinical content payload for the Direct project is yet to be determined and defined - "The project focuses on the technical standards and services necessary to securely transport content from point A to point B, and does not specify the actual content exchanged." This rings alarm bells for me.

I've been told that at one relatively large hospital system in the US, they require 30 permanent staff just to maintain their messages...! Whoa!!!

Messages and message networks try to be a simple incremental solution to interoperability. But are they really that simple? Are they sustainable? Locally, yes. Regionally, yes probably. Nationally? Internationally???

<My head hurts>

Documents

CDA documents featured prominently on the HL7 agenda, with the current emphasis on simple documents and the welcome prospect of increasingly structured clinical content over time. Currently the content is defined by HL7 v3 templates, but how will that content be standardised so that it supports semantic interoperability?

Werner Ceusters describes semantic interoperability as:

Two information systems are semantically interoperable if and only if each can carry out the tasks for which it was designed using data and information taken from the other as seamlessly as using its own data and information.

Without working towards standardising clinical content we run the risk of our EHRs being little more than a filing cabinet full of unstructured documents that are human readable but not computer processable.

Terminology

We all need terminology, no matter which approach. That is an absolute given.

However the stand-out question that impacts all approaches is how to best 'harness' the terminology...

Interestingly the openEHR Board and the IHTSDO Management Board (SNOMED CT) have been talking about joining forces. An announcement on the openEHR site, in December 2010, states:

"At its October meeting in Toronto, the General Assembly of the IHTSDO received and discussed a proposal, submitted by its Management Board, to support, develop and maintain the IP in openEHR, within a broader framework of IHTSDO governance for clinical content of the electronic health record.

The openEHR Foundation Board has now heard from the IHTSDO Management Board, saying that, whilst the objective of the proposal was considered by the GA to be within the scope of the organisation and that it represented a pressing issue for their governments, it was unable to reach consensus that going forward with openEHR in this way is the right choice, at this time."

This suggests that the IHTSDO Management Board identified significant value in combining the structure of openEHR archetypes with SNOMED, enough to propose a stronger relationship. It will be interesting to see if these discussions continue to progress.

My view - this would be a game-changer. A common approach to using archetypes and SNOMED together is a potentially very powerful semantic combination.

An orthogonal approach - knowledge-driven EHRs

By contrast, the orthogonal approach taken by ISO 13606 and openEHR starts with computable, standardised clinical data definitions at the core - representing the clinical content within an electronic health record. The data definitions, comprising archetypes plus terminology, are the key. Messages and documents are derived from these content specifications and are therefore internally consistent with the EHR data from which they were generated; if received into an archetype-enabled system, they can be incorporated - in the sense of Werner's definition above - as though they had been generated in the receiving system, i.e. semantic interoperability, with no data transformation required. The openEHR communities of clinicians, informaticians and engineers are collaborating to agree and standardise these archetypes as a common starting point.

Working from standard content definitions will potentially make the development of documents and messages orders of magnitude simpler. If archetypes have been agreed and published via the CKM collaborative process, then engineers will be able to utilise these as building blocks for the creation of messages and documents for specific use cases, and for a multitude of other technical outputs.
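As a sketch of the 'building blocks' idea - a document template assembled by referencing already-published archetypes rather than redefining their content (all identifiers below are illustrative):

```python
# A document template assembled from already-published archetypes rather than
# redefining their content. All identifiers are illustrative only.
DISCHARGE_SUMMARY_TEMPLATE = {
    "template_id": "example.discharge_summary.v0",  # hypothetical template id
    "sections": {
        "Problems and diagnoses": ["openEHR-EHR-EVALUATION.problem_diagnosis.v1"],
        "Adverse reactions": ["openEHR-EHR-EVALUATION.adverse_reaction.v1"],
        "Medications": ["openEHR-EHR-INSTRUCTION.medication_order.v1"],
    },
}

# A message or document derived from this template carries the same structures
# as the EHR it came from, so no transformation step is needed on receipt.
for section, archetypes in DISCHARGE_SUMMARY_TEMPLATE["sections"].items():
    print(section, "->", ", ".join(archetypes))
```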

The way forward?

Returning to the multi-blade EHR idea...

When will we stop, regroup and assess the merit of continuing as we are? Who will draw a line in the sand?

Difficult to answer.

Maybe never, maybe no-one.

openEHR/ISO 13606 may not be the right or final answer, but it does provide an alternative and orthogonal approach that has merit and is worth consideration.

Hopefully some of the outcomes and proposed discussions from the recent HL7 meeting in Sydney will also contribute to clarifying a way forward.

Perhaps, even yet, we will devise a truly innovative approach to solving the difficulties of developing EHRs.