Everyone is talking big data, but...
Digital health needs "little data"
Interoperability has been the 'holy grail' of healthcare for more than 3 decades. It is fiendishly complicated. We hear a lot of buzzwords and hype in the eHealth environment. Blockchain is the emerging 'cool kid on the block', FHIR is burning brightly, CDA documents not so much, APIs...
Technology is constantly evolving. Yet the practical consequences are huge when we have to migrate our invaluable health data to the next technology platform, application or operating system. We need a means to ensure that the data is preserved accurately and in context, immune to the cycles, fads and fashions of technology.
While the solution to health data interoperability is admittedly multifactorial, the largest elephant in the room is the "little data". In the world of standards we hear almost nothing about a strategy to manage the atomic health data.
We need a strategy for atomic data that:
- is independent of any vendor, clinical system or technology paradigm
- allows us to capture and retrieve health data in a consistent way to facilitate direct provision of healthcare, data exchange, aggregation and analysis for research, population health and reporting;
- can underpin sharing of a complete electronic health record, or data extracts between clinical systems, independent of the message or document wrapper;
- supports re-use of data across the lifelong health record, data extracts and summaries, research, population health and PHRs;
- underpins critical clinical decisions in personalised and precision medicine;
- enables comparing 'apples with apples' in data aggregation for accurate analysis; and
- drives knowledge-enabled activities such as data queries and clinical decision support.
Messages, documents, APIs, "big data" - none of these alone will solve these issues.
We need to work smarter
We need to move beyond our focus on short-term wins through incremental innovation. For our eHealth environment to thrive and interoperability to become real, we need to start thinking orthogonally, in stark contrast to the way we have operated for the past 3 decades.
We need:
- open data platforms and health data, based on open standards and independent of any single software vendor or data silo;
- health information models that are designed:
  - for lifelong use;
  - to be shared and re-usable across all clinical scenarios and use cases;
  - to be combined optimally and coherently with medical terminologies to express and define our health data requirements unambiguously;
- clinicians, domain experts, terminologists, researchers and software engineers to assert that each information model and value set is fit for use, including in combination with each other; and
- strong knowledge governance and tooling to manage the ecosystem of health data assets.
To achieve any reasonable degree of real health data interoperability, we need a rational and coordinated approach to developing application-agnostic information models, such as openEHR archetypes or equivalents, that combine strategically with authoritative health terminologies and classifications, such as SNOMED CT, LOINC or local term sets.
'Beautiful' health data is...
- open
- free
- reusable
- unambiguous
- lifelong
- coordinated & collaborative
- agreed by consensus
- strongly governed
The openEHR methodology has been specifically designed for this purpose.
The openEHR approach
Requirements gathering
- Collaborative approach to requirements gathering, led by clinician informaticians who bridge between clinicians/domain experts and technicians.
Information model development
- Creation of openEHR archetypes to describe a maximal data set for a single clinical concept and universal use case.
- Creation of data sets as openEHR templates, aggregating multiple archetypes to represent a message or document, further constrained to reveal only relevant data points for a specific clinical scenario.
- Terminologies such as SNOMED CT, LOINC or local term sets are incorporated into archetypes when the codes are ubiquitous, and into templates when the codes are use-case specific or represent local needs, as sketched below.
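To make this layering concrete, here is a minimal, purely illustrative sketch in Python - not openEHR tooling, ADL or a real archetype definition; the class names, fields and the triage template are invented for this post. It shows an archetype defining a maximal data set with ubiquitous terminology bindings, and a template constraining it to the data points relevant to one clinical scenario.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class TermBinding:
    """A code from an external terminology bound to a data element."""
    terminology: str   # e.g. "SNOMED CT" or "LOINC"
    code: str
    label: str


@dataclass
class Element:
    """One atomic data point defined by an archetype."""
    name: str
    binding: Optional[TermBinding] = None   # ubiquitous codes sit here, in the archetype


@dataclass
class Archetype:
    """Maximal data set for a single clinical concept and universal use case."""
    concept: str
    elements: list[Element] = field(default_factory=list)


@dataclass
class Template:
    """Use-case-specific data set: aggregates archetypes and exposes only
    the data points relevant to a specific clinical scenario."""
    name: str
    archetypes: list[Archetype]
    visible_elements: set[str]   # the template's constraint on the maximal data set
    local_bindings: dict[str, TermBinding] = field(default_factory=dict)  # use-case or local codes

    def exposed(self) -> list[Element]:
        return [e for a in self.archetypes for e in a.elements
                if e.name in self.visible_elements]


# A blood pressure archetype (maximal data set) constrained by a hypothetical triage template.
bp = Archetype(
    concept="Blood pressure",
    elements=[
        Element("Systolic", TermBinding("LOINC", "8480-6", "Systolic blood pressure")),
        Element("Diastolic", TermBinding("LOINC", "8462-4", "Diastolic blood pressure")),
        Element("Cuff size"),   # part of the maximal data set, not needed at triage
    ],
)

triage = Template(
    name="Emergency triage vital signs",
    archetypes=[bp],
    visible_elements={"Systolic", "Diastolic"},   # "Cuff size" is hidden for this scenario
)

for element in triage.exposed():
    print(element.name, element.binding.code if element.binding else None)
```

In real openEHR practice this layering is expressed in ADL archetypes and operational templates managed through governed tooling, not in application code; the sketch only illustrates where ubiquitous versus use-case-specific terminology bindings sit.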
Clinical Knowledge Manager & Community
- Public library of information models
- Collaboration portal to enable web 2.0 participation
- Verification by consensus that archetypes and templates are fit for use in our eHealth systems, inclusive of all relevant professions, specialities, health domains, consumers, academics and researchers, and geographical locations.
- Nearly 2000 registered users from 89 countries
- Robust knowledge governance processes for artefact management:
  - versioning, provenance and audit trails
  - publication lifecycle
  - distribution
- Transparency and accountability
To find out more, contact Atomica.