Health information model

This article describes the approach taken to producing information models, including what they are, what their purpose is, and what the technical components of the models are.

The article does not include the content of any particular model.


== What is the health information model (IM) and what is its purpose? ==
The IM is a representation of the meaning and structure of data held in the electronic records of the health and social care sector, together with libraries of queries, value sets, concept sets, data set definitions and mappings.


The main purpose is to bridge the chasm that exists between highly technical digital representations and plain language, so that when questions are asked of data a lay person can use plain language without prior knowledge of the underlying models.


It is designed to help systems and informaticians make sense of the chaos of data from different systems. The model contents are accessed either by APIs or via an open source application, the Information Model Manager.

It is a computable, abstract, logical model, not a physical structure or schema. "Computable" means that operational software operates directly from the model artefacts, as opposed to using the model for illustration purposes. As a logical model it models data that may be physically held in any of a variety of data stores, including relational or graph stores. Because the model is independent of the physical schemas, the model itself has to be interoperable and free of proprietary lock-in.


In order for machines to understand the content and structure, and to enable interoperability both within healthcare and with other sectors, it uses a set of international standard languages that form the languages of the semantic web.

The IM is a broad model that integrates a set of different approaches to modelling using a common ontology. The components of the model are:


# A set of ontologies: a vocabulary and definitions of the concepts used in healthcare, or more simply put, a vocabulary of health. The ontology is made up of the world's leading ontology, Snomed-CT, with a London extension, together with various code based taxonomies (e.g. ICD10, Read, supplier codes and local codes).
# A common data model: a set of classes and properties, using the vocabulary, that represent the data and relationships as published by live systems. Note that this data model is NOT a standard model but a collated set of entities and relationships, bound to the concepts and based on real data, that are mapped to a common model.
# A library of business specific concept value sets (aka reference sets), which are expression constraints on the ontology for the purpose of query.
# A catalogue of reference data, such as geographical areas, organisations and people, derived and updated from public resources.
# A library of data set (query) definitions for querying and extracting instance data from the information model, reference data, or health records.
# A set of maps creating mappings between published concepts and the core ontology, as well as structural mappings between submitted data and the data model.
# An open source set of utilities that can be used to browse, search, or maintain the model.


The IM is not a single information model in the conventional sense, nor is it a single data model. Whilst there is a single data model that encompasses the data, the expectation is that there is a need for as many data models as there are business requirements. However, because the model uses a common vocabulary semantically defined using ontological techniques, computers that understand the vocabulary can interoperate even when using different data models. This is the basis of the semantic web. Unlike human language, machine based languages must use logical constructs, and once the means of using logic is understood the computers can use that logic to process the data.
<br />
 
The model does not own its ontology. Instead it absorbs the best ontologies and supplements them with additional content not yet defined. In using the London extension of Snomed-CT, it can generate new concepts and expressions that can be shared across the NHS. Only those concepts that are necessary are created, in order to prevent loss of detail.
 
The remainder of this article considers how models and ontologies can be constructed using this approach.
 
== Visualisation ==
[[File:Graph.jpg|thumb|Types of data as a graph]]
 
The data stored in a health record can be visualised as a set of relationships between one thing and many others.
 
Some people call this a graph. Others call these objects, properties and values. From a grammatical language perspective they are subjects, predicates and objects.
 
The example on the right is entirely arbitrary but illustrates a problem. What does "condition record" mean, or indeed what is a "condition"? Why is a patient linked to a person and what does "linked to" mean? 
 
The answer is that the "terms" or "concepts" used in a model should be derived from a vocabulary whose terms have meaning and are formally defined. Some terms have meaning in whatever context they are used, whereas others have different meanings in different contexts. In defining terms, it is necessary to define them precisely enough for a computer to interpret the meaning safely, i.e. the context of an idea is part of the idea itself.
 
The most difficult challenge is to agree the definition and meaning of the concepts in the context they are used.  The agreement as to a particular model is less important. A definition defines a concept in relation to other concepts. Within a domain of interest such as healthcare, all concepts are indirectly related in some way to all other concepts in that domain. 
 
Luckily, standards have evolved to enable machine readable definitions.
 
The crucial step in the Discovery approach is to apply this principle both to the things that are being recorded (such as clinical concepts) and to the structure of entries in records themselves.
 
== Semantic Web ==
 
The semantic web approach is adopted for identifiers and grammar. In this approach, data is described using a plain language grammar consisting of a subject, a predicate and an object: a triple, optionally with an additional context referred to as a graph or RDF data set. The theory is that all health data can be described in this way (with predicates being extended to include functions).
 
The consequence of this approach is that W3C web standards such as the [[wikipedia:Resource_Description_Framework|Resource Description Framework]] (RDF) can be used. This sees the world as a set of triples (subject/predicate/object), with some things named and some things anonymous. Systems that adopt this approach can exchange data in a way that preserves the semantics. Whilst RDF is an arcane language at the machine level, the things it can describe can be very intuitive when represented visually. In other words, the information modelling approach involves an RDF graph. However, RDF has no inherent semantics or schema constraints. To bring those in, the model uses RDFS, OWL2 DL and SHACL as its main languages. It incorporates ontologies such as Snomed-CT and W3C PROV.
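As a concrete illustration of the triple idea, the sketch below uses the open source Python rdflib library to record a patient linked to a condition record, much like the graph in the Visualisation section; the IRIs and property names are purely illustrative assumptions and are not taken from the Discovery model or from Snomed-CT.

<syntaxhighlight lang="python">
# A minimal sketch of health data as subject/predicate/object triples,
# using illustrative example IRIs rather than any real model identifiers.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/im#")

g = Graph()
g.bind("ex", EX)

# "patient1 is a Patient", "patient1 has an entry which is a condition record",
# "that record is about the concept ChestPain and has an effective date".
g.add((EX.patient1, RDF.type, EX.Patient))
g.add((EX.patient1, EX.hasEntry, EX.conditionRecord1))
g.add((EX.conditionRecord1, RDF.type, EX.ConditionRecord))
g.add((EX.conditionRecord1, EX.concept, EX.ChestPain))
g.add((EX.conditionRecord1, EX.effectiveDate,
       Literal("2022-05-05", datatype=XSD.date)))

# Serialising as Turtle shows the same graph in a human readable form.
print(g.serialize(format="turtle"))
</syntaxhighlight>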
== Concepts versus codes ==
See ''main article [[Term based vs code based concepts]]'' which considers the different philosophies and the relationships (mappings) between them.
 
== Data sets and schemas ==
Having a grammar and a vocabulary represented in RDF and OWL is not enough. To model things for specific purposes it is necessary to describe precise structures. These may be referred to as data sets or schemas, or more commonly, data models.
 
A data set takes rather vague general statements and arranges concepts in a precise manner. This aligns precisely with the semantic web language [https://www.w3.org/TR/shacl/ Shape Constraint Language] (SHACL). SHACL differs from OWL in that OWL constructs assume a partial view of the world, with everything not described being uncertain, whereas SHACL states how something should be.
 
To support both machine readability and standards based interoperability, Discovery adopts the necessary elements of SHACL in addition to RDF/RDFS and OWL.
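To give a flavour of this closed world style of constraint, the sketch below uses the open source pyshacl library to validate that a patient has exactly one date of birth; the shape and the IRIs are illustrative assumptions, not content of the Discovery model.

<syntaxhighlight lang="python">
# A minimal SHACL validation sketch with pyshacl; the shape (illustrative only)
# says that a Patient must have exactly one xsd:date date of birth.
from rdflib import Graph
from pyshacl import validate

shapes_ttl = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/im#> .

ex:PatientShape a sh:NodeShape ;
    sh:targetClass ex:Patient ;
    sh:property [
        sh:path ex:dateOfBirth ;
        sh:datatype xsd:date ;
        sh:minCount 1 ;
        sh:maxCount 1 ;
    ] .
"""

data_ttl = """
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/im#> .

ex:patient1 a ex:Patient ;
    ex:dateOfBirth "1970-01-01"^^xsd:date ,
                   "1971-02-02"^^xsd:date .
"""

shapes = Graph().parse(data=shapes_ttl, format="turtle")
data = Graph().parse(data=data_ttl, format="turtle")

conforms, _, report_text = validate(data, shacl_graph=shapes)
print(conforms)      # False: two dates of birth violate sh:maxCount 1
print(report_text)
</syntaxhighlight>

The same pattern of constraint is what allows a general data model to be narrowed to a particular business need, a point returned to under validation later in this article.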
 
== Interrogating data for information ==
Having established a representation of data using a set of grammars (RDF/RDFS/OWL/SHACL) it is necessary to represent a means by which the data could be interrogated to produce useful information.
 
Once again the semantic web community has established a machine readable common grammar for query known as [https://www.w3.org/TR/sparql11-query/ SPARQL.] SPARQL is designed to ask questions of RDF and is thus an ideal way of representing query logic.
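The sketch below shows the flavour of such a query, run with rdflib over a tiny illustrative graph; the IRIs are hypothetical and not those of the Discovery model.

<syntaxhighlight lang="python">
# A minimal sketch of interrogating an RDF graph with SPARQL,
# using illustrative IRIs rather than any real model identifiers.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/im#")

g = Graph()
g.add((EX.patient1, RDF.type, EX.Patient))
g.add((EX.patient1, EX.hasEntry, EX.entry1))
g.add((EX.entry1, EX.concept, EX.ChestPain))
g.add((EX.patient2, RDF.type, EX.Patient))

# Find every patient with an entry whose concept is chest pain.
query = """
PREFIX ex: <http://example.org/im#>
SELECT ?patient WHERE {
  ?patient a ex:Patient ;
           ex:hasEntry ?entry .
  ?entry ex:concept ex:ChestPain .
}
"""
for row in g.query(query):
    print(row.patient)   # http://example.org/im#patient1
</syntaxhighlight>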
 
== Dialects and alternative languages ==
It is all very well supporting semantic web standards, but the world often adopts alternative approaches.
 
To that end, Discovery modelling tends to support grammars and syntaxes that are in common use.
 
Examples include [https://confluence.ihtsdotools.org/display/DOCECL/Expression+Constraint+Language+-+Specification+and+Guide Expression Constraint Language (ECL)], which is a way of expressing entailment queries over complex class expressions. Another is [https://www.w3.org/TR/sparql11-query/ SPARQL], which can logically model queries that can then be mapped to database specific SQL.
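As an illustration of how an ECL style entailment query relates to SPARQL, the sketch below expresses the standard "descendant or self" constraint as a SPARQL property path over a toy subclass hierarchy in rdflib; the two-concept hierarchy is an illustrative fragment, not a Snomed-CT release.

<syntaxhighlight lang="python">
# A minimal sketch: the ECL constraint  << 73211009 |Diabetes mellitus|
# (descendant or self) expressed as a SPARQL property path over a toy
# rdfs:subClassOf hierarchy (an illustrative fragment only).
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

SCT = Namespace("http://snomed.info/id/")

g = Graph()
g.add((SCT["46635009"], RDFS.subClassOf, SCT["73211009"]))  # type 1 diabetes
g.add((SCT["44054006"], RDFS.subClassOf, SCT["73211009"]))  # type 2 diabetes

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX sct:  <http://snomed.info/id/>
SELECT ?concept WHERE {
  ?concept rdfs:subClassOf* sct:73211009 .
}
"""
for row in g.query(query):
    print(row.concept)   # diabetes mellitus and both subtypes
</syntaxhighlight>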
 
== Information model APIs and languages ==
For an information model to be useable, it has to be accessible in some way. The means of accessing an information model is via a language, i.e. an [[Health Information modelling language - overview|information modelling language]], which is described in a separate article. The language assumes a graph representation of the model and uses RDF concepts as its basis.
[[File:IM logical object model.png|thumb|IM Service architecture]]
 
For an information model to be useful, it has to have at least one [[information model service]], i.e. an operational service that provides access to one or more information models. A service must provide a set of APIs as well as provide instances of the model for implementations to use directly should they wish to.


The diagram on the right shows a tiered architecture for such a service. Information model APIs are described in a separate article.
== Model building blocks and visualisation ==
The model consists of classes, sets and objects that are instances of classes. 
[[File:Ethnicity.jpg|thumb|Ethnicity]]
Objects can act as objects in their own right (e.g. an instance of chest pain) or may also act as classes (e.g. the class of objects that are chest pain). Likewise, sets have members that are objects, and those objects may themselves act as classes or sets. For example, a set for the 2011 ethnicity census will contain a member object of "British", which is also a set with members such as "English", and so on.
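A small sketch of this dual role, using rdflib with illustrative IRIs (the real model uses its own ethnicity value sets and identifiers):

<syntaxhighlight lang="python">
# A minimal sketch, with illustrative IRIs, of one identifier playing two
# roles: a member of a value set and a class with its own narrower members.
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/im#")

g = Graph()
# "British" is a member of the 2011 ethnicity census set...
g.add((EX.EthnicCensus2011, EX.hasMember, EX.British))
# ...and is also treated as a class whose members include "English".
g.add((EX.English, RDFS.subClassOf, EX.British))
# An individual record can then point at the narrower concept.
g.add((EX.person1, EX.ethnicity, EX.English))

print(g.serialize(format="turtle"))
</syntaxhighlight>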


The model itself is stored as an RDF based knowledge graph, which means it is implementable in any mainstream graph database technology. There are no vendor specific extensions to RDF.

In line with the RDF standard, all persistent types, classes, property identifiers and object value identifiers are uniquely named using international resource identifiers (IRIs). In most cases the identifiers are externally provided (e.g. Snomed-CT identifiers), whilst in other cases they have been created for a particular model. Organisations that author elements of the models use their own identifiers.

From a data modelling perspective the arrangements of types may be referred to as archetypes, which are conceptually similar to FHIR profiles. In the semantic web world they would be considered "shapes". There is an unlimited number of these, which frees the model from any particular conventional relational database schema. Inheritance of types is supported, which enables broad classification of types and re-usability.

The parts of the model that model terminology concepts and those that model data use slightly different grammars, in keeping with their different purposes. The information model language describes the differences.

The models can be viewed in their raw technical form (in JSON or Turtle) or via the information model viewer at the online tool [https://im.endeavourhealth.net/#/ Information model directory].

All implementation code, including the evolving service, APIs, language grammars and object models, is also available on GitHub in the following repositories:
* https://github.com/endeavourhealth-discovery/IMAPI
* https://github.com/endeavourhealth-discovery/InformationManager (utilities that use the model and transform between syntaxes)
* https://github.com/endeavourhealth-discovery/IMViewer (a viewer of the information model)
== Information model language ==


''Main article'' [[Health Information modelling language - overview|information modelling language]] describes the language in more detail.

The language assumes the semantic web approach to identifiers and grammar described above. However, the semantic web languages are highly complex, and a set of more pragmatic approaches is taken for the more specialised structures.

In addition to the semantic web languages, other commonly used languages are supported so that the model can be accessed by more people. For example, the Snomed-CT expression constraint language (ECL) is a common way of defining concept sets. ECL is logically equivalent to a closed world query on an open world OWL ontology. The IM language uses the semantics of SPARQL, together with entailment, to model ECL, but ECL can be exported or accepted as input as an alternative.

== Information model purposes and functions ==
The information models have four core functional requirements internal to a model: '''description of the model, validation of model content, population of the model, and query of the model.''' In support of query there is also the need to support '''inference''', which generates new insights that were not necessarily authored.

In addition, the information model must support the same four core functional requirements on the actual health data that is modelled.

Systems that use the models can use any or all of three approaches:
# Direct use of the model data content as a database (or a set of files that can populate a database via a script).
# Use of a set of APIs (both local and remote) designed to provide access to the data within the model, or to trigger outputs of the model for use as in 1).
# Use of the information model technologies themselves via the published open source code.

The main functional purposes of an information model are described further below:

*'''Description of the model.''' There is little point in having a model unless it can be described and understood. Knowing what is in a model is a pre-requisite to using it. For example, there is no point in trying to find out whether a patient record indicates diabetes if the model does not include the ability to record it. In order to understand a model, two techniques are required: diagrammatic representation and human readable text representation. A model must support both.

*'''Data validation''' is essential for consistent business operations. Data models, user input forms, and data set specifications are designed to enable data collections to be validated. Maintaining a standard for data collection is essential. For example, if you have a patient record in front of you, you will likely need to know the patient's approximate age. To work this out, a date of birth must be recorded. Validating that the date of birth can be and has been recorded is important. However, if ''more than one'' date of birth were recorded for the same patient, it would be less valuable. Thus a modelling language must include the ability to '''constrain''' data models to suit particular business needs as part of validation, even when the data model allows more than one.

*'''Population of the model.''' It is impractical to build model content from scratch, and likewise virtually impossible to populate instances with existing data without some manipulation. An information model must contain the ability to model mappings between currently held data and model conformant data.

*'''Enquiry (or query)''' is necessary to generate information from data. There is little point in recording data unless it can be interrogated and the results of the interrogation acted upon. Thus a modelling language must include the ability to query the data as defined or described, including the use of inference rules to find data that was recorded in one context for use in another.

*'''Inference''' is pivotal to decision making. For example, if you are about to prescribe a drug containing methicillin to a patient, and the patient has previously stated that they are allergic to penicillin, it is reasonable to infer that an allergic reaction might ensue if they take the drug, and thus another drug is prescribed. Thus a modelling language must include the ability to infer and classify things so that safe decisions can be made (a minimal sketch follows this list).
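As an illustration of that inference, the sketch below (using the open source Python rdflib library, with hypothetical IRIs and a deliberately simplified substance hierarchy that is not taken from any published ontology) checks whether a prescribed product contains a substance subsumed by something the patient is allergic to:

<syntaxhighlight lang="python">
# A minimal sketch of subsumption based inference; the IRIs and the simplified
# substance hierarchy are hypothetical, not from any published model.
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/im#")

g = Graph()
# Simplified ontology fragment: methicillin modelled as a kind of penicillin.
g.add((EX.Methicillin, RDFS.subClassOf, EX.Penicillin))
# Patient record: an allergy to penicillin and a proposed prescription of a
# product whose ingredient is methicillin.
g.add((EX.patient1, EX.hasAllergyTo, EX.Penicillin))
g.add((EX.prescription1, EX.hasIngredient, EX.Methicillin))

# Is the prescribed ingredient subsumed by any substance the patient is
# allergic to?  rdfs:subClassOf* walks the hierarchy, including self.
ask = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ex:   <http://example.org/im#>
ASK {
  ex:prescription1 ex:hasIngredient ?ingredient .
  ex:patient1 ex:hasAllergyTo ?substance .
  ?ingredient rdfs:subClassOf* ?substance .
}
"""
print(g.query(ask).askAnswer)   # True: an allergic reaction should be considered
</syntaxhighlight>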


<br />
<br />
== Example of model content basic assumptions ==
In constructing a model of health data, it is necessary to have an agreement as to the sort of things that a model will contain and how they will be categorised.
It is fair to say that there will probably never be a universally accepted approach to this problem, but nevertheless any information model needs to at least put a few markers down.
Healthcare modelling approaches such as HL7 and openEHR have each made some basic assumptions as to their respective starting categorisations. They are, however, incompatible, and as a result the transfer of information between systems using the different approaches has proved expensive. The fall-back position has been to continue with whatever model a particular system has, and progress is delayed.
A safe starting point is to consider some categorical terms that are unlikely to be controversial and would be consistent with the open standards in place. For the sake of making a start, the following categorisations are proposed: '''Event, Entry, Provenance, Ontology, Types, State, Query'''.
* Everything that is recorded starts with an '''''event.''''' In this context an event is a machine level event that signals a change of state or a desire to change a state. The event is usually associated with a description of what the event is and some data associated with the event. The data associated with the event normally includes the intention, such as a desire to add/amend or delete data in a record, as well as the data which was recorded as part of the event.
* The net result of an event is the creation/update/deletion of, an '''''Entry''''' in a health record. The term ‘Entry’ is used in its intuitive meaning here. If one were to look at a record it would consist of entries, not events.
* Because an entry is generated from one or more events, an entry has '''''provenance'''''. Provenance enables the audit and validation of an entry, including all events that led to the state of the current entry. A subset of an entry's provenance is the "audit trail", which is pivotal for medico-legal purposes (a minimal sketch of an entry and its provenance follows this list).
* An entry in a record has a number of attributes which describe the entry. For an information model to succeed there must be an agreement as to what these attributes mean. This is achieved by the use of a shared '''''Ontology'''''. An ontology precisely defines the meaning of an attribute, and the type of values that an attribute might have. This means that ANY data can be exchanged as long as an entry uses attributes from the agreed ontology.
* Agreement on the definition of concepts is not enough. Agreement on '''''context''''' is also important. Most would agree that a date of birth is the date a person was born. But what about an entry in a record for diabetes? Does it mean the person has the condition, or does it mean the clinician is considering the condition? Context is also provided by the ontology, but this requires an ontology structure that can preserve context.
* There are a huge number of business processes in healthcare. Each business process is associated with a requirement to exchange data that is relevant to the business. This is partly achieved by assigning '''''types''''' to entries. Types indicate the main purpose of the entry. An agreement as to what the types are, and consequently what the associated attributes of an entry of a type should be, and what the values of those attributes should be, is essential for business.
* It is generally the case that an entry can be considered as either representing an event in time (a different use of the word event) or a persistent state. Technically these categories are conceptual rather than real, but they are important for business level modelling. For example, a date of birth might be considered as a state and therefore might be modelled with a cardinality of 1 against a person, even though a series of historical entries have recorded a date of birth. State can be described by the use of types to indicate state entries, versus event entries to indicate things that happened but do not persist. Many types are both.
* Put together, this equates to an ontology of concepts which are used as types, attributes and values, together with structural definitions of their relationships for context and business purpose. The terms used to describe these things are purely convention; resources, resource profiles, archetypes, templates, value sets and data set definitions are all simply ontological relationships.
* All of this is irrelevant unless entries can be queried. Query itself produces new structures such as the above. Consequently a means of querying records, which are projected as a graph, is needed.
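As a sketch of the event/entry/provenance categorisation above, the following uses the W3C PROV-O vocabulary (which the model incorporates) together with otherwise illustrative IRIs that are not part of any published model:

<syntaxhighlight lang="python">
# A minimal sketch of an entry and its provenance using W3C PROV-O terms;
# the entry, event and property IRIs are illustrative only.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import PROV, RDF, XSD

EX = Namespace("http://example.org/im#")

g = Graph()
# The machine level event that requested the change of state.
g.add((EX.event1, RDF.type, PROV.Activity))
g.add((EX.event1, PROV.endedAtTime,
       Literal("2022-05-05T16:38:00", datatype=XSD.dateTime)))

# The resulting entry in the record, with provenance pointing back at the
# event that generated it (the basis of the audit trail).
g.add((EX.entry1, RDF.type, PROV.Entity))
g.add((EX.entry1, PROV.wasGeneratedBy, EX.event1))
g.add((EX.entry1, EX.concept, EX.ChestPain))

print(g.serialize(format="turtle"))
</syntaxhighlight>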
== Model structure and content ==
Surprisingly, with the use of an agreed ontology and an agreed way of representing it via an open standard language such as the [[Health Information modelling language|information modelling language]], there is no real need to have one model structure.
Content of a model, including the definition of types, is driven entirely by the business it is designed to support. A specialist in immunology is likely to need different content from a general practitioner. However, there needs to be an agreement on what the concepts in use mean, particularly in context. Otherwise data cannot be exchanged.
The information modelling language means that one can have as many information model instances as needed. The language is like any other language, but with some logical constraints. It may be possible to model the novel War and Peace, but to state that "it was the best of times, it was the worst of times" is NOT allowed.
Thus the common information model is in fact no more than a model that models information as used in a common way. The idea that models can somehow be "standardised" is somewhat quaint unless the business itself is standardised. If the business is standardised (i.e. everyone agrees to do the same thing) then a common model is a standard.
Thus in the Endeavour Discovery model the only standardisation is:
# The basic assumptions as to the difference between events, entries, and their provenance
# The selection of the best fit ontologies for particular purposes, as long as those ontologies conform with the information model language constructs, which enable world wide adoption by the systems that already use the language
The content of the models themselves can be accessed through the IM viewer (under development) or by downloading the model and viewing it via a generic RDF graph viewer.
The approach to modelling covers 3 aspects of health record information:
# Models of data stored in health records and their supporting records.
# Ways of retrieving data both from the model itself and the health record data stored, i.e. various forms of query.
# Models of maps between originally entered data and a selected model designed so that one semantically defined query will pick up data entered in a variety of ways.
Ideally a model should be designed both for human visualisation and for computers to use. This is the approach taken to the Discovery information model.
This article describes the metadata model of an information model (and does not include the content of a particular model). The article makes reference to the languages that may be used to access the model, using either interoperability standards or a pragmatic approach; this language is described in the article introducing the [[Health Information modelling language - overview|health modelling language]].
The information model component types can be illustrated as follows:
[[File:Modelling Components.png|center|thumb|800x800px|Main information model component types]]
