Discovery health information model

From Discovery Data Service

This article describes the approach taken to producing information models, including what they are, what their purpose is, and what the technical components of the models are.

The article does not include the content of any particular model.

What is the health information model (IM)?

The IM is a representation of the meaning and structure of data held in the electronic records of the health and social care sector, together with libraries of queries, extracts, and mappings.

It is a computable abstract logical model, not a physical structure or schema. "Computable" means that operational software operates directly from the model artefacts, as opposed to using the model for illustration purposes. As a logical model it describes data that may be physically held in any of a variety of data stores, including relational or graph databases. Because the model is independent of the physical schemas, the model itself has to be interoperable and free of proprietary lock-in.

The IM is a broad model that integrates a set of different approaches to modelling using a common ontology. The components of the model are:

  1. A concept ontology, which is a vocabulary and definitions of the concepts used in healthcare, or more simply put, a vocabulary of health. The ontology is made up of the world's leading ontology, Snomed-CT, with a London extension and various code-based taxonomies (e.g. ICD-10, Read, supplier codes and local codes).
  2. A data model, which is a set of classes and properties, using the vocabulary, that represent the data and relationships as published by live systems. Note that this data model is NOT a standard model but a collated set of entities and relationships, bound to the concepts based on real data, that are mapped to a common model.
  3. A library of business-specific concept and value sets, which are expression constraints on the ontology for the purpose of query.
  4. A catalogue of reference data such as geographical areas, organisations and people derived and updated from public resources.
  5. A library of queries for querying and extracting instance data from reference data or health records.
  6. A set of maps creating mappings between published concepts and the core ontology as well as structural mappings between submitted data and the data model.
  7. A super-language including the main semantic web vocabularies (RDF, RDFS, OWL2, SHACL) as well as a set of Discovery vocabularies designed for health data modelling.
  8. A query model, which is a high-level model of processes and queries held in the query library and directly mapped to mainstream query languages such as SPARQL and SQL.
  9. An open source set of utilities that can be used to browse, search, or maintain the model.
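A minimal sketch of the sixth component, a concept map from published codes to the core ontology, might look as follows in Python. The code schemes, codes, and concept identifiers below are invented for illustration and are not the real IM content.

```python
# Sketch of a concept map: a published (scheme, code) pair is mapped to a
# core ontology concept IRI. All entries are illustrative only.
from typing import Optional

CONCEPT_MAP = {
    ("READ2", "G30.."): "http://snomed.info/sct#22298006",    # hypothetical MI code
    ("LOCAL", "BP-SYS"): "http://snomed.info/sct#271649006",  # hypothetical local code
}

def to_core(scheme: str, code: str) -> Optional[str]:
    """Return the core ontology concept for a published code, if mapped."""
    return CONCEPT_MAP.get((scheme, code))

print(to_core("READ2", "G30.."))
```

A real map would carry richer metadata (match type, provenance, context), but the essential operation is this lookup from a published code, in context, to a core concept.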

The remainder of this article considers how models and ontologies can be constructed using this approach.

What is different?

The main difference between the Discovery IM and other approaches is the harmonisation of the terms used in the conventional 'terminology' domain and the terms used in the conventional 'data model' domain. Both are considered part of a single ontology, with one combined language, albeit with different grammars for the different parts of the model.

For example, an encounter record entry may be defined as a record of an "interaction between a patient (or someone on behalf of the patient) and a health professional or health provider". The encounter entry is bound to the concept of an encounter, which is itself semantically defined. In other words, the data model of an encounter entry links to the type of encounter it is a record of.

The two disciplines (description logic and data model schema constraints) are different, but they are clearly related. The binding between a data model and the range of values that may be applied to a property of an entry creates an interdependency, ensuring that the data model and the values are synchronised.
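
This binding can be sketched as follows: a data model property carries a value set drawn from the ontology, and a value is valid only if it is a member of that set. The entity, property, and concept identifiers are hypothetical.

```python
# Sketch: a property in the data model bound to a value set from the
# ontology, so the data model and its permitted values stay synchronised.
# Concept identifiers below are illustrative, not real IM content.

ENCOUNTER_TYPE_VALUE_SET = {
    "sct:185349003",  # e.g. an encounter-for-check-up concept
    "sct:308335008",  # e.g. a patient-encounter-procedure concept
}

DATA_MODEL = {
    "Encounter": {
        "encounterType": ENCOUNTER_TYPE_VALUE_SET,  # property bound to value set
    }
}

def valid_value(entity: str, prop: str, value: str) -> bool:
    """Check a value against the value set bound to an entity's property."""
    return value in DATA_MODEL.get(entity, {}).get(prop, set())

print(valid_value("Encounter", "encounterType", "sct:185349003"))  # True
```

In the real model the value set would be an expression constraint evaluated against the ontology rather than an enumerated set, but the interdependency is the same.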

The data model does not use the idea of "tables". Tables, in the relational database sense of the word, may be used to implement the model. There is an unlimited number of data model entity types, each varying according to its properties, and arranged in a class hierarchy. If records are implemented in a graph database there would be a 1:1 relationship between a data model shape and a type, but if implemented in a relational database the number of tables could vary from one to any number, depending on performance and maintenance factors.

Benefits of harmonisation accrue in user interfaces. For example, if a user elects to search for a systolic blood pressure, the application can use the information model to discover that an entry for a systolic blood pressure will have a date and probably a numeric value.
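
The systolic blood pressure example can be sketched as a model lookup: the application asks the model what properties an entry of a given type carries, and shapes its search form accordingly. The shape and property names below are assumptions for illustration.

```python
# Sketch: an application consults the model to learn what an entry for a
# given concept will contain. The shape definition is illustrative.

MODEL_SHAPES = {
    "systolic blood pressure": {
        "effectiveDate": "date",     # an entry will have a date
        "value": "numeric",          # and probably a numeric value
    }
}

def expected_properties(concept: str) -> dict:
    """Return the properties (and datatypes) an entry of this type carries."""
    return MODEL_SHAPES.get(concept, {})

print(expected_properties("systolic blood pressure"))
```

A search screen could then offer a date range picker and a numeric comparison without those widgets being hard-coded per concept.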

Conceptualisation

Types of data as a graph

The data stored in a health record can be conceptualised as a set of relationships between one thing and many others.

Some people call this a graph. Others call these objects, properties and values. From a grammatical perspective they are subjects, predicates and objects.
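
Such a graph can be sketched with plain tuples, one per subject/predicate/object triple. The identifiers below are arbitrary examples in the spirit of the figure the article describes.

```python
# Sketch: health record data as subject/predicate/object triples.
# Identifiers are arbitrary examples, not real IM content.

triples = [
    ("patient:1", "rdf:type", "im:Patient"),
    ("patient:1", "im:linkedTo", "person:1"),
    ("record:1", "rdf:type", "im:ConditionRecord"),
    ("record:1", "im:subjectOf", "patient:1"),
]

# Walk the graph: everything asserted about patient:1
about_patient = [(p, o) for s, p, o in triples if s == "patient:1"]
print(about_patient)
```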

The example on the right is entirely arbitrary but illustrates a problem. What does "condition record" mean, or indeed what is a "condition"? Why is a patient linked to a person and what does "linked to" mean?

The answer is that the "terms" used in a model should be derived from a vocabulary whose terms have meaning and are formally defined. Some terms have meaning in whatever context they are used, whereas others have different meanings in different contexts. In defining terms, it is necessary to define them precisely enough for a computer to interpret the meaning safely, i.e. the context of an idea is part of the idea itself.

The most difficult challenge is to agree the definition and meaning of the concepts in the context they are used. The agreement as to a particular model is less important. A definition defines a concept in relation to other concepts. Within a domain of interest such as healthcare, all concepts are indirectly related in some way to all other concepts in that domain.

W3C semantic web standards have evolved to enable machine-readable definitions. The ubiquitous JSON format enables these to be used by modern software applications, whereas the W3C Turtle language provides a slightly more human-readable version.


Information model language

The main article, information modelling language, describes the language in more detail.

The semantic web approach is adopted. In this approach, data can be described using a plain language grammar consisting of a subject, a predicate, and an object: a triple, with an additional context referred to as a graph or RDF data set. The theory is that all health data can be described in this way (with predicates being extended to include functions).
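
A triple of this kind might be written in Turtle, the human-readable serialization the article mentions. The prefixes and identifiers below are illustrative, not the real IM content:

```turtle
@prefix im:  <http://example.org/im#> .
@prefix sct: <http://snomed.info/sct#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# subject            predicate         object
im:encounter1        im:concept        sct:308335008 ;
                     im:effectiveDate  "2023-04-01"^^xsd:date .
```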

The modelling language also models process, e.g. queries and the steps in a set of queries. These are held as JSON-serialisable objects and are translatable to standard query languages.
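
The idea of a JSON-serialisable query object translated to a standard query language can be sketched as follows. The object shape, table, and column names are invented for illustration and do not reflect the actual query model grammar.

```python
# Sketch: a query held as a JSON-serialisable object, translated to SQL.
# Object shape and table/column names are hypothetical.
import json

query = {
    "select": ["id", "effectiveDate"],
    "from": "observation",
    "where": {"property": "concept", "is": "sct:271649006"},
}

def to_sql(q: dict) -> str:
    """Render a minimal query object as a SQL string."""
    cols = ", ".join(q["select"])
    w = q["where"]
    return f"SELECT {cols} FROM {q['from']} WHERE {w['property']} = '{w['is']}'"

# Round-trip through JSON to show the object is serialisable
print(to_sql(json.loads(json.dumps(query))))
```

A SPARQL translation would follow the same pattern, emitting triple patterns instead of a WHERE clause.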

The consequence of this approach is that W3C web standards such as the Resource Description Framework (RDF) can be used. This sees the world as a set of triples (subject/predicate/object), with some things named and some things anonymous. Systems that adopt this approach can exchange data in a way that preserves the semantics. Whilst RDF is an arcane language at a machine level, the things it can describe can be very intuitive when represented visually. In other words, the information modelling approach involves an RDF graph.

To populate the data models and ontologies, the semantic web languages of RDF, RDFS, OWL2 DL and SHACL are used as the main languages, with SPARQL as the "target" run-time query language.

In addition, mappings to other commonly used languages are in place to enable the model to be used. For example, the Snomed-CT expression constraint language (ECL) is a common way of defining concept sets. ECL is logically equivalent to a closed world query on an open world OWL ontology. The IM language uses SPARQL together with entailment to model ECL, but ECL can be exported or used as input as an alternative.
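
As an illustration of that correspondence, the ECL constraint `<< 22298006 |Myocardial infarction|` (the concept and its descendants) maps roughly onto a SPARQL pattern that walks the subclass hierarchy. The prefixes and the use of `rdfs:subClassOf` are assumptions about how the hierarchy is represented:

```sparql
PREFIX sct:  <http://snomed.info/sct#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?concept WHERE {
  # zero-or-more subclass steps = the concept and all its descendants
  ?concept rdfs:subClassOf* sct:22298006 .
}
```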

Mapping from published data

The main article, mapping concepts and structures, describes the current approach to mapping codes or text, in context, from publisher systems.

Information manager

The main article, information model services, describes these in more detail. For an information model to be usable, it has to be accessible in some way, either via user interfaces or via APIs. Thus the information model comes with a set of open source modules making up an application, the "Information manager", which is a web-based application designed to show the model.

For a web application or set of APIs to be useful there has to be at least one service. There is a free to use information model service, i.e. an operational service that provides access to one or more information models.

The service provides a set of APIs as well as providing instances of the model for implementations to use directly should they wish to.

All implementation code, including the evolving service, APIs, language grammars and object models, is also available on GitHub in the following repositories:

https://github.com/endeavourhealth-discovery/IMAPI

A viewer of the information model and an early version of the manager is at:

https://github.com/endeavourhealth-discovery/IMViewer