Health Information modelling language - overview

From Endeavour Knowledge Base
Revision as of 06:09, 1 September 2022 by DavidStables (talk | contribs)

Purpose and scope of the language

This article describes the languages used for creating, querying and maintaining the information model, as well as the means by which health record queries can be defined in a system-independent manner.

As the information model is an RDF graph, the modelling language uses the mainstream semantic web languages. In addition, there is a pragmatic JSON-LD based domain specific language for query definition, which maps to a plain language description and to a constrained specialisation of SPARQL, with support for GraphQL-style queries.

The language includes description logic, shape constraints, expression constraints, and a pragmatic approach to modelling query of real data.

Details of the W3C standard languages that make up the grammar are described below; the language grammar and syntax are specified in the article Language grammar and syntax.

It should be noted that these are modelling languages, not the physical data schema or the actual query language; those are defined in languages commensurate with the target technology (e.g. SQL).

The main purpose of a modelling language is to exchange data and information about information models in a way that machines can understand. It is not expected that clinicians would use the languages directly. The use of standard languages ensures that the models are interoperable across many domains including non health care domains.

The languages cover the following areas:

  1. An ontology, which is a vocabulary and set of definitions of the concepts used in healthcare, or more simply put, a vocabulary of health. The ontology is made up of the world's leading clinical ontology, Snomed-CT, with a London extension, supplemented with additional concepts for data modelling.
  2. A data model, which is a set of classes and properties, using the vocabulary, that represent the data and relationships published by live systems to a data service that uses these models. Note that this data model is NOT a standard model but a collated set of entities and relationships, bound to concepts and based on real data, that are mapped to a single model.
  3. A library of business-specific concept and value sets, which are expression constraints on the ontology for the purpose of query.
  4. A catalogue of reference data such as geographical areas, organisations and people derived and updated from public resources.
  5. A library of queries for querying and extracting instance data from reference data or health records.
  6. A set of mappings between published concepts and the core ontology, as well as structural mappings between submitted data and the data model.

Contributory languages

Health data can be conceptualised as a graph, and thus the model is a graph model.

When exchanging models using the language grammar, both JSON-LD and Turtle are supported, as well as more specialised syntaxes such as OWL functional syntax or the expression constraint language.

The modelling language is an amalgam of the following languages:

  • RDF. An information model can be modelled as a graph, i.e. a set of nodes and edges (nodes and relationships, nodes and properties). Likewise, health data can be modelled as a graph conforming to the information model graph. RDF forms the statements describing the data. RDF in itself holds no semantics whatsoever, i.e. it is not practical to infer, validate or query based purely on an RDF structure. To use RDF it is necessary to provide semantic definitions for certain predicates and adopt certain conventions; once those definitions are in place, the predicates themselves can be used to semantically define many other things. RDF can be represented using either Turtle syntax or JSON-LD.
  • RDFS. This is the first of the semantic languages. It is used for some of the ontology axioms, such as subclasses, domains and ranges, as well as the standard annotation properties such as 'label'.
  • OWL2 DL. This is supported in the authoring phase but is simplified within the model. It brings more sophisticated description logic, such as equivalent classes and existential quantifications, and is used in the ontology for defining things when an open world assumption is required. OWL has contributed to the design of the IM languages, but it is removed in the run-time models, with class expressions replaced by RDFS subclasses and role groups.
  • SHACL. The language for the data models of types. Used for everything that defines the shape of data, i.e. logical entities and attributes. Although SHACL is designed for validation of RDF, because it describes what things 'should be' it can also be used as a data modelling language.
  • SPARQL. Used as the logical means of querying model-conformant data (not to be confused with the actual query language used, which may be SQL). It is the query language for the IM and is mapped from IM Query; health queries would generally use SQL.
  • OpenSearch / Elastic. Used for complex free-text query for finding concepts, using the AWS OpenSearch DSL (a derivative of Lucene query). Note that simple free-text Lucene indexing is supported by the IM database engines and is used in combined graph/text query.
  • IM Query Language. Used as a bridge between plain language and the mainstream query languages such as SQL or SPARQL, encapsulating the more complex underlying functions of those languages.
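A minimal Turtle fragment illustrates how these layers combine: RDF triples carrying RDFS annotations and a subclass axiom, with prefixes standing in for full IRIs. The im: namespace and concept names here are illustrative stand-ins, not the IM's actual identifiers.

```turtle
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix im:   <http://example.org/im#> .   # illustrative namespace

# An RDF statement set: a class with RDFS annotations and a subclass axiom
im:ChestPain
    rdf:type        rdfs:Class ;
    rdfs:label      "Chest pain" ;
    rdfs:comment    "Pain located in the thoracic region" ;
    rdfs:subClassOf im:PainOfTruncalStructure .
```

The same statements could equally be exchanged as JSON-LD; the semantics come from the RDFS predicates, not from the syntax.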

Example: OWL2 expression vs RDFS with role groups

Consider a definition of chest pain:

Chest pain
 is Equivalent to -> pain of truncal structure
                    and
                    has site -> Thoracic structure.

#When stored
Chest pain
   is sub class of -> pain of truncal structure
   role group ->
          role ->
            has site -> Thoracic structure.
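Expressed in Turtle, the stored (post-inference) form might look like the following sketch. The im: prefix and the property names hasSite and roleGroup are illustrative stand-ins for the IM's actual identifiers.

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix im:   <http://example.org/im#> .   # illustrative namespace

# The OWL equivalence is replaced by a plain subclass axiom
# plus a role group carrying the existential restriction
im:ChestPain
    rdfs:subClassOf im:PainOfTruncalStructure ;
    im:roleGroup [
        im:hasSite im:ThoracicStructure
    ] .
```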

Grammars and syntaxes

Foundation syntaxes - RDF, TURTLE and JSON-LD

The Discovery language has its own grammars, built on the foundations of the W3C RDF grammars:

  • A terse abbreviated language, TURTLE
  • SPARQL for query
  • JSON-LD representation, which can be used by systems that prefer JSON, wish to use standard approaches, and are able to resolve identifiers via the JSON-LD context structure.
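The same statement can be carried in either foundation syntax. The following sketch uses an illustrative namespace; the Turtle and JSON-LD forms are equivalent RDF.

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix im:   <http://example.org/im#> .   # illustrative namespace

im:Patient a rdfs:Class ;
    rdfs:label "Patient" .
```

```json
{
  "@context": {
    "im": "http://example.org/im#",
    "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
    "label": "rdfs:label"
  },
  "@id": "im:Patient",
  "@type": "rdfs:Class",
  "label": "Patient"
}
```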

Identifiers, aliasing prefixes and context

Concepts are identified and referenced by the use of Internationalized Resource Identifiers (IRIs).

Identifiers are universal and presented in one of the following forms:

  1. Full IRI (Internationalized Resource Identifier), the fully resolved identifier, enclosed in <>
  2. Abbreviated IRI, a prefix followed by a ":" followed by the local name, which is resolved to a full IRI
  3. Aliases. The core language tokens (which are themselves concepts) have aliases for ease of use. For example, rdfs:subClassOf is aliased to subClassOf.

There is of course nothing to stop applications using their own aliases, and when JSON-LD is used, @context enables those aliases to be resolved.

Data is considered to be linked across the world, which means that IRIs are the main identifiers. However, IRIs can be unwieldy to use, and some languages, such as GraphQL, do not use them. Furthermore, when used in JSON (the main exchange syntax via APIs) they can cause significant bloat. Also, identifiers such as codes or terms have often been created for local use in single systems and, in isolation, are ambiguous.

To create linked data from local identifiers or vocabulary, the concept of context is applied. The main forms of context in use are:

  1. PREFIX declaration for IRIs, which enable the use of abbreviated IRIs. This approach is used in OWL, RDF turtle, SHACL and Discovery itself.
  2. VOCABULARY CONTEXT declaration for both IRIs and other tokens. This approach is used in JSON-LD, which converts local JSON properties and objects into linked data identifiers via the @context keyword. This enables applications that know their context to use simple identifiers such as aliases.
  3. MAPPING CONTEXT definitions for system-level vocabularies. These provide sufficient context to uniquely identify a local code or term by including details such as the healthcare provider, the system and the table within the system; in essence, a specialised class with the various property values making up the context.
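As a sketch of the first two forms of context, the following JSON-LD document uses @context both to declare a prefix (the equivalent of a Turtle @prefix or SPARQL PREFIX declaration) and to alias a core token, so the body of the document can use short local names. The im: namespace is illustrative.

```json
{
  "@context": {
    "im": "http://example.org/im#",
    "subClassOf": {
      "@id": "http://www.w3.org/2000/01/rdf-schema#subClassOf",
      "@type": "@id"
    }
  },
  "@id": "im:ChestPain",
  "subClassOf": "im:PainOfTruncalStructure"
}
```

An application that knows this context can work with the plain token "subClassOf" while still producing unambiguous linked data.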

OWL2 and RDFS

For the purposes of authoring and reasoning, the semantic ontology axiom and class expression vocabulary uses the tokens and structure of the OWL2 profile OWL EL, which is itself a sublanguage of OWL2.

In addition to the open world assumption of OWL, the RDFS constructs of domain and range (from OWL DL) are used, but in a closed world manner, as RDFS.

Within an information model instance itself, the data relationships are held in their post-inferred closed form, i.e. inferred properties and relationships are explicitly stated, using a normalisation process to eliminate duplications from supertypes. In other words, whereas an ontology may be authored using the open world assumption, classifications and inheritance are resolved prior to population of the live IM. This follows the same approach as Snomed-CT, whereby the inferred relationships containing the inherited properties and the "isa" relationship are included explicitly.

In the live IM, OWL axioms are replaced with the RDFS standard terms and simplified. For example, OWL existential quantifications are mapped to "role groups", in line with Snomed-CT.

Use of Annotation properties

Annotation properties provide information beyond that needed for reasoning. They play no part in ontological reasoning, but without them the information model would be impossible for most people to understand.

Typical annotation properties are names and descriptions.

OWL construct | Usage example | IM live conversion
Class | An entity that is a class concept, e.g. a Snomed-CT concept or a general concept | rdfs:Class
ObjectProperty | 'hasSubject' (an observation has a subject that is a patient) | rdf:Property
DataProperty | 'dateOfBirth' (a patient record has a date of birth attribute) | owl:DatatypeProperty
AnnotationProperty | 'description' (a concept has a description) |
SubClassOf | Patient is a subclass of Person | rdfs:subClassOf
EquivalentTo | Adverse reaction to Atenolol is equivalent to an adverse reaction to a drug AND has causative agent of Atenolol (substance) | rdfs:subClassOf
SubPropertyOf | 'has responsible practitioner' is a subproperty of 'has responsible agent' | rdfs:subPropertyOf
Property chain | 'is sibling of' / 'is parent of' / 'has parent' is a sub property chain of 'is first cousin of' | owl:propertyChainAxiom
Existential quantification (ObjectSomeValuesFrom) | Chest pain and finding site of {some} thoracic structure | im:roleGroup
ObjectIntersectionOf | Chest pain is equivalent to pain of truncal structure AND finding in region of thorax AND finding site of thoracic structure | rdfs:subClassOf + role groups
DataType definition | Date time is a restriction on a string with a regex that allows approximate dates |
Property domain | A property domain of 'has causative agent' is allergic reaction | rdfs:domain
Property range | A property range of 'has causative agent' is a substance | rdfs:range

Annotation | Meaning
rdfs:label | The name or term for an entity
rdfs:comment | The description of an entity


SHACL shapes - data model

As with the semantic ontology, the data model language borrows its constructs from the W3C Shapes Constraint Language (SHACL), which can also be represented in any of the RDF syntaxes such as Turtle or JSON-LD.

The SHACL constructs sh:property, sh:class, sh:node and sh:datatype are the mainstay, as described in the language grammar and syntax article.
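A sketch of a node shape using these constructs follows. The shape, class and property identifiers in the im: namespace are illustrative, not the IM's actual data model; only the sh: and xsd: terms are standard.

```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix im:  <http://example.org/im#> .   # illustrative namespace

im:PatientShape
    a sh:NodeShape ;
    sh:targetClass im:Patient ;
    sh:property [
        sh:path     im:dateOfBirth ;     # datatype-valued attribute
        sh:datatype xsd:date ;
        sh:maxCount 1
    ] ;
    sh:property [
        sh:path  im:hasResponsiblePractitioner ;   # object-valued attribute
        sh:class im:Practitioner
    ] .
```

Read as a data model rather than a validator, the shape says what a Patient entity 'should be': at most one date of birth, and a responsible practitioner that is itself a Practitioner.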

Query language

As the IM itself is held as RDF quads (triples plus graph), the IM can be queried using SPARQL for graph query and Lucene query for text query. The IM manager also supports a full Elastic (AWS OpenSearch) index for advanced text queries.
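For example, a SPARQL query over the IM graph to list the direct subclasses of a concept might look like the following sketch (the im: namespace and concept name are illustrative):

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX im:   <http://example.org/im#>   # illustrative namespace

# Find the direct subclasses of chest pain, with their names
SELECT ?concept ?label
WHERE {
    ?concept rdfs:subClassOf im:ChestPain ;
             rdfs:label ?label .
}
```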

However, as the IM is designed to support query of actual health records (usually in relational format), the IM also has to enable SQL query.

Both SPARQL and SQL are complex, specialised languages; to program in them the user must not only be a technical expert but must also have intimate knowledge of the RDF schema and/or the specific target health record schema.

Ordinary people express query concepts in plain language and most queries can be expressed using logical statements from plain language.

The IM employs a JSON-based domain specific language (DSL) that operates as an intermediary between plain language logical query statements and the underlying query languages such as SQL or SPARQL.

Even a DSL is highly technical, so the IM also provides a user-friendly application that enables a lay person to construct highly complex health queries without needing to understand the query languages or the technical storage formats of the IM.

The approach to the design of IM Query is to take the various logical plain language patterns, map them directly to a DSL query format, and provide direct maps between the JSON query DSL objects and the relevant SPARQL or SQL.
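To give a flavour of the idea, the JSON below is a purely illustrative sketch of what such a DSL statement could look like; it is NOT the actual IM Query grammar, and every key name in it is hypothetical. The point is only that a plain language statement ("patients with chest pain or a subtype of it") becomes a declarative JSON structure.

```json
{
  "query": {
    "name": "Patients with chest pain",
    "match": {
      "type": "Patient",
      "property": "hasObservation",
      "is": {
        "concept": "im:ChestPain",
        "includeSubtypes": true
      }
    }
  }
}
```

A generator would translate a structure of this kind into the equivalent SPARQL graph pattern or SQL joins, so the query author never works in either language directly.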

IM Query is specified more fully in the article on information model query.

Form generator

In order to maintain and edit the content of the information model, there is a need to build forms through which items can be edited. Examples of things to edit are concepts, value sets, concept sets, queries (of the IM), data models and maps.

The information model language uses an extension to SHACL shapes to enable form generation. Put another way, SHACL shapes define the structure and content of data, whereas the form generator provides instructions as to how a particular shape could be hand authored.

The language does not dictate the style or technology used in forms, only the things which a form based application would need when generating components on the screen.

The form generator language vocabulary, and how it may be used, is documented in the article on the information model form generator language.

Language Grammar documentation

The model language grammar and vocabulary are fully specified in the article Language grammar and syntax.