Saturday, July 31, 2021

The CDISC CORE project and open, machine-executable rules

This spring, CDISC announced a new initiative: the CORE project. In collaboration with Microsoft, CDISC will develop open source software and machine-executable conformance rules for validating implementations of a number of its foundational standards.
CORE stands for "CDISC Open Rules Engine". A very good webinar was given last week, explaining the principles and phases of the project. You can find the recording here.

In order to explain the "why" of the project, we need a bit of history ...

Until about 5 years ago, CDISC did not publish conformance rules for its most-used standards: SDTM, SEND, ADaM, ODM and Define-XML. As far as I could find out, the first CDISC conformance rules were published in 2017, for SDTMIG 3.2. The probable reason is that, before that time, CDISC had a "leave it to the vendors to implement" policy.

Several private initiatives and companies filled the gap, including our own company: already 15 years ago, we developed software to validate ODM files against the standard, the ODM Checker. The software has been updated regularly and is still available at no cost to CDISC members. It has also been extended for Define-XML. These checkers were also integrated into commercial products such as our popular SDTM-ETL mapping software and Define-XML Designer. We did, however, never publish the rules as "open source", as we did not want to pretend that our interpretation of the ODM/Define-XML standard is the ultimate correct one.

For SDTM, and later also for SEND and ADaM, other companies developed software for conformance checking, some at first as open source, later more and more as closed source. The conformance rules, reflecting these companies' own interpretations of the Implementation Guides, were at best published as Excel files containing only a (sometimes vague) description of each rule, or even just the error message that is generated. Some such rules are even completely wrong, such as "Original Units (--ORRESU) should not be NULL, when Result or Finding in Original Units (--ORRES) is provided". In no case were the rules published in machine-executable form.
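To illustrate why that rule is wrong, here is a small sketch (my own illustration, not any vendor's actual code) of the check as literally stated, applied to a qualitative result that legitimately has no unit:

```python
# Hypothetical illustration of the flawed --ORRESU rule: flag every record
# that has a result (LBORRES) but no original units (LBORRESU).
def naive_orresu_check(record):
    """Return True (a violation) when LBORRES is populated but LBORRESU is empty."""
    return bool(record.get("LBORRES")) and not record.get("LBORRESU")

# A qualitative result legitimately has no unit, yet the naive rule flags it:
qualitative = {"LBTESTCD": "HCG", "LBORRES": "POSITIVE", "LBORRESU": ""}
quantitative = {"LBTESTCD": "GLUC", "LBORRES": "5.2", "LBORRESU": "mmol/L"}

print(naive_orresu_check(qualitative))   # True -> a false positive
print(naive_orresu_check(quantitative))  # False -> correctly passes
```

A correct rule would have to exempt qualitative results, which is exactly the kind of precision that a vague text description does not enforce.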
One company even managed to sell its solution to the FDA, and later also to the PMDA and NMPA, essentially developing the rules on behalf of these regulatory authorities. Although that software is known for its many false positives, it is used by almost every pharma company. Another vendor found its place in the market by specializing in SEND, later extending to SDTM.

About 5 years ago, CDISC started publishing its own conformance rules, first for the SDTMIG, and later also for the SENDIG and for ADaM. Here too, the conformance rules were published in the form of Excel files, with the rules themselves coming as text descriptions, i.e. without machine-executable code. Recently, CDISC also published conformance rules for Define-XML v2.1; again, no machine-executable code was published.

For both Define-XML 2.0 and 2.1, such machine-executable rules do exist in the form of a Schematron, a technology used by several vendors (including ourselves), but these machine-executable rules are proprietary and are embedded in various free or commercial software products.

This leads to a discussion about technology for conformance rules (if you don't like technology discussions, you can skip this part). CDISC ODM and Define-XML are based on XML, a worldwide open standard from the World Wide Web Consortium (W3C), which, together with ISO, also developed technologies and standards for checking the conformance of XML files against a given standard: XML Schema, Schematron, and XQuery. The ODM and Define-XML teams have always published their specifications together with an XML Schema, meaning that 60-80% of the rules are already exactly described in a language that is both human-readable (well, for IT people at least) and machine-executable. The remaining 20-40% can then be implemented using Schematron and/or XQuery, the latter especially when information is spread over separate files. This too has been done by us and by other vendors.
The problem with this approach, however, is that these technologies are limited to XML. Although there was a belief in the past that data exchange would become an "XML-only world", this has not come true.

On the one hand, the format for submissions to regulatory authorities is still SAS Transport 5, a completely outdated "punch card" format from the IBM mainframe era, mandated by FDA, PMDA and NMPA, even though CDISC has a much better format (which provides a perfect match with Define-XML), known as Dataset-XML. This modern format even makes it possible to develop "smart review tools", such as the open source "Smart Submission Dataset Viewer".

On the other hand, other formats have become very popular, especially JSON and, for linked data using RDF, the "Turtle" and JSON-LD formats. One could think that the latter two are not suitable for CDISC's submission standards (SDTM, SEND, ADaM), as these represent tabular data. One should however not forget that the data in these "tables" are essentially linked data as well, but with the relations implicit, described only in the Implementation Guides. And because these relations are implicit, checking them requires ... conformance rules. As for JSON, its usage has overtaken XML's for APIs and RESTful web services.
The still-mandated use of SAS Transport 5, a format even discouraged by the US Library of Congress, adds yet another format to the list used in clinical research.
Given this variety of exchange formats, one may wonder whether a single expression language for developing conformance rules is possible, one that is both human-readable (i.e. really "open") and machine-executable, and that can be used with all these modern formats as well as with the outdated SAS Transport 5 format.
I have been looking for such an expression language for many years, but failed miserably to find one. This will surely be one of the main challenges of the CORE project.

The idea of publishing completely open conformance rules for the CDISC submission standards that are both human-readable and machine-executable is not new - it has even been realized by the "OpenRules for CDISC Standards" initiative.


The rules have been developed using the XQuery language, a W3C standard, and some of them even use the CDISC Library API. So, essentially, they could serve as a candidate for a reference implementation, were it not that XQuery only works for XML (and thus only for CDISC's Dataset-XML) and is useless for SAS Transport 5, as well as for modern JSON, which is the first choice for use with APIs.
"Open Rules for CDISC Standards" was developed many years ago in the expectation that FDA would soon replace SAS Transport 5 with CDISC's own Dataset-XML, but this has not happened. With JSON on the rise, it has become clear that relying solely on XQuery for describing human-readable rules that are also machine-executable is no longer an option.
One of the great starting principles of the "Open Rules for CDISC Standards" was that the rules themselves (in XQuery) are completely separated from any execution engine: implementers can choose between many different computer languages (Java, C#, Python, ...) that read the rules and then execute them. Separation between rules and execution engine will very probably also be one of the major design principles of the CDISC CORE project. 
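As a sketch of what such a separation could look like (my own illustration, not the OpenRules or CORE design), the rules can be expressed as plain metadata, and any engine, in any language, can interpret them:

```python
# Rules as pure data, completely separate from the engine that executes them.
# The rule IDs, checks, and messages below are invented for illustration.
RULES = [
    {"id": "R1", "variable": "LBTESTCD", "check": "maxlength", "value": 8,
     "message": "LBTESTCD must be at most 8 characters"},
    {"id": "R2", "variable": "LBSEQ", "check": "required",
     "message": "LBSEQ must be populated"},
]

def run_rules(record, rules):
    """A trivial engine: walk the rule metadata and report violations."""
    findings = []
    for rule in rules:
        val = record.get(rule["variable"], "")
        if rule["check"] == "maxlength" and len(str(val)) > rule["value"]:
            findings.append((rule["id"], rule["message"]))
        elif rule["check"] == "required" and not val:
            findings.append((rule["id"], rule["message"]))
    return findings

bad = {"LBTESTCD": "LP14992-9"}  # 9 characters, and LBSEQ is missing
print(run_rules(bad, RULES))     # both R1 and R2 fire
```

The same rule metadata could equally be interpreted by an engine written in Java or C#, which is exactly the point of the separation.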

For the development of machine-executable conformance rules in CORE, CDISC will start with SDTMIG 3.4 and SDTM 2.0 in the first phase, which will be an "MVP" phase. No, MVP doesn't mean "most valuable player" here; it means "Minimum Viable Product". In later phases, machine-executable conformance rules for the other standards will follow, in the long term maybe even including Therapeutic Area User Guides (TAUGs).

The execution engine will be developed by the CORE team, which is a collaboration between Microsoft, CDISC, and industry, and will run in the cloud using Azure. CDISC members will be able to obtain an evaluation Azure account, which then essentially acts as a private cloud for a CORE implementation. Implementers, such as pharma companies, CROs and service providers, can later choose to use the open source code and spin up instances in other cloud environments. They will also be able to add their own (e.g. company-specific) rules to their execution engine (whether Microsoft's or another) and/or to develop their own implementation, either open source or closed source. So, one can indeed think of CORE as providing a "reference implementation" that anyone can use, extend, or just treat as "the reference": the outcome of any conformance checking must be identical to that of the reference implementation, even when completely different technology is used. The CDISC Library will be the single source of truth for both the CDISC standards and the CDISC rules, since the rules will be made available in the Library. The CORE Engine will retrieve the rules and standards from the Library using an API.
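How such retrieval could look is sketched below. The CDISC Library base URL is real, but the `/conformance-rules/...` path and the payload shape are purely my assumptions for illustration, as the CORE API has not been published yet:

```python
# Hedged sketch of an engine pulling rules from the CDISC Library via an API.
# The endpoint path and JSON shape are hypothetical.
import json
from urllib.request import Request, urlopen

LIBRARY_BASE = "https://library.cdisc.org/api"  # real base; path below is invented

def fetch_rules(standard, version, api_key, fetch=None):
    """Retrieve conformance rules for a standard/version (hypothetical endpoint)."""
    url = f"{LIBRARY_BASE}/conformance-rules/{standard}/{version}"
    if fetch is None:  # default: a real HTTP call with the Library's api-key header
        def fetch(u):
            req = Request(u, headers={"api-key": api_key,
                                      "Accept": "application/json"})
            with urlopen(req) as resp:
                return resp.read().decode()
    return json.loads(fetch(url))["rules"]

# Offline demonstration with a canned payload instead of a live call:
canned = json.dumps({"rules": [{"id": "CORE-001", "standard": "sdtmig"}]})
rules = fetch_rules("sdtmig", "3-4", "dummy-key", fetch=lambda u: canned)
print(rules[0]["id"])  # CORE-001
```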

This means that in the MVP phase, involvement of vendors will be relatively low, but as of phase 1, a lot of vendor involvement is expected, either working directly together with CDISC and Microsoft (as we envisage doing), or going their own way, using the open source code. As everything will be open source, vendors can also choose between offering products that use a cloud execution engine, or creating solutions that run in local production environments (e.g. desktop applications).

This is a big project. There will be quite a lot of CDISC teams involved, for example a QA team, a number of conformance rules development teams (Conformance rules for SDTMIG 3.4 / SDTM 2.0 are expected to be published in Autumn 2021), the CDISC Library team, and a software engineering team, which I presume will consist of a mix of CDISC and Microsoft people. And of course, the overall architecture must be developed and project management must be taken care of by a CORE Leadership team. More details can be found on the slides of the webinar recording.

Executable rules will be metadata-driven (of course!), but it has not been decided yet what programming language will be used to make them machine-executable. Personally, I consider this a critical part of the project: the people developing the rules (standards specialists, e.g. SDTMIG specialists, mostly volunteers) usually are not programmers (no, I do not expect that the rules will be developed in SAS; that would be a major design error, as it would not be vendor-neutral), and the programmers (with good knowledge of Java, C#, Python, ...) usually do not have a good knowledge of the SDTM standard and its Implementation Guides. So a lot of communication (and documentation of that communication) will be necessary between these two groups: the last thing we want is CORE-based software producing false-positive warnings and errors ...

Another critical part will surely be QA and testing: what rule is applicable when, and are all the possible scenarios covered? Testing will require a huge amount of test data, covering every possible use of the standard.
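A sketch of how such QA could work (rule and test records invented for illustration): each rule ships with positive and negative test cases, and a harness verifies the rule fires exactly where expected:

```python
# A hypothetical rule: LBSEQ must be populated. Returns True on violation.
def rule_lbseq_required(record):
    return not record.get("LBSEQ")

# Each test case pairs a record with the expected outcome (fire / don't fire).
TEST_CASES = [
    ({"LBSEQ": 1},  False),   # must NOT fire
    ({},            True),    # must fire: variable absent
    ({"LBSEQ": ""}, True),    # must fire: empty string counts as missing
]

def run_qa(rule, cases):
    """Return the indices of test cases where the rule misbehaves."""
    return [i for i, (rec, expected) in enumerate(cases)
            if rule(rec) != expected]

print(run_qa(rule_lbseq_required, TEST_CASES))  # [] -> all scenarios pass
```

Scaling this up to every rule and every scenario of the SDTMIG is precisely the "huge amount of test data" the project will need.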

All this work cannot be done by CDISC and Microsoft people alone. Therefore, CDISC is seeking volunteers from the CDISC community for the different teams involved in the project; in the MVP phase probably mostly SDTM specialists for the development of very precise rules: only when a rule is very precisely defined can it be made machine-executable (see the FDA rule on --ORRESU ...). So the webinar also contained a "call for participation". CDISC would like volunteers from the community who can spend at least 20% of their time over the next 9 months, and ideally beyond that period. A kick-off meeting is already planned for September 9th, so there is not much time to lose.
The call for participation can be found on the CORE website under "participate" and contains a list of what teams and roles there are. This will be your starting point if you want to participate.

A very interesting part of every CDISC webinar is the Q&A. In this case, not every question that came in could be answered (though some bundling took place). CDISC however promised that all questions will be answered and the answers posted on the CDISC website. A few highlights:
- the reporting format will be anything the user wants. This is possible as there will be a rich set of API methods available. An Excel report interface will surely be provided.
- it is not decided yet what programming language will be used. Will it be a 3rd- or 4th-generation language (Java, C#, Python, ...) or a meta-language that can be interpreted and translated into source code in one of these languages? As my own attempts to find such a language that is independent of the data format have failed, I am of course very curious ...
Important however is that the rules implementation will be metadata driven, and that it needs to be relatively easy for members of the CDISC community (sponsors, CROs, ...) to develop their own specific rules as machine-executable rules and implement them in exactly the same way as the CDISC rules themselves.
- it is intended to also include rules developed by regulatory authorities (FDA, PMDA, NMPA, ...), so the scope is not limited to CDISC rules alone.
- as the CORE software will be enriched by a large number of API methods, it will be easy to integrate CORE into third-party applications. CDISC will carefully listen to the vendor community to find out which API methods are necessary.
- although there are of course a number of deliverables and timelines, the conformance rules will never be complete as long as new standards are being developed. When new standards and versions are published, it is envisaged that they immediately come with their own set of machine-executable CORE rules, which can then be implemented immediately.
- upon the question "will the regulatory agencies adopt the CORE rules?", the answer essentially was "we don't know, but we strongly hope so". One should not forget that in some cases these agencies' rules deviate considerably from the CDISC rules, and that currently some of these agency rules are just wrong (e.g. the FDA rule "Original Units (--ORRESU) should not be NULL, when Result or Finding in Original Units (--ORRES) is provided"), ambiguous, or even incorrectly implemented in software. However, rules from the regulatory agencies are surely within the scope of the project.
- another question was about how these conformance rules differ from already existing applications and vendor tools. Peter van Reusel answered it very exactly: "not much". The difference is, however, that for the first time the implementation will not only be fully transparent, but also independent of the execution engine. Also, the rules themselves will be maintained by the CDISC community, and with each new version of a standard, the corresponding conformance rules, in an open, machine-readable form, will be immediately available. Venkata Maguluri (Pfizer), on the webinar's panel, added that these rules will no longer allow any "wiggle room" in their interpretation. Personally, I consider this "wiggle room" as one of the major problems of the way the rules are currently described and published (as "text" in Excel format), especially as the rules published by the regulatory authorities have severe quality problems.
- the period between publication of a new standard version and adoption by regulatory authorities will be used to ensure that the conformance rules also work for the agencies. Also, CDISC will provide the agencies with technical support regarding the implementation of the execution engines. 

Last but not least, what does this all mean for our "Open Rules for CDISC Standards" project?
We are very enthusiastic about the CORE project at CDISC. Although much larger in scope and volume, its basic principles overlap strongly with those of our own "Open Rules for CDISC Standards": in both cases, the rules are fully transparent, human-readable as well as machine-executable, and separate from the execution engine. Both use, or will use, the CDISC Library as "the single source of truth". The major thinking error we made when starting "Open Rules for CDISC Standards" many years ago was expecting that it would soon be an "all-XML world" for data exchange, and especially that FDA (followed by PMDA/NMPA) would soon move away from SAS Transport 5 and start requesting the modern (CDISC's own) Dataset-XML format for electronic submissions. Neither has come true: FDA still requires the outdated SAS Transport 5, and instead of an "all-XML world", JSON has become a very important player, especially in combination with APIs and RESTful web services (which I consider the future, also for submissions). Also, RDF has become important as a methodology that makes the implicit SDTM/SEND/ADaM relations explicit instead of leaving them completely hidden in PDF or HTML files.
However, I hope "Open Rules for CDISC Standards" becomes a source of inspiration, and maybe even that a good number of the rules implemented in XQuery can be translated directly into machine-executable CORE rules.

Not everything is clear yet. For example, the expression (machine) language for defining the rules, one that works for any modern data format as well as for the outdated SAS Transport 5, has not been decided yet. This is where I failed miserably myself. Of course, Microsoft has far more resources and a lot of brilliant people to figure out what that expression language should be.

Also, I do have some serious concerns that in the MVP phase, for SDTM, the implementation will be limited to SAS Transport 5, and that implementations for other formats will only follow at a later stage. That would send the wrong signal to the FDA not to look for an alternative, modern format, and provide them with the excuse that they cannot move to a modern format because CORE only supports SAS Transport 5, although that is essentially not true. In my opinion, interfaces and APIs can ensure that a wide range of import formats is supported, and that should already go into the requirements, even in the MVP phase. Maybe I can provide some input or technical support for that part of the project: we should not forget that developing APIs and implementing them as RESTful web services is relatively easy for JSON and XML, but that no RESTful web service has ever been developed that supports SAS Transport 5. I expect that to be more difficult than for JSON and XML.

Another concern, regarding possible (Microsoft) vendor lock-in, was already addressed during the web conference: it will be possible to use any other cloud provider or to develop local (e.g. desktop) applications.

CORE is a very ambitious project. It is even considerably larger than the CDISC Library project. The huge success of the latter however makes me confident that also CORE will be a huge success.

And please, do not forget to watch the recording of the webinar if you did not attend the webinar already: you can find it here.

Monday, March 1, 2021

LOINC-SDTM mapping for Drug and Toxicology Lab Test

This week I started working on a mapping between LOINC codes for Drug and Toxicology lab tests (LOINC class "DRUG/TOX") and the CDISC SDTM LB domain and controlled terminology (CT) for it.
This work is not only important for sponsors and CROs who obtain lab results accompanied by LOINC codes (which should be routine nowadays) and need to generate SDTM datasets, but also for being able to use "Real World Data" (RWD), e.g. from Electronic Health Records (EHRs). It is also of utmost importance for being able to (semi-)automatically generate CDISC Biomedical Concepts (BCs) from LOINC panel codes (groups of LOINC codes for tests that logically belong together), a topic on which I will speak (and perform a demo) at the European CDISC Interchange 2021 in April.

The task is, however, at first sight enormous: this class contains 8314 LOINC codes (LOINC v2.69) with 2605 distinct values for the analyte (the LOINC "Component"). The published CDISC LOINC-LB mapping only contains mappings for 852 DRUG/TOX LOINC codes, so there are still about 1800 "to go". Some of the work can be automated, but a lot of manual work remains ...

I first retrieved all the DRUG/TOX LOINC codes with their attributes from my local install of the LOINC database and generated 2 worksheets (yes, I sometimes do use Excel). The first contains all the codes that have more than one target CDISC specimen type (LBSPEC), such as LOINC System = "Ser/Plas" ("Serum or Plasma"), as these require more than one mapping row in the final database. E.g. for "Ser/Plas", this leads to 3 rows: one with LBSPEC="SERUM" (NCI code C13325), one with LBSPEC="PLASMA" (NCI code C13356), and one with LBSPEC="SERUM OR PLASMA" (NCI code C105706). The second worksheet contains all the DRUG/TOX LOINC codes for which a 1:1 mapping between the LOINC "System" and LBSPEC is expected.
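The expansion logic for such multi-target "System" values can be sketched as follows (illustrative code, not my actual worksheet script; the NCI codes are those given above):

```python
# LOINC "System" values that map to several CDISC LBSPEC terms, each with
# its NCI code (as stated in the text above for "Ser/Plas").
MULTI_SPEC = {
    "Ser/Plas": [("SERUM", "C13325"),
                 ("PLASMA", "C13356"),
                 ("SERUM OR PLASMA", "C105706")],
}

def expand_rows(loinc_code, system):
    """Emit one mapping row per candidate LBSPEC for a given LOINC code."""
    targets = MULTI_SPEC.get(system, [(system.upper(), None)])
    return [{"LOINC": loinc_code, "LBSPEC": spec, "NCI": nci}
            for spec, nci in targets]

# Hypothetical example code for illustration:
rows = expand_rows("12345-6", "Ser/Plas")
print(len(rows))  # 3 rows, one per candidate specimen type
```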

Some of the work can be automated. For most of the LOINC "System" values, a mapping to LBSPEC already exists and can easily be reused. Some additional work may have to be done for the mapping between the LOINC "Method" and LBMETHOD. Attention also has to be paid to fasting statuses, "challenges", and "post-dose" entries (if any). But most of the manual work is in mapping the analyte (the LOINC "Component") to LBTESTCD/LBTEST, as this is essentially the meaning of the LBTESTCD/LBTEST pair: it represents the analyte, i.e. the compound that is measured.
What is represented by the --TESTCD/--TEST pair in SDTM differs between domains. For example, in Vital Signs (VS), VSTESTCD/VSTEST represents the property that is measured (e.g. a blood pressure). In LB, the property that is measured is not directly represented by a variable; for example, if a concentration is measured, this can only be seen from the actual values and units. In LOINC, however, the "Property" is an essential part of the concept (one of the 5/6 "dimensions" of LOINC). In the LOINC-LB mapping published by CDISC, this has been solved by adding some "Non-Standard Variables" (NSVs), which then go into the SUPPLB dataset.

Then I started the huge work ...

For generating the mapping between the LOINC "Component" (i.e. the analyte) and LBTESTCD and LBTEST, I used the CDISC Library Browser which was of great help because it also displays "similar" ways of writing a term as well as synonyms. It also allows me to immediately add the CDISC-NCI code of LBTESTCD/LBTEST to the mapping, which is of utmost importance for connecting to other coding systems used in healthcare (like SNOMED-CT), e.g. using the Unified Medical Language System UMLS and its API and RESTful web services.

Here is a picture of a few rows of the mapping:


As I soon found out, the coverage of test codes for drug and toxicology lab testing in the CDISC-CT for LBTESTCD/LBTEST is very poor. After one day of mapping work, I estimated the coverage at between 5 and 10%. This also means that for every 100 drug/toxicology lab tests, we would need to submit 90-95 "new term requests" to CDISC for an LBTESTCD/LBTEST. Considering the 1800 codes not yet covered by the original LOINC-LB mapping, this would mean something like 1600 to 1700 "new term requests". I guess the CDISC-CT team will "not be amused" ...

This urged me to rethink the problem.

Mapping is "bad" - personally I think it should be the last resort if nothing else works. 1:1 mapping can still be acceptable (but requires a large amount of work), but we are in deep trouble when such a 1:1 mapping is not possible.

Each unique LOINC "component" (i.e. the analyte) has a code itself: the "LOINC Part Code" (LP-codes). For example, the LP code for "Albumin" is LP6118-6. The LP code for Glucose is LP14635-4. The LP code for Doxycycline (one of the many not covered by CDISC-CT) is LP14992-9. This brought me to the idea "Why not use the 'LOINC Part Code' for LBTESTCD?".

Similarly, one could then use the "LOINC Part Name" for LBTEST. 

There are a few major objections against this, some of them having to do with the FDA-mandated use of the outdated SAS Transport 5 format for submissions.
The first is that LBTESTCD may not be longer than 8 characters; "LP14992-9" has 9. Also, the "LOINC Part Name" sometimes has more than 40 characters. Even if we drop the "LP" from the code, we still have a problem. For "LP14992-9", this would reduce the code to "14992-9", but the SDTM rules (for the sake of SAS Transport 5) state that "Values of --TESTCD must be limited to eight characters and cannot start with a number, nor may they contain characters other than letters, numbers, or underscores". So even the dash "-" is not allowed ... Dropping the dash and the check digit is in my opinion not a good idea, as the check digit is an important measure against typing errors. Note that the rules for --TESTCD/--TEST are based on making "transposal" possible in XPT datasets.
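That quoted constraint is easy to express as a check, which immediately shows that neither the full LOINC Part Code nor its trimmed variants qualify (a small sketch; the regex is my own rendering of the quoted rule):

```python
# The SDTM --TESTCD constraint quoted above: at most 8 characters, must not
# start with a digit, and may contain only letters, digits, and underscores.
import re

TESTCD_PATTERN = re.compile(r"^[A-Za-z_][A-Za-z0-9_]{0,7}$")

def is_valid_testcd(value):
    return bool(TESTCD_PATTERN.match(value))

print(is_valid_testcd("GLUC"))       # True: a classic CDISC test code
print(is_valid_testcd("LP14992-9"))  # False: 9 characters and contains a dash
print(is_valid_testcd("14992_9"))    # False: starts with a digit
```

So even after dropping "LP" and replacing the dash, the code still starts with a digit and fails the rule.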

So, what we see once again, is that the SAS Transport 5 format is a "show stopper" for any "out of the box thinking".

The second thing I found out is that, with extremely few exceptions, every one of the LOINC "Component" values, i.e. the analytes, has a SNOMED-CT code. For example, the SNOMED-CT code for Doxycycline is 372478003.

So, why not use the SNOMED-CT code of the analyte for LBTESTCD, with the SNOMED-CT name for LBTEST?

OK. Same problem: SNOMED-CT codes are often longer than 8 characters and start with a number, so they cannot be used for LBTESTCD due to this (stupid?) SDTM rule that exists only to satisfy the outdated SAS Transport 5 format. Using "LOINC Parts" or SNOMED-CT for test codes would also have the advantage of providing links to other codes and terms. After all, both are hierarchical, "network" coding systems, whereas CDISC-CT just consists of ... lists.
For example, medicinal products containing Doxycycline are characterized by the SNOMED-CT code 10504007. And a "parent" code of it is "Substance with antimalarial mechanism of action" with SNOMED-CT code 373287002.

Here is a nice diagram taken from the "SNOMED-CT browser":

Can one do something similar with CDISC-CT? No way ...

So, why isn't CDISC using SNOMED-CT at all (except in the SDTM Trial Summary (TS) domain)?

An explanation is found on the CDISC website in the "knowledge base":

The first argument (the SNOMED license) is not entirely correct: it should say "most governments". Even in Europe, where we are far behind the US in using SNOMED-CT, there is hardly a country left that does not have a country license. Moreover, the "knowledge base" applies double standards: MedDRA is not free for anyone; one needs a (rather expensive) license. Arguing that some (a minority) would have to pay to use SNOMED-CT, while at the same time noting that MedDRA, for which everyone has to pay, is mandated by the regulatory agencies, is in my opinion not correct, to say the least.

Also the second argument, that SNOMED-CT does not have "definitions", is entirely incorrect: every SNOMED-CT term does have a definition.
Furthermore, the "network" properties of SNOMED-CT are not mentioned at all. They should.

Please note that I am not pleading for replacing all CDISC-CT with SNOMED-CT; there are many cases where that would not make sense. What we should do, however, is start discussing the use of LOINC codes, of LOINC parts for tests and possibly for post-coordination of test parts (where SNOMED-CT also does a better job), and of LOINC answers for standardized results, as well as a better use of SNOMED-CT within CDISC, especially within the submission standards, and stop trying to keep LOINC and SNOMED-CT "out of the door". Using these terminologies is also in the interest of pharma sponsors, and I strongly think that especially sponsors who want to start using "real world data" should push CDISC harder to embrace LOINC and SNOMED-CT, providing webinars, trainings, implementation guides, etc.

CDISC is a founding member of the "Joint Initiative Council for Global Health Informatics Standardization" (JIC), together with LOINC and SNOMED International, but this seems to be reflected only marginally in our work. That is really a pity.

And we should not forget: clinical research is less than 5% of healthcare, and the other 95% uses SNOMED-CT and LOINC all the way ...

Reactions are as always very welcome!
And if you also feel that CDISC should take LOINC, UCUM, and SNOMED-CT more seriously, don't tell me, tell CDISC (e.g. the CSO).