Sunday, March 16, 2025

New PMDA rules for CDISC standards published March 2025

PMDA (the Japanese regulatory authority) has recently published new rules for the use of CDISC standards.

The Excel file with the rules can be downloaded from: https://www.pmda.go.jp/files/000274354.zip

Unfortunately, the rules are published only as an Excel file, so not in a "vendor-neutral" format, and they are barely usable, if at all, for execution in software: the file is "text only" and contains no "machine-readable" instructions, not even in a "meta-language". For example, I am missing "precondition" statements (except for "domain") clearly stating, ideally in machine-readable form, when a rule is applicable and when it is not. Preconditions act like filters and are key in describing rules that are to be applied in software.
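To illustrate what "preconditions act like filters" means in practice, here is a minimal, hypothetical sketch (the rule id, structure and example data are my own inventions, not PMDA's): the precondition decides whether a record is subject to the check at all, before the check itself runs.

```python
# Hypothetical sketch of a machine-readable rule with an explicit
# precondition (names and structure invented for illustration).

def applies(rule, record):
    """Evaluate the rule's precondition; if absent, the rule always applies."""
    pre = rule.get("precondition")
    return pre is None or pre(record)

def run_rule(rule, records):
    """Return the records the rule applies to and that violate its check."""
    return [r for r in records if applies(rule, r) and not rule["check"](r)]

# Invented example rule: "AESER must be 'Y' or 'N'", but only for AE records.
rule = {
    "id": "EX0001",  # invented id
    "precondition": lambda r: r.get("DOMAIN") == "AE",
    "check": lambda r: r.get("AESER") in ("Y", "N"),
}

records = [
    {"DOMAIN": "AE", "AESER": "Y"},
    {"DOMAIN": "AE", "AESER": "MAYBE"},  # applies, violates the check
    {"DOMAIN": "LB"},                    # filtered out by the precondition
]
violations = run_rule(rule, records)
```

Without the precondition, the LB record would wrongly be flagged as well; with it, the rule is only evaluated where it makes sense.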

With the exception of additional rules, e.g. for SDTMIG-3.4, and some clarifications, there is not much new under the (Japanese) sun; even the format of the Excel worksheets hasn't changed since 2019. One may see this as a good sign of consistency, but I have another opinion.

Even more problematic is that many of the rules remain open to interpretation, meaning that different implementers (such as software vendors) may produce different implementations, leading to different results. That is of course unacceptable. CDISC CORE, as a "reference implementation", does a much better job here.

Furthermore, many of the rules are described too vaguely (as was already the case in the past) and do not contain the information necessary for any software company to implement them. So I ask myself whether these rules were really developed by the PMDA, or by an external party that has an interest in keeping the rule descriptions vague.

Let's have a look at a few examples:

What does this rule mean? It does not even mention the define.xml, which is the place where value-level conditions are defined and where one can find the associated codelists for value-level variables. And what is meant by "extensible codelist"? Is the CDISC codelist meant? Or the "extended" (not "extensible") CDISC codelist in the define.xml?
So, enormously open for different interpretations …

Another one that requires more and better explanation is rule SD0003:

 

It says "must conform the ISO 8601 international standard". It does not say which part of that standard … If one takes the rule literally, it allows e.g. P25D (25 days) as a value for e.g. LBDTC. This is probably not what is meant; what is probably meant is that the value must comply with the ISO 8601 notation for dates, datetimes, partial dates and so on, e.g. "2025-03-16T09:22:33". But the rule doesn't say that …
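The ambiguity is easy to demonstrate: "P25D" is perfectly valid ISO 8601 (a duration of 25 days), but it is not a date or datetime. An implementer has to decide which ISO 8601 productions to accept. A sketch, using a simplified regular expression of my own (not an official grammar) that accepts only dates, partial dates and datetimes:

```python
# Simplified check for ISO 8601 dates/partial dates/datetimes (own
# simplification, not an official ISO 8601 grammar).
import re

ISO_DTC = re.compile(r"^\d{4}(-\d{2}(-\d{2}(T\d{2}:\d{2}(:\d{2})?)?)?)?$")

def valid_dtc(value):
    """Accept e.g. '2025', '2025-03', '2025-03-16', '2025-03-16T09:22:33'."""
    return bool(ISO_DTC.match(value))
```

Under this reading, "2025-03-16T09:22:33" passes but "P25D" fails; under a literal reading of the rule ("conforms to ISO 8601"), both would pass. That is exactly the kind of decision the rule text leaves to each implementer.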

Another questionable rule is SD0052:


Does this also apply to "unscheduled" visits? When I have two visits which both have VISIT = "Unscheduled Visit", must they have the same value for VISITNUM? I doubt it, as unscheduled visits typically get values like "3.1", "3.2", "4.1". Or should one take care that in such a case the value for VISIT is e.g. "Unscheduled Visit 3.1"? The rule does not say anything about this …
The SDTMIG-3.4 in section 4.4.5 states:

If one follows this, both the cases "left null" and "generic value" would violate PMDA-SD0052, at least when taking the rule literally.

Another one, but which is related to the use of SAS Transport 5 ("XPT" format) is rule SD1212:


This is problematic, as the (still mandated) SAS-XPT format stores numbers in the "IBM mainframe notation", which is not compatible with modern computers (which use IEEE), and --STRESN is numeric while --STRESC is character. So, what is the meaning of "equal"? Is it a violation when e.g. LBSTRESC is ".04" and the visual presentation (e.g. using the SAS Universal Viewer) of the numeric value is "0.04"? I have seen lots of what are, in my opinion, "false positives" in the implementation of one vendor.
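The ".04" versus "0.04" scenario can be sketched in a few lines: the numeric value and the character value represent the same number, but a naive string comparison flags a violation where a value comparison does not. (The variable values here are invented for illustration.)

```python
# Same number, different textual renderings: a string comparison and a
# value comparison of --STRESN vs --STRESC give opposite answers.

lbstresn = 0.04    # numeric value as stored in --STRESN
lbstresc = ".04"   # character value as stored in --STRESC

string_equal = (str(lbstresn) == lbstresc)   # compares "0.04" with ".04"
value_equal = (float(lbstresc) == lbstresn)  # compares 0.04 with 0.04
```

A rule that does not state which of these two comparisons is meant will inevitably produce diverging implementations, and thus "false positives" in some of them.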
It is time that PMDA also moves to the modern CDISC Dataset-JSON format for submissions.

What I also found interesting is that in the Excel, "Message" comes before "Description", as if the rule is already pre-attributed to be implemented by a single vendor. It is also questionable whether a vendor-neutral organization like the PMDA should impose on vendors what the message in a software implementation is. If "Message" were replaced by "Example message" and came after the "Rule description" in the Excel, I would already feel better.

Let's now have a look at the Define-XML rules, an area where there have been a lot of inaccuracies in the rules in the past.

Just take the first ones from the Excel: 

 

 

The first rule, DD0001, already astonishes me. The rule description "There is an XML schema validation issue …" is surely not a rule, it is an observation. The rule should read something like: "Any define.xml file must validate without errors against the corresponding XML schema". Also, the sentence "it is most likely an issue with element declarations" does not belong in a rule description.

Rule DD0008 is essentially unnecessary, as when the element is not in the correct position, the define.xml will not validate without errors against the corresponding XML schema. Again, this is an observation, not a rule.
Rule DD0002 is also unnecessary, as when one of the namespaces is missing, the define.xml will not validate against the schema. What is in the "description" is essentially no more than a recommendation, or a "requirements lookup".
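The redundancy of a "missing namespace" rule is easy to show even without a full XSD validator: an XML parser already rejects a document that uses an undeclared namespace prefix, so a define.xml missing the "def" namespace declaration fails long before any dedicated rule could fire. A stdlib sketch (full schema validation would need an external library such as lxml; element placement is simplified here and not a complete define.xml):

```python
# An undeclared namespace prefix makes the document unparseable, so a
# separate "namespace is missing" rule adds nothing beyond schema (or
# even well-formedness) checking.
import xml.etree.ElementTree as ET

good = """<ODM xmlns="http://www.cdisc.org/ns/odm/v1.3"
               xmlns:def="http://www.cdisc.org/ns/def/v2.1">
  <def:leaf ID="LF.1"/>
</ODM>"""

bad = """<ODM xmlns="http://www.cdisc.org/ns/odm/v1.3">
  <def:leaf ID="LF.1"/>
</ODM>"""

def parses(xml_text):
    """True when the document parses, i.e. all used prefixes are declared."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:  # e.g. "unbound prefix" for the missing namespace
        return False
```

The second document fails at parse time, which is exactly why "validate against the schema" already covers rules DD0002 and DD0008.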

And what to think about the wording in rule DD0010 "… should generally not …". What does "generally" mean here? Does it mean that there are exceptions? If so, what are they?

What surprised me (well, not really) is the use of the wording "should". I found it 26 times in the Excel file. Essentially, the word "should" should never appear in a rule. "Should" is an expectation, and expectations do not belong in rule formulations.

I always compare it to saying to your teenage daughter or son: "you should be back home by midnight". Do you think he or she will then really be back home by midnight on a Saturday night? If you say "you must be back home by midnight", that already sounds stricter, doesn't it?

I did not check every rule in the Excel in detail. That would be too frustrating …

Unfortunately, such rule definitions open the door to different interpretations, leading to different results from different implementations, some of which will surely be "false positives" or conflict with each other. This is something we observed in the past with the SDTM rules of one vendor: when you did something in way "A", you got an error or warning "X", and when you did it in way "B", warning/error X disappeared, but you got error/warning Y. So, whatever you did, you always got an error or warning. Pretty frustrating, isn't it?

Does this publication show improvement when compared with the older versions? I would say "very little".
The way the rules are published (as non-machine-executable Excel) and the way they have been written do not help: some rules seem to have been copy-pasted from leaflets in the "complaint box" at the entrance of the PMDA restaurant.

As CDISC-CORE will soon also start working on the PMDA rules, I presume CDISC will be in contact with PMDA to discuss each rule, check it for necessity, reformulate it precisely in cooperation with PMDA coworkers, and then publish the rules again, together with the open-source implementation (which currently is in YAML/JSON code that is both human-readable and machine-readable). The CDISC open implementation can then serve as the official "reference implementation".

Saturday, November 25, 2023

The Need for Speed: Why CDISC Dataset-JSON is so important.

The CDISC community has been suffering for 20 years or more from the obligation of the FDA (and other regulatory authorities following the FDA) to submit datasets using the SAS Transport 5 (XPT) format.
The disadvantages and limitations of XPT are well known: limits of 8, 40 and 200 characters, US-ASCII encoding only, etc. But there is much more. Essentially, the use of XPT has been a roadblock for innovation at the regulatory authorities all these years.
Therefore, the CDISC Data Exchange Standards Team has developed a modern exchange format, Dataset-JSON, which, as the name states, is based on JSON, currently the world's most used exchange format, especially for use with APIs (Application Programming Interfaces) and RESTful web services.
The new exchange format is currently being piloted by the FDA, in cooperation with PHUSE and CDISC.

Unlike XPT, Dataset-JSON is really vendor-neutral and much, much easier to implement in software. This has also resulted in a large number of applications being developed and showcased during the COSA Dataset-JSON Hackathon. There are, however, many opportunities created by the new format that are not yet well recognized by the regulatory authorities.
XPT is limited to the storage of "tables" in "files", i.e. it is two-dimensional. JSON, however, allows representing data (and metadata) in many more dimensions and with more depth. This means that, even when Dataset-JSON will at first still be used to exchange "tables", these can be enhanced and extended to also carry audit trails (much wanted by the FDA), source data (e.g. from EHRs, lab transfers) and any type of additional information, at the level of the dataset, the record, and the individual data point.
Furthermore, Dataset-JSON will allow embedding images (e.g. X-rays, EMRs) and digital data like ECGs into the submission data.
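The "table plus extensions" idea can be sketched in a few lines. Note that the field names below are illustrative only, loosely modeled on the idea of Dataset-JSON; the authoritative structure is defined by the CDISC Dataset-JSON specification itself.

```python
# A table (columns + rows) and its metadata in one JSON structure, with
# an extra key showing the extension point; readers that only understand
# the tabular part can simply ignore additional keys. Field names are
# illustrative, not the official Dataset-JSON schema.
import json

dataset = {
    "name": "VS",
    "label": "Vital Signs",
    "columns": [
        {"name": "USUBJID", "label": "Unique Subject Identifier", "dataType": "string"},
        {"name": "VSORRES", "label": "Result or Finding in Original Units", "dataType": "string"},
    ],
    "rows": [
        ["CES001-001", "120"],
        ["CES001-002", "135"],
    ],
    # Extension point, e.g. an audit trail attached to the dataset:
    "auditTrail": [{"action": "created", "when": "2023-11-25T10:00:00"}],
}

text = json.dumps(dataset)       # serialize for exchange
roundtrip = json.loads(text)     # a receiver parses it back losslessly
```

Nothing comparable is possible with XPT, where the file format is the table and there is no place to attach anything else.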

The major advantage of using this modern format is however on another level.

Traditionally, submissions to regulatory authorities are only done after database closure, mapping the data to SDTM, SEND and/or ADaM, etc. This essentially means a period of often several months after the clinical study has been finalized, and years after the clinical study was started. In the meantime, many patients may have died or been seriously harmed, as the treatment they need is not yet available. This is what we call "the need for speed".

Dataset-JSON can be a game changer here.

Essentially, partial submission datasets can be generated as soon as the first clinical data are received from the sites. The regulatory authorities are, however, not used to starting their review as soon as the first clinical data are available, among other reasons due to their technical infrastructure.
JSON is used worldwide especially with APIs and RESTful web services, meaning that even submission data can be exchanged in real time, as soon as they are created. Although JSON can of course be used with and for "files", its real strength is in its use for "services". All other industries have moved from files to SOA, "Service-Oriented Architecture".

What does this mean for regulatory submissions? 

Imagine a "regulatory neutral zone" (one can discuss what "neutral" means) between sponsor and regulatory agency, where the sponsor can submit submission records (not necessarily as "files") as soon as they are created, using an API and e.g. using RESTful Web Services. Using the same API, records can also be updated (or deleted) when necessary, using audit trails. On the other side, reviewers can query the study information from the repository, using the API, not necessarily by downloading "files" (although that remains possible), but by getting answers on questions or requests like "give me all subjects and records of subjects with a systolic blood pressure of more than 130 mmHg that have a BMI higher than 28".
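The reviewer query described above ("give me all subjects with a systolic blood pressure of more than 130 mmHg and a BMI higher than 28") can be sketched as a server-side filter that a RESTful endpoint would expose as query parameters. The records and field names here are invented for illustration:

```python
# Invented submission records; in the "regulatory neutral zone" these
# would live in an API-governed repository, not in downloaded files.
records = [
    {"USUBJID": "S-001", "SYSBP": 142, "BMI": 31.2},
    {"USUBJID": "S-002", "SYSBP": 118, "BMI": 29.0},
    {"USUBJID": "S-003", "SYSBP": 135, "BMI": 27.1},
    {"USUBJID": "S-004", "SYSBP": 151, "BMI": 28.4},
]

def query(records, min_sysbp, min_bmi):
    """The filter a GET endpoint could expose as ?sysbp_gt=…&bmi_gt=…"""
    return [r for r in records if r["SYSBP"] > min_sysbp and r["BMI"] > min_bmi]

hits = query(records, 130, 28)
```

The point is that the reviewer receives an answer to a question, not a file to download and process locally.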
This "regulatory neutral zone" is surely different from the current "Electronic Submissions Gateway" (which is completely file-based), but is more akin to the API-governed repositories used in many other (also regulated) industries such as aviation, finance, etc.

Essentially, when all this is in place, regulatory submission could start as soon as the first data points become available, and be finalized much sooner (even months or years sooner) than is currently the case. This can then save the lives of thousands of patients.


 

Monday, January 9, 2023

CDISC SDTM codetables, Define-XML ValueLists and Biomedical Concepts

Yesterday, I started an attempt to implement the "CDISC CodeTables" in software to allow even more automation when doing SDTM mapping using our well-known SDTM-ETL software.
As the name says, CDISC has published these as tables, and so far only as Excel worksheets. Unfortunately, this information is not in the CDISC-Library yet; otherwise it would have cost me only a relatively simple script to access the CDISC-Library API and a few hours to get all the information implemented as Define-XML "ValueLists".

Essentially, I do not really understand (others will probably say "he does not want to understand") why these codetables were not published as Define-XML ValueLists right from the start. Is it that the authors have limited or no Define-XML knowledge (there are CDISC trainings for that ...)? Or is it still the thinking that define.xml is something one produces after the SDTM datasets have been produced (often using some "black box" software of a specific vendor), rather than using Define-XML upfront (pre-SDTM-generation) as a "specification" for the SDTM datasets to be produced, which is the better practice? Or is it just the attitude of using Excel for everything: "if all you have is Excel, everything is a table".
Now, I do not have anything against tables. I have been teaching relational databases at the university for many years, and these are indeed based on ... tables. The difference, however, is that in a relational database the relations are explicit (using foreign keys), whereas in all the CDISC tables (including those for SDTM, SEND and ADaM) the relations are mostly implicit, described in some PDF files.

When I started looking into the Excel files, I immediately had to say "OMG" ...

Each of the Excel files seems to have a somewhat different format, some with and others without empty columns, and with completely different headers. So even when I wrote software to read out the content, I would still need to adapt the code (or use parameters) for each input file to have at least some chance of success. Although far from ideal, I then wrote such a little program and could at least produce some raw XML CDISC CodeLists, although the results still require a lot of rework.
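The "parameters per input file" approach can be sketched with stdlib csv instead of Excel: the reader takes a per-file header mapping and normalizes each row to one canonical form, dropping unmapped (e.g. empty) columns. The column names below are invented for illustration:

```python
# Read a delimited codetable, renaming columns via header_map and
# dropping columns (including stray empty ones) that are not mapped.
import csv
import io

def read_codetable(text, header_map):
    rows = csv.DictReader(io.StringIO(text))
    return [
        {canonical: row[src] for src, canonical in header_map.items()}
        for row in rows
    ]

# Two "files" with the same content but different headers (the first
# even has a stray empty column), each needing its own mapping:
file_a = "Variable,Submission Value,\nEGTESTCD,QTAG,\n"
file_b = "SDTM Variable,CDISC Submission Value\nEGTESTCD,QTAG\n"

rows_a = read_codetable(file_a, {"Variable": "variable", "Submission Value": "value"})
rows_b = read_codetable(file_b, {"SDTM Variable": "variable", "CDISC Submission Value": "value"})
```

Workable, but exactly the kind of per-file adaptation that a single machine-readable publication format would make unnecessary.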

So I started with the DS (Disposition) codetable, which went pretty smoothly.

Then I decided to tackle a more complicated one, the codetable for EG (ECG - Electrocardiogram).
I knew this would be a non-trivial one, as the EG domain itself is pretty weird. In contrast to normal CDISC practice, EGTESTCD and EGTEST have two codelists, as can be seen in the CDISC-Library Browser:

i.e. one for classic ECGs and one for Holter Monitoring tests.

Personally, I consider this very bad practice. The normal (good) practice is to have a single codelist, and then use Define-XML ValueLists with "subset" codelists for different use cases. This is a practice also followed by CDISC for other domains, e.g. by publishing a subset-codelist for units specifically for Vital Signs tests.

Also, when creating SDTM datasets, we define subset codelists all the time in our define.xml, e.g. based on the category (--CAT variable), but we also generate a subset codelist with only the tests that appear in our CRFs or were defined in the protocol. For example for LB (Laboratory) we will not submit all 2500+ terms for LBTESTCD and LBTEST, but only the ones we used or planned to use.
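The subsetting itself is trivial once the terminology is machine-readable: take the full codelist and keep only the tests planned in the study. The tiny "full" list below is of course just a stand-in for the real 2500+ term LBTESTCD codelist:

```python
# Subset a codelist to the tests actually used or planned in the study;
# the resulting subset codelist is what goes into the define.xml.
full_lbtestcd = {"ALT", "AST", "BILI", "GLUC", "SODIUM", "K", "CREAT"}
planned_tests = {"ALT", "AST", "GLUC"}

subset = sorted(full_lbtestcd & planned_tests)  # intersection, ordered
```

This one-liner is the whole point of subset codelists: the define.xml then documents exactly what the reviewer can expect to find in the datasets.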

But maybe the authors of this part of the standard were unaware of define.xml, subset codelists, and especially Define-XML "ValueLists" and the nice possibility to work with "WhereClauses".

So, the codetable for EG, in Excel format, comes with two tabs: "EG_Codetable_Mapping" and "HE_Codetable_Mapping":

 

That the latter is for the "Holter Monitoring Case" is not immediately obvious: there is even no "README" tab explaining the use cases.

As usual (and unfortunately), there are different sets of columns for the different variables the subsets of codes apply to:


This makes it hard to automate anything to use it in software: either one needs to revamp the columns, or do a huge amount of copy-and-paste (as before the CDISC-Library days).

When comparing the contents of the tabs, things even get more complicated.
Some subset codelists appear in both tabs; others, such as the ones for units (for EGSTRESU, depending on the value of EGTESTCD), only in the first. Does this mean the unit subsets are not applicable to the Holter Monitoring use case?

When then comparing the subsets for the value of EGSTRESC (depending on EGTESTCD) in both tabs, some are equal (e.g. for the case of EGTESTCD=AVCOND), while others differ, ranging from a single differing term to a larger set of differing terms.

I tried to resolve all this by adapting my software - it didn't work well. So I started doing ... copy and paste ...

This results in subset codelists like:


with some codelists coming in two flavors, one for the normal case and one for the Holter Monitoring case - of course I gave these different OIDs.

For the units, the organization in the worksheet is pretty unfortunate, so e.g. leading to:


stating that for each of EGTESTCD being JTAG, JTCBAG, JTCBSB and JTCFAG the only allowed unit is "msec" (milliseconds) for EGSTRESU.
This is valid for use in Define-XML "ValueLists". The "WhereClause" would then e.g. say:
"Use codelist CL.117762.JTAG.UNIT for EGSTRESU when EGTESTCD=JTAG".

The better way, however, is to define one codelist, e.g. "ECG_Interval", and define a WhereClause stating when it should be used for EGSTRESU. For the Define-XML ValueList and WhereClause, this leads to e.g.:
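Such a WhereClauseDef can also be generated programmatically. A sketch with stdlib ElementTree, using Define-XML v2.x element names (RangeCheck/CheckValue) but invented OIDs, and of course not a complete define.xml:

```python
# Generate a def:WhereClauseDef stating "EGTESTCD equals JTAG";
# OIDs are invented, and the snippet is a fragment, not a full define.xml.
import xml.etree.ElementTree as ET

DEF = "http://www.cdisc.org/ns/def/v2.1"

wc = ET.Element(f"{{{DEF}}}WhereClauseDef", OID="WC.EG.EGTESTCD.JTAG")
rc = ET.SubElement(wc, "RangeCheck",
                   {"Comparator": "EQ", "SoftHard": "Soft",
                    f"{{{DEF}}}ItemOID": "IT.EG.EGTESTCD"})
ET.SubElement(rc, "CheckValue").text = "JTAG"

ET.register_namespace("def", DEF)  # serialize with the usual "def" prefix
xml_text = ET.tostring(wc, encoding="unicode")
```

A ValueListDef then references this WhereClause via an ItemRef, so the subset codelist for EGSTRESU is selected whenever EGTESTCD=JTAG.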


with the subset item and codelist defined as:

 

and the ValueList of course assigned to EGSTRESU:

 

Essentially, this is all very related to Biomedical Concepts!
For example, the concept "JTAG" (with name "JT Interval, Aggregate") would then have the property that it is an ECG test (and thus related to EGTESTCD/EGTEST in SDTM) and that its unit can only be "msec", at least when using the "CDISC notation" for the unit. Even better, however, would be to use the UCUM notation, which is "ms", is used everywhere in health care except at CDISC ..., and has the advantage of allowing automated unit conversion, which is not possible with CDISC units.
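Why does a computable unit notation matter? With UCUM-style units one can attach conversion factors and convert automatically, which plain CDISC unit strings like "msec" do not support. A sketch with a tiny hand-made factor table (a stand-in, not a UCUM implementation):

```python
# Factors to a base unit (seconds); with UCUM units such tables can be
# derived systematically, with CDISC unit strings they cannot.
to_seconds = {"ms": 1e-3, "s": 1.0, "min": 60.0}

def convert(value, from_unit, to_unit):
    """Convert between units via the common base unit."""
    return value * to_seconds[from_unit] / to_seconds[to_unit]

jt_interval_ms = 340.0
jt_interval_s = convert(jt_interval_ms, "ms", "s")
```

With "msec" as an opaque string, a tool cannot even know it is a time unit, let alone convert it.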

CDISC has now published its first Biomedical Concepts in the CDISC-Library which can be queried using the Library RESTful API:


For example, for the BC "Aspartate Aminotransferase Measurement", the API response (in JSON) is:

 

As I understand it, CDISC is also working on generating BCs starting from codetables, especially for the oncology domains and codelists, where we have similar dependencies between standardized values (--STRESC) and possible units (--STRESU).

It would then be great if we could see all the codetables published by CDISC also published as BCs and made available by the CDISC-Library through the API. With the SDTM information then added, these would correspond to the ValueLists in the define.xml of our SDTM submission.

But I will start with converting these awful Excel codetables to Define-XML CodeLists and ValueLists (with the corresponding WhereClauses of course) first.

Essentially, CDISC should be forbidden from publishing standards (and even drafts of them) as Excel files; only a real, standardized machine-readable form, such as one based on XML or JSON, should be allowed. This would finally allow much better QC of the draft standards (instead of visual inspection!) and make the standards immediately usable in systems and software.

I presume many of you will disagree, so your comments are always welcome!