Thursday, September 8, 2016

--LOBXFL: a follow up

Recently, my French colleague Thierry Lambert remarked: "It is often not possible to derive EPOCH in a generic manner: at baseline, you assign 'SCREENING' to observations done on the day of treatment start because you know from the protocol and the CRF that they were done before the first drug intake. The dates being the same, you cannot assign EPOCH automatically in this kind of case."

The same essentially applies to --LOBXFL: if you only know the date of the observation (without the time), and the observation is on the same date as the first exposure/treatment, you cannot know whether the observation was made before or after the first exposure. So it would essentially be impossible to assign a value of --LOBXFL at all, unless you know that the protocol stated that the observations had to be made before the first exposure, and you trust that the investigator did exactly what is stated in the protocol. However ... trust is good, control is better ...

So, I decided to slightly change the algorithm for assigning baseline "last observation before exposure" records in the "Smart Dataset-XML Viewer". When the last observation before exposure is unambiguously before the first exposure, either because a time part is available for both, or because the observation was on an earlier day than the first exposure day, all is safe, and the record is marked as "last observation before exposure".

If, however, we do not have the time of the observation, and its date is the same as the first exposure date, we cannot be 100% sure that the observation was before the first exposure. In that case, we mark the record with a different color, and provide a tooltip with a warning.
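In pseudocode terms, the adapted algorithm boils down to a three-way classification. The sketch below (Python, with a hypothetical `classify` helper, not the actual Viewer source) exploits the fact that complete ISO 8601 date/times compare correctly as plain strings:

```python
# Hypothetical sketch of the adapted algorithm: classify an observation
# relative to first exposure using (possibly partial) ISO 8601 --DTC values.

def classify(obs_dtc: str, first_exposure_dtc: str) -> str:
    """Return 'safe' when the observation is unambiguously before first
    exposure, 'unsure' when it falls on the same day but a time part is
    missing, and 'after' otherwise."""
    obs_date, obs_has_time = obs_dtc[:10], "T" in obs_dtc
    exp_date, exp_has_time = first_exposure_dtc[:10], "T" in first_exposure_dtc
    if obs_date < exp_date:
        return "safe"                      # earlier day: unambiguous
    if obs_date == exp_date:
        if obs_has_time and exp_has_time:  # both have a time part
            return "safe" if obs_dtc < first_exposure_dtc else "after"
        return "unsure"                    # same day, but a time part is missing
    return "after"

print(classify("2014-01-01", "2014-01-02"))              # earlier day: safe
print(classify("2014-01-02", "2014-01-02"))              # same day, no time: unsure
print(classify("2014-01-02T07:30", "2014-01-02T09:00"))  # both with time: safe
```

Only the "safe" records get the normal "last observation before exposure" marking; the "unsure" ones get the different color and the warning tooltip.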

Let us take an example. First we inspect the DM dataset:


For subject 1015, we see that the date of first exposure is "2014-01-02". This indeed corresponds exactly with the earliest exposure record for that subject in EX.

Let us now take some laboratory (LB) records:



We see that some "last observation before treatment" records have automatically been assigned. For both records in this picture, the observation date(time) is clearly before the date of first exposure, and in both cases the following record is clearly after the first exposure date. So this is safe.
Let us also have a look at the vital signs (VS) dataset:


Also here, for the last "HEIGHT" measurement before first treatment, the assignment is safe, as the measurement was done a week before the first treatment. A bit further down, however, we find:



We notice that the pulse rate measurements were performed on the same date as first treatment, but as no time part is given, we do not know exactly when. We could suppose that they were done before the first exposure, but we cannot be 100% sure. Even when the first exposures were exactly registered (including a time part), we still could not be 100% sure, as the VSDTCs all have a missing time part. So in this case, we are not safe.
So the software treats this case differently: another background color is assigned, and a warning tooltip is provided.

Can the FDA tools do this?

I am uploading the new executables as well as the source code to SourceForge later today. Please feel free to use the software and the source code in any way you would like. It's Open Source!

I think my next blog entry might get the title "--LOBXFL can seriously damage your health" ...

Tuesday, August 30, 2016

Why --LOBXFL should not be in SDTM

In my previous post from last week, I argued that the new SDTM variable --LOBXFL (Last Observation Before Exposure Flag) should not be in SDTM, as it is a derived variable, and can easily be calculated "on the fly" by review tools.
I also promised to implement such an "on the fly" derivation in the Open Source review tool "Smart Dataset-XML Viewer". The latter already has features for "on the fly" calculation of other derived variables like EPOCH and --DY.

It took me about 6 hours to implement the new feature "highlight last observation before exposure" in the Viewer. When the user now selects "Options - Settings" and navigates to the "Smart features" tab, a new option becomes visible:


The new option is near the bottom: "Derive and Highlight Last Observation records before first Exposure". Essentially, this corresponds to records where the future value of --LOBXFL is "Y", but here it is derived "on the fly" instead of relying on the flag in the record.
Also note the first two checkboxes, which allow "on the fly" derivation of first and last study treatment (based on EX) and display them as a tooltip on the DM record, essentially making RFXSTDTC (Date/Time of First Study Treatment) superfluous.

When checking the checkbox "Derive and Highlight Last Observation records before first exposure", a new dialog is displayed, asking the user to choose between two additional options:


It asks the user whether the derivation of the "last observation before first treatment" records should be based on trusting RFXSTDTC in DM, or whether the tool should retrieve the "first exposure" for each subject directly from EX. The second option is of course the best, as reviewers should essentially make their own judgments and not rely on derived information (which may be erroneous) submitted by the sponsor.
However, to demonstrate this, let us "trust" the submitted value of RFXSTDTC (Date/Time of First Study Treatment).
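Deriving the first exposure from EX essentially means taking the earliest non-missing EXSTDTC per subject. A minimal sketch of that idea (hypothetical record layout, not the Viewer's actual code):

```python
# Sketch: derive the date/time of first exposure per subject as the earliest
# (lexicographically smallest) non-empty EXSTDTC, instead of trusting the
# sponsor-derived RFXSTDTC in DM. EX records are assumed to be simple dicts.

def first_exposure_per_subject(ex_records):
    first = {}
    for rec in ex_records:
        subj, dtc = rec["USUBJID"], rec.get("EXSTDTC", "")
        if not dtc:
            continue  # skip records with a missing start date
        if subj not in first or dtc < first[subj]:
            first[subj] = dtc
    return first

ex = [
    {"USUBJID": "1015", "EXSTDTC": "2014-01-16"},
    {"USUBJID": "1015", "EXSTDTC": "2014-01-02"},
    {"USUBJID": "1023", "EXSTDTC": "2012-08-05"},
]
print(first_exposure_per_subject(ex))
# {'1015': '2014-01-02', '1023': '2012-08-05'}
```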

After loading the SDTM submission datasets, let us have a quick look at the DM dataset. Here it is:


One sees that the first and last date/time of study treatment exposure are displayed as a tooltip on the "USUBJID" cell, making RFXSTDTC and RFENDTC superfluous (also note that subject 1057 was a screen failure). For subject 1015, the date/time of first exposure is 2014-01-02, as derived from the EX records.

Let us now inspect the VS (vital signs) dataset. I moved some columns around (another of the many features of the viewer) to obtain a more "natural" order of the variables.


One sees that 3 records for DIABP (diastolic blood pressure) are highlighted. Their VSDTC (date/time of collection) is identical and equal to the first treatment date.
This already leads to a first discussion point about baseline flags, which is a discussion about data quality: if treatment and observation points are not precisely collected (i.e. including the time, not only the date), one cannot always know whether an observation was made before or after the first treatment. In this case, one only knows the observations were made ON the same day as the first treatment.
Also, we see that the sponsor-assigned baseline flags (VSBLFL=Y) are correct.

Let us look somewhat further in the table:


We see that the last observation for "HEIGHT" before first study treatment is highlighted, and we see that 3 records for "PULSE" (Pulse Rate) are highlighted. We however also see that for the highlighted "HEIGHT" record, the sponsor did not set a baseline flag. It might have been forgotten, or it was decided that "HEIGHT" is irrelevant for the analysis of this study. A reviewer may judge differently.

For the second subject (1023), we find:


Something is strange here! The first three records for DIABP are marked as "last observation before first study treatment", but the baseline flags set by the sponsor are not on these records, but appear for the observations in the next visit.

What happened?
Did the sponsor assign the baseline flags incorrectly? Or did something else happen?
Another possibility is that RFXSTDTC was incorrectly derived by the sponsor (in DM), and as we decided to base the "on the fly derivation" on RFXSTDTC (which reviewers should i.m.o. not do), the "last observation records" are incorrectly assigned.

So let's not trust the submitted RFXSTDTC and let the tool derive it from the EX records:


And then inspect the generated table for subject 1023 again:





We now see that the highlighted records (derived "on the fly") correspond to the records for which the sponsor set the baseline flag to "Y".

If we go back to the DM record for this subject, everything becomes clear:


We see that, one way or another, the value of RFXSTDTC was not correctly assigned by the sponsor. It states "2012-08-02", whereas the real first exposure date/time (derived "on the fly" from EX and displayed in the tooltip) is "2012-08-05".
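Such a discrepancy is exactly what a simple automated consistency check could surface. A sketch (hypothetical function and record layout):

```python
# Hypothetical consistency check: compare the sponsor-submitted RFXSTDTC
# (from DM) with the first exposure derived from EX, and report mismatches.

def check_rfxstdtc(dm_records, first_exposure):
    """first_exposure: dict mapping USUBJID to the earliest EXSTDTC."""
    mismatches = []
    for rec in dm_records:
        subj = rec["USUBJID"]
        derived = first_exposure.get(subj)
        if derived is not None and rec.get("RFXSTDTC") != derived:
            mismatches.append((subj, rec.get("RFXSTDTC"), derived))
    return mismatches

dm = [{"USUBJID": "1023", "RFXSTDTC": "2012-08-02"}]
print(check_rfxstdtc(dm, {"1023": "2012-08-05"}))
# [('1023', '2012-08-02', '2012-08-05')]
```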

Conclusions

These results show again that:
  • derived variables should NOT be in SDTM, as they can easily be calculated or derived "on the fly" by review tools
  • derived variables mean data redundancy, which is always bad in data sets: when two values for the same data point differ, one can never know which one is correct
  • reviewers should NEVER, NEVER make decisions based on derived variables that were submitted by the sponsor, be it baseline flags, --DY values, or EPOCH values. They should use their tools to derive these themselves, directly from the real source data.
  • implementing such "on the fly" derivations in review tools is a piece of cake. It took me just 6 hours to implement the current one in the "Smart Dataset-XML Viewer". Implementing other similar features cost me even less time.



I still need to clean up my source code a bit, and will then publish a new version of the software, including the source code, on the SourceForge project website. Once done, I will let you know through a comment.

As usual, your comments are very welcome.


Also read the follow-up post "--LOBXFL: a follow up"


Tuesday, August 23, 2016

SDTM derails: new derived variables



The "Study Data Tabulation Model v.1.5" has recently been published as part of the new SEND standard v.3.1. The SDTM Implementation Guide (SDTM-IG) describing how the SDTM model v.1.5 should be implemented in the case of human studies will probably be released for public review in the next weeks.

A quick look at the "Changes from v.1.4 to v.1.5" reveals that some new variables have been added to the model, including some "derived" ones, and some that essentially contain metadata.
However, SDTM, according to its own principles, should not contain derived data, and metadata should go into the define.xml, not into the datasets themselves.


The most obvious new variable is the --LOBXFL (Last Observation Before Exposure Flag), which can only have the value Y or null. Its definition is: "Operationally-derived indicator used to identify the last non-missing value prior to RFXSTDTC" (the latter is the date/time of first study drug/treatment exposure).
This variable is clearly "derived" and should not be in SDTM. So why is it there?
The answer is found in the latest version of the FDA "Study Data Technical Conformance Guide v.3.1" (July 2016) stating: "Baseline flags (e.g., last non-missing value prior to first dose) for Laboratory results, Vital Signs, ECG, Pharmacokinetic Concentrations, and Microbiology results. Currently, for SDTM, baseline flags should be submitted if the data were collected or can be derived".
The SDTM development team seems to have taken the opportunity to make this a new variable, with the possibility of phasing out the --BLFL variable, which was not well defined.


In my opinion, derived variables (such as EPOCH, --DY, etc.) should be calculated by the review tools at the FDA, and not be submitted by sponsors. The reason is that such variables jeopardize the model (data redundancy) and lead to errors. For example, I have seen submissions where up to 40% of the --DY values were incorrect! I expect that the same will happen for --LOBXFL in future submissions. This may be highly problematic, as reviewers will rely on data that is possibly erroneous due to derivation problems, instead of relying on their own "on the fly" derivation (trust is good, control is better).
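For comparison, the --DY rule such tools would need to implement is equally simple. A sketch of the standard SDTM study-day convention (there is no study day 0):

```python
from datetime import date

# Sketch of the standard SDTM study-day rule:
#   --DY = (--DTC date) - (RFSTDTC date) + 1  on or after the reference start,
#   --DY = (--DTC date) - (RFSTDTC date)      before it (no day 0).
# Only the date parts of the ISO 8601 values are used.

def study_day(dtc: str, rfstdtc: str) -> int:
    d = date.fromisoformat(dtc[:10])
    ref = date.fromisoformat(rfstdtc[:10])
    delta = (d - ref).days
    return delta + 1 if delta >= 0 else delta

print(study_day("2014-01-02", "2014-01-02"))  # 1: day of first dose
print(study_day("2014-01-09", "2014-01-02"))  # 8: one week later
print(study_day("2013-12-26", "2014-01-02"))  # -7: one week before
```

With the derivation this trivial, a 40% error rate in submitted --DY values is hard to excuse.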

For example, suppose I am testing a new blood pressure lowering agent, and have the following values: 140/95, 120/80 and 122/82, and, erroneously, the second one is assigned by the sponsor as "last non-missing value prior to dose" (VSLOBXFL=Y) instead of the first one. Can you imagine what can happen?

I haven't tried yet, but I guess that I can add a feature to the "Smart Dataset-XML Viewer" that highlights the records containing the last value before exposure by finding it "on the fly". As on other occasions, I think I can program that in maybe 1-2 evenings (see here). Now, I am not a super-programmer, so I wonder why the FDA (with much more resources than I have) was not able to realize such simple features in its tools in the last 20 years.


The following variables have also been added: --ORREF (Reference Result in Original Units), --STREFC (Reference Result in Standard Format), --STREFN (Numeric Reference Result in Std Units).
I presume the "origin" in these cases can be "assigned" (but then it is metadata, which i.m.o. belongs in the define.xml), or "derived". The document gives the following example: "value from predicted normal value in spirometry tests".
Now, I worked for some time in this area, and know that such values are usually derived from the age and sex of the subject (see e.g. https://vitalograph.co.uk/resources/ers-normal-values), or sometimes using a few more variables (additionally, height, weight, … - see e.g. http://dynamicmt.com/dataform3.html). In such a case, it would be better if the reviewer could generate these reference values himself (so not trust that the sponsor has provided the correct value), e.g. by using a RESTful web service. We already developed such a RESTful web service for LOINC codes, and implemented it in the "Smart Dataset-XML Viewer", and I guess it would also be very simple to generate similar RESTful web services for normal values in spirometry.
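As an illustration of how such a reference value is pure derived data: a predicted normal FEV1 follows directly from sex, height and age. The sketch below uses the ECSC/ERS (1993) reference equation for adult males as an example (a real service would of course cover both sexes and all age ranges, and should be checked against the published source):

```python
# Illustration: a predicted normal FEV1 (in litres) for an adult male,
# using the ECSC/ERS (1993) reference equation as an example.
# Height in metres, age in years.

def predicted_fev1_male(height_m: float, age_years: float) -> float:
    """Predicted FEV1 for an adult male per the ECSC (1993) equation."""
    return 4.30 * height_m - 0.029 * age_years - 2.49

# e.g. a 40-year-old man of 1.80 m
print(round(predicted_fev1_male(1.80, 40), 2))  # 4.09
```

Since the value is fully determined by subject characteristics, a reviewer's tool (or a RESTful web service behind it) can always recompute it, which is exactly the point: there is no need to submit it.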

In case such a reference value is independent from the subject itself (e.g. a fixed value for the specific test), I think it is to be considered as metadata, and should go into the define.xml. I realize that the define.xml needs to be extended for that, based on the "ReferenceData" element in the core ODM.

I will try to add the new feature "highlight last observation before exposure" to the "Smart Dataset-XML Viewer" next week (first taking a few days of vacation…).








Sunday, May 8, 2016

def:WhereClause in XQuery

Today, I worked on the PMDA rule SDTM-CT2005: "Variable should be populated with terms from its CDISC controlled terminology codelist, when its value level condition is met"

This is a rule about value-level data and the associated codelists, for example that when VSTEST=FRMSIZE, VSORRES must be one of "SMALL", "MEDIUM" and "LARGE".
Or: VSORRESU must be "cm" when VSTESTCD=HEIGHT and DM-COUNTRY is CAN, MEX, GER, or FRA, and must be "inch" when VSTESTCD=HEIGHT and DM-COUNTRY=USA.

The latter is of course special, as it crosses the boundaries of domains, and thus of files. When you have all your submissions in a native XML database, however (as I recommended to the FDA, but no reaction at all so far...), this rule shouldn't be too hard to implement.

We are currently implementing all validation rules of CDISC, the FDA and the PMDA in XQuery, the open, vendor-neutral W3C standard for querying XML documents and databases, and thus also Dataset-XML submissions.

The challenge in this rule is that one needs to translate the contents of the def:WhereClauseDef elements in the define.xml, like:


with this "WhereClauseDef" referenced from a def:ValueList:


applicable to the variable SCORRES, as defined by:


So, how do we translate the "def:WhereClauseDef" into an XQuery statement? Of course the XQuery script can read the "def:WhereClauseDef" and the "RangeCheck" element in it, but it requires a "where" FLWOR expression like:

where $itemgroupdata/odm:ItemData[@ItemOID="IT.SC.SCTESTCD" and @Value="MARISTAT"]

So I wrote an XQuery function that accepts a def:ValueListDef node and returns a string which essentially is an XPath. Here it is:


The function is not perfect yet: it works well for the simple case where there is only one "RangeCheck" within the "def:WhereClauseDef", the "Comparator" is "EQ" or "NE", and the check value is a string (the most common case). It doesn't work yet for more complicated cases - but I am working on it...
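For readers who do not use XQuery, the same translation idea can be sketched in Python for that simple EQ/NE case (namespaces as in Define-XML 2.0; element and OID names taken from the example above):

```python
import xml.etree.ElementTree as ET

# Sketch (in Python rather than XQuery) of the translation idea: turn the
# RangeCheck of a def:WhereClauseDef into an XPath-style predicate on
# ItemData. Handles only a single RangeCheck with comparator EQ or NE.

ODM = "http://www.cdisc.org/ns/odm/v1.3"
DEF = "http://www.cdisc.org/ns/def/v2.0"

wc_xml = f"""
<WhereClauseDef xmlns="{ODM}" xmlns:def="{DEF}" OID="WC.SC.MARISTAT">
  <RangeCheck Comparator="EQ" SoftHard="Soft" def:ItemOID="IT.SC.SCTESTCD">
    <CheckValue>MARISTAT</CheckValue>
  </RangeCheck>
</WhereClauseDef>"""

def where_clause_to_xpath(wc) -> str:
    rc = wc.find(f"{{{ODM}}}RangeCheck")
    item_oid = rc.get(f"{{{DEF}}}ItemOID")
    value = rc.find(f"{{{ODM}}}CheckValue").text
    op = "=" if rc.get("Comparator") == "EQ" else "!="
    return f'odm:ItemData[@ItemOID="{item_oid}" and @Value{op}"{value}"]'

wc = ET.fromstring(wc_xml)
print(where_clause_to_xpath(wc))
# odm:ItemData[@ItemOID="IT.SC.SCTESTCD" and @Value="MARISTAT"]
```

The generated string is exactly the predicate used in the "where" FLWOR expression shown earlier.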

The function returns a string, which is essentially XQuery source code, but even XQuery needs executable code. Fortunately, there is the "util:eval" function (xdmp:eval in MarkLogic, xquery:eval in BaseX), which takes a string that is itself XQuery code as an argument and evaluates it. In our validation script this looks like:


What this code snippet does is iterate over all "ItemRef" child elements of a def:ValueListDef element, pick up the corresponding "def:WhereClauseDef" element, translate it into an XQuery snippet, and evaluate it on the current "SCORRES". If the XQuery returns an answer (in this case an "ItemData" element), this means that the condition is applicable to the current record.

In the next step, it is then checked whether there is a codelist for SCORRES at the value level. For example, if SCTESTCD=MARISTAT, then the ItemDef "IT.SC.SCTESTCD" is applicable, for which there is an associated CodeList "CL.MARISTAT" with allowed values "DIVORCED", "DOMESTIC PARTNER", "LEGALLY SEPARATED", "MARRIED", "NEVER MARRIED" and "WIDOWED".
If the actual value of the current data point is not in the value-level codelist (and there is such a codelist), an error is produced:
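In Python terms, this final check amounts to something like the following (codelist contents and names taken from the example above; the real implementation is of course in XQuery):

```python
# Sketch of the final check: when the value-level condition is met, verify
# the actual value against the associated codelist (names hypothetical).

MARISTAT_CODELIST = {"DIVORCED", "DOMESTIC PARTNER", "LEGALLY SEPARATED",
                     "MARRIED", "NEVER MARRIED", "WIDOWED"}

def check_value(testcd: str, value: str) -> list:
    errors = []
    if testcd == "MARISTAT" and value not in MARISTAT_CODELIST:
        errors.append(f"SCORRES value '{value}' not in codelist CL.MARISTAT")
    return errors

print(check_value("MARISTAT", "MARRIED"))  # no error
print(check_value("MARISTAT", "SINGLE"))   # one error message
```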


The complete code for this rule PMDA-CT2005 can be found at:

http://www.xml4pharma.com/download/CT2005_WhereClause.xq

I will of course further refine this XQuery function, especially for multiple RangeCheck child elements and for the "IN" and "NOTIN" comparators. When finished, I will again publish that code.

If you would like to contribute to the development of validation rules in the vendor-neutral XQuery language, please just let me know.