In this blog post, Arturo J. Morales, PhD, chief technology and data officer for WCG Analgesic Solutions, discusses how central surveillance using statistical process control (SPC) and other analytical techniques, combined with expert human analysis, maximizes assay and endpoint sensitivity, allowing sponsors to meet scientific and regulatory expectations for quality data in clinical trials.
It’s not just pain. Clinical trials in pain are notoriously challenging and, too often, the problem isn’t the therapeutic being tested. Your drug may be efficacious, but your data didn’t prove it.
The problem is the data, and assay sensitivity, the ability to separate drug from placebo, is a frequent culprit. But this problem isn’t limited to pain or even CNS trials. Anywhere you have a subjective outcome you’re trying to turn into an objective measure, you’ll encounter data problems. You may be able to identify them, but can you solve them?
Beyond Sesame Street
Identifying aberrations in data is not the challenge. Sesame Street has been teaching us to be data scientists for 50 years: it’s very easy to tell that one thing is not like the others. But merely looking at data and saying, “This one looks different,” describes the way people have been approaching monitoring in clinical trials until now.
Obviously, in a large clinical trial, simply being able to identify aberrations isn’t enough. Consider the sheer quantity of variables; only a handful are likely to have a direct impact on the outcome. Focusing on too large a set of variables will lead to many signals that are clinically irrelevant.
We aren’t looking for every questionable blip on the radar. The key is that we know which blips are likely significant. For example:
- In a patient: Too high or too low score variability in symptom reporting or discordance between caregiver and clinician scores
- At a particular site: Multiple subjects with high anxiety scores
- Across the entire study: A change over time in disability scores or number of adverse events
Humans select those metrics, focusing on things that have a reason to be looked at, not just a bunch of variables that may or may not be relevant. After all, variation itself isn’t a bad thing. Aberrations may signal nothing.
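To make the first bullet concrete, here is a minimal sketch of an SPC-style check for subject-level score variability. The post does not describe QDSS internals, so everything here is an assumption: the data shape (repeated numeric symptom scores per subject), the use of a z-score on per-subject standard deviations, and the cutoff are all illustrative.

```python
import statistics

def flag_variability_outliers(scores_by_subject, z_cut=3.0):
    """Flag subjects whose symptom-score variability is unusually high
    or low relative to the rest of the study.

    A subject who reports the same score at every visit (suspiciously
    low variability) is as much a signal as one whose scores swing
    wildly. Both surface here as a large absolute z-score.

    scores_by_subject: dict mapping subject ID -> list of numeric scores.
    z_cut: illustrative control limit, not a value from the post.
    """
    # Per-subject standard deviation of repeated scores.
    sds = {s: statistics.pstdev(v)
           for s, v in scores_by_subject.items() if len(v) > 1}
    mean_sd = statistics.mean(sds.values())
    spread = statistics.pstdev(list(sds.values()))
    if spread == 0:
        return {}  # all subjects equally variable; nothing to flag
    # Flag subjects whose variability sits outside the control limits.
    return {s: round((sd - mean_sd) / spread, 2)
            for s, sd in sds.items()
            if abs((sd - mean_sd) / spread) > z_cut}
```

A flagged subject is only a statistical signal; as the post stresses, deciding whether it is clinically relevant remains a human judgment.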
The data collected is then analyzed by clinical and data experts, who make recommendations based on clinical operations, disease knowledge and regulatory expertise. For example, at some sites, an apparent decrease in the quality of patient-reported outcome data may mean staff and patients need more training in accurate symptom reporting. This training refreshes them on key concepts shown to affect data quality and thus can potentially improve the outcome of the study.
The resulting data and analyses are provided in easy-to-use reports and interactive visualizations to sponsors and CROs. The interactive dashboard also allows users to drill up or down the trial data at the subject, site and study levels, giving them the ability to explore near real-time data in a simple and intuitive way.
Human intelligence + AI = QDSS
No machine, no matter how learned or intelligent, can handle that task. Our Quantitative Data Surveillance System (QDSS) combines SPC and other methods with interpretation and analyses by a team of clinical experts.
We think of it as human-enhanced AI. We're building our systems to help people make decisions. You need human knowledge to make sense of the data. But you also need the intelligent systems: If you try to tackle this without the systems to help you, you are quickly overwhelmed by the number of signals, the amount of work, the quantity of data and the challenge of consistently applying interventions to avoid introducing bias in the study.
An early-warning system
Sponsors and CROs can now look under the hood while the trial is still running and blinded, and identify threats to the outcome of the study. That’s a monumental change. In the past, you set up your trial, and you hoped you designed your protocols perfectly and the sites executed them accurately. Then you closed your eyes and hoped for the best for two years.
The field’s changing. Our goal is to say, “Well, if we know there’s a problem or we know something smells fishy, can we look at it, can we identify it? Can we systematically and consistently recommend the mitigation strategies and actions to correct it, and can we help you to implement those interventions?”
We can. We’ve had tremendous success with our predictive, analytics-driven QDSS for clinical trial monitoring.
Throughout the course of the trial, QDSS monitors selected metrics to identify statistically significant aberrant signals in near real-time, determine their clinical relevance and intervene before they become irreversible and adversely affect outcomes.
That latter part is critical, and it requires human expertise.
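The post names SPC but not a specific chart. One standard SPC technique for the kind of signal described above, a gradual change over time rather than a single outlier, is the CUSUM chart, sketched below. The target, the slack parameter `k` and the decision threshold `h` are illustrative assumptions, and in practice both would be scaled to the metric’s variability.

```python
def cusum_drift(values, target, k=0.5, h=4.0):
    """Two one-sided CUSUM accumulators for detecting sustained drift
    of a monitored metric away from its expected level.

    values: metric observations in time order (e.g., per-visit means).
    target: expected level of the metric.
    k:      slack; drift smaller than k per observation is ignored.
    h:      decision threshold; crossing it raises an alarm.
    Returns the index of the first alarm, or None if no drift is seen.
    """
    s_hi = s_lo = 0.0
    for i, x in enumerate(values):
        # Accumulate only the excess beyond the slack k, so noise
        # around the target decays back to zero instead of building up.
        s_hi = max(0.0, s_hi + (x - target - k))  # upward drift
        s_lo = max(0.0, s_lo + (target - x - k))  # downward drift
        if s_hi > h or s_lo > h:
            return i
    return None
```

Because CUSUM accumulates small, persistent deviations, it can raise a signal well before any single observation looks extreme, which is what makes intervention possible before a problem becomes irreversible.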
And beyond that, what we learn can make future trials better: We can also perform a retrospective analysis of completed trials in QDSS as a Completed Study Investigation (QDSS CSI). The learnings we derive from past trials can then increase sensitivity earlier and optimize the variables we look at and how we look at them for clinical relevance.
Finally, it’s important to consider the regulatory aspect. QDSS is a step forward in realizing the FDA’s Risk-Based Monitoring Guidance as it helps improve the quality of data and reduce human-induced bias in clinical trials.
Any trial with subjective measures needs a way to identify--and mitigate--data-quality risks that matter. QDSS does just that.
To learn more about maximizing assay sensitivity and endpoint outcomes, contact Sam Dranoff at firstname.lastname@example.org.