
Are Your Raters Undermining Your Alzheimer’s Trials?

Assessing changes in cognition is central to an Alzheimer’s disease (AD) trial, but those changes can be subtle, and they unfold over years.

And as we all know, historically there’s been a high error rate, with mistakes in both the administration and scoring of the cognitive scales. The resulting variance can ultimately doom a trial, particularly a disease-modifying trial, where the treatment effect over a shorter duration is expected to be relatively small. In such cases, error variance has the potential to interfere with signal detection.

More than ever before, raters need to be carefully trained. We used to teach raters what to do, test them, and then send them on their way to conduct assessments, with no ongoing training. That may have been fine for some short trials, but it will not work for extended CNS trials, especially Alzheimer’s trials. Unfortunately, too many sponsors and CROs still take that approach. Among the consequences: rater drift and inconsistency.

Improving standardization

In a multiyear trial, rater drift is a critical concern. So is consistency across raters: as new raters join over the course of a long trial, they need to administer and score the same way as all the other raters. The risk of variability is greatest in international trials, where raters with widely diverse backgrounds are collecting the outcome measures.1

To ensure consistency, you must change the way you train.

At WCG MedAvante-ProPhase, we’ve largely shifted our approach to online modules that are live throughout the life of the study. We deliver flexible training based on decades of clinical experience.

In fact, we never stop training. Our eLearning Center, available online 24/7, provides opportunities for self-training and refreshers; this can be particularly useful during long studies.

Perhaps most important, our assessment platform itself is designed to support ongoing training.

Virgil guides the way

Virgil, our electronic clinical outcome assessment tool, includes built-in clinical guidance as well as consistency checks.

The tablet raters use for the assessments gives them access to the industry’s largest library of electronic rating scales and a tightly integrated web portal for real-time data analysis. It’s already being used in trials at more than 900 sites in 30 countries.

We’ve built in pop-up guides, similar to the alerts that appear in an electronic health record (EHR). Say a rater administering the Clinical Dementia Rating (CDR) enters a score for the Memory domain that is inconsistent with information the informant provided during the interview. This immediately triggers a pop-up alert that highlights the inconsistency and gives the rater the opportunity to reconsider the score.
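
To make the idea concrete, here is a minimal sketch of the kind of rule-based consistency check such a pop-up could rely on. The scoring rule, field names, and thresholds are illustrative assumptions for this sketch, not Virgil’s actual implementation.

```python
# Hypothetical sketch of a rule-based consistency check, in the spirit of
# the pop-up guides described above. The rule, field names, and thresholds
# are illustrative assumptions, not Virgil's actual logic.

CDR_SCORES = (0.0, 0.5, 1.0, 2.0, 3.0)  # valid CDR domain box scores

def plausible_memory_range(informant_items: dict) -> tuple[float, float]:
    """Map structured informant responses to a plausible score range.

    `informant_items` is an assumed structure: item id -> response on a
    0 (no impairment) to 3 (severe impairment) scale.
    """
    worst = max(informant_items.values())
    best = min(informant_items.values())
    # Allow one half-step of clinical judgment on either side.
    return max(0.0, best - 0.5), min(3.0, worst + 0.5)

def check_memory_score(entered: float, informant_items: dict) -> str | None:
    """Return an alert message if the entered score looks inconsistent."""
    if entered not in CDR_SCORES:
        return f"{entered} is not a valid CDR box score."
    lo, hi = plausible_memory_range(informant_items)
    if not (lo <= entered <= hi):
        return (f"Memory score {entered} is outside the range {lo}-{hi} "
                f"suggested by the informant interview. Please reconsider.")
    return None  # no alert; score is consistent with the interview

# Example: informant reports moderate-to-severe impairment, rater enters 0.5.
alert = check_memory_score(0.5, {"recall": 2, "recent_events": 2, "needs_help": 3})
if alert:
    print(alert)  # would surface as a pop-up in the rating interface
```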

We also routinely build central oversight into these long trials via Independent Review. The Virgil system can be configured to automatically audio-record assessments, allowing our expert, calibrated clinicians to review them and give site raters feedback on administration and scoring. An Independent Review can be triggered based on the rater (a rater’s first assessments, or a set interval elapsed since the last review), the visit (key visits such as baseline and final), or Clinical Data Analytics (CDA). The CDA trigger can be built on a statistically implausible score change from the prior assessment, internal data inconsistencies, or other data-based anomalies that are likely to be associated with scoring errors.
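
The trigger logic described above can be pictured as a small set of rules evaluated per assessment. The sketch below assumes hypothetical field names, thresholds, and review intervals; a real study configuration would be protocol-specific.

```python
# Minimal sketch of the three kinds of Independent Review triggers described
# above: rater-based, visit-based, and CDA-based. The Assessment structure,
# thresholds, and intervals are assumptions for illustration only.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Assessment:
    rater_id: str
    visit: str                      # e.g., "baseline", "week_26", "final"
    visit_date: date
    total_score: float
    prior_score: float | None       # score at the previous visit, if any
    rater_assessment_count: int     # assessments this rater has administered
    last_review_date: date | None   # rater's last Independent Review

KEY_VISITS = {"baseline", "final"}
REVIEW_INTERVAL = timedelta(days=180)  # assumed elapsed-time trigger
IMPLAUSIBLE_CHANGE = 6.0               # assumed CDA threshold, scale points

def review_triggers(a: Assessment) -> list[str]:
    """Return the reasons (if any) this assessment should be flagged."""
    reasons = []
    # Rater-based: first assessments, or too long since the last review.
    if a.rater_assessment_count <= 2:
        reasons.append("rater: first assessments administered")
    elif a.last_review_date and a.visit_date - a.last_review_date > REVIEW_INTERVAL:
        reasons.append("rater: review interval elapsed")
    # Visit-based: key visits such as baseline and final.
    if a.visit in KEY_VISITS:
        reasons.append(f"visit: key visit ({a.visit})")
    # CDA-based: statistically implausible change from the prior assessment.
    if a.prior_score is not None and abs(a.total_score - a.prior_score) >= IMPLAUSIBLE_CHANGE:
        reasons.append("CDA: implausible score change from prior visit")
    return reasons
```

In practice, any triggered reason would queue the recorded assessment for review by a calibrated clinician, who then feeds findings back to the site rater.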

Always learning

Virgil addresses the problem of subjectivity while keeping raters trained. It provides raters with real-time clinical guidance as they administer the assessment. As a result, it can cut site-rater errors by 50 to 85 percent across four common cognitive scales.2

The old way of working with raters simply won’t work for today’s studies. There’s no room for methodological imprecision, especially in AD trials. Time is running out.

References

  1. Grill, J.D. et al. Comparing recruitment, retention, and safety reporting among geographic regions in multinational Alzheimer’s disease clinical trials. Alzheimers Res. Ther. 7, 39 (2015).
  2. Negash, S. et al. (MedAvante Inc.; Loyola University Medical Center). Virgil Investigative Study Platform Minimizes Scoring Discrepancies to Improve Signal Detection. Poster presented at the 14th Annual Athens/Springfield Symposium on Advances in Alzheimer Therapy, March 2016.
