This webcast on “How Optimized Training Can Lead to a Positive Clinical Trial” is moderated by Lauren Ozmore, WCG’s Senior Marketing Manager, who will be joined by Dr. Nathaniel Katz, our presenter and the founder and Chief Science Officer of WCG's Analgesic Solutions.
Dr. Katz is considered one of the leading experts on treatment and clinical study design in pain clinical trials. He is a neurologist and pain management specialist with a distinguished career at Harvard Medical School, Brigham and Women's Hospital, and the Dana-Farber Cancer Institute.
Dr. Katz founded WCG's Analgesic Solutions with the mission of modernizing the design and conduct of pain clinical trials to advance the scientific quality of pain clinical research and empower effective treatment for patients. He also holds the position of Adjunct Associate Professor of Anesthesia at Tufts School of Medicine.
Dr. Katz: Thank you so much for joining. Training has not really been viewed as a scientific topic in the world of clinical trials. I think that's a key problem. Training has been viewed as kind of a checkbox activity or something to do to please regulators, but not something that has a direct impact on our ability to achieve our scientific objectives in clinical trials. That's the way of thinking that I'm going to try to overturn today.
Dr. Katz: There will be a few quiz questions that I'll ask you. This is just for you and yourself, nobody's going to be capturing your information on these, but they'll just help me drive some points home. Here's the first question on the slide right now. This should be an easy one for all of you. You probably wouldn't be on the call if you didn't already know the answer to this question. I want you to go ahead and answer it for yourselves anyway. The way a study is conducted influences the observed effect size of treatment. Is that a yes or is that a no?
Dr. Katz: The reason I ask this question is because people have had magical thinking about the way that clinical trials generate data. There's this belief, this strange belief, that if you just give some people the treatment and give some of the people the placebo then somehow through a type of alchemy the trial will generate an observed effect size that will somehow accurately characterize the pharmacology of the treatment effect without having to worry about what happens in between.
Dr. Katz: Of course, the answer to that is yes. It doesn't happen by magic; the way that the study is conducted actually does influence the effect size of treatment. It's not just that the drug has a certain effect and if you give it to people the effect will emerge out of your trial. The trial has to be conducted in a rigorous manner in order for an accurate estimate of the treatment effect to be generated. What is that? What is going on under the hood that has to be done in a particular way in order for the results to accurately reflect the treatment effect?
Dr. Katz: The first, I think, key insight is that a clinical trial is not magic. It's not alchemy. It's a measurement system. Now, for those of you who are not aware, there are a lot of scientists in the world interested in measurement, and they're very interested in the terminology that is used to describe measurement. There's a Joint Committee for Guides in Metrology that consists of representatives of different scientific organizations across biology, physics, engineering, and all sorts of other areas. You can guess which field of science is not represented there. It's, of course, the field of clinical research. Yet there's a lot of good information there that directly relates to the work that we do.
Dr. Katz: One of the terms in the guidance document produced by the Joint Committee for Guides in Metrology is this one: a measurement system is a set of one or more measuring instruments and often other devices assembled to give information used to generate measured quantity values. A measured quantity value would be the patient's pain score, for example, or their blood pressure or their depression score or their cholesterol. Those are the measured quantity values, and the clinical trial is a measurement system. Unless we get clear in our minds that all of the different gizmos and gadgets in the clinical trial have to be calibrated in order for the clinical trial as a whole to generate a clear estimate of the treatment effect, we can't do proper clinical research.
Dr. Katz: Now, the problem that we have when we're doing clinical trials on human beings, which is how we're always doing them, is that the human is one of the key measurement instruments. I can't measure the pain-relieving properties of an analgesic unless I ask somebody how much pain they have. The human is the measurement instrument and that person has to be calibrated along with everything else.
Dr. Katz: The question then for us is what are those components and how do you calibrate them, how do you calibrate a human being? Let's go on to the next slide and we'll learn more about that. When I use the words study participants in this presentation I don't mean just the study subjects, although I know that the word participants is increasingly used for that, I mean everybody involved in the clinical trial. Everybody is doing something that impacts the primary endpoint. I'm calling them a study participant.
Dr. Katz: That can be the patient themselves. That could be the study coordinator. That could be the investigator, etc. I've already asserted to you that the performance of the individuals involved in the clinical trial impacts the observed effect size of therapy. Well, what's actually the evidence for such a bold assertion? There's actually lots of evidence. I just pulled together six studies on this slide that are in the literature that support this point. That the performance of people in the trial impacts the observed effect size of therapy. I pulled out six because I could fit six on a slide, but I assure you that there are more studies out there that make the same point. I'm just going to very, very quickly highlight some of these findings. Not because these individual findings are something that you need to memorize, but because they get across the point, if it wasn't clear already, that how people do their jobs in a clinical trial impacts the results.
Dr. Katz: Top left on the slide, and this should be a no-brainer to everyone: it's about adherence. If people don't take the drug, it's not going to work. Everybody knows that, and yet it's kind of scandalous how little we do to measure the extent to which people are even taking study drug in clinical trials. This is one example from a phase three clinical trial. This is DOV Pharmaceutical's trial of bicifadine. This drug's pivotal clinical trials failed. That's why this drug is not on the market.
Dr. Katz: When they went back retrospectively and said, "Gee, I wonder if the people who took the drug might've done better than the people who didn't take the drug", they found that the people on placebo, in the first blue bar, had a certain treatment effect on their visual analog score. The people who were noncompliant, in the middle bar, had exactly the same treatment effect as the people who were on placebo. No surprise there. The people who were compliant, which in this case was measured by drug levels in the patients' plasma, actually had a statistically significant benefit over placebo. If the study had just been done on compliant patients, or presumably if all patients had been compliant, then this would've been a positive trial and maybe this drug would be on the market now. Seems obvious, but in the real world it's not.
Dr. Katz: The one in the middle top panel is entitled "Accurate Reporting". Again, it's not really rocket science to imagine that patients who report their pain more accurately will show a demonstrable treatment effect compared to people who do not. It took a while to actually demonstrate this. This is the result of a randomized controlled trial that is in the literature. What we did in this trial is we randomized patients with painful diabetic neuropathy to either get trained on how to report their pain accurately, which is the panel on the left, or not get trained to report their pain accurately, which is the bars on the right.
Dr. Katz: If you look at the left side, in the trained group, at the second blue bar in, which is "change in placebo", they had a much smaller response to placebo than the untrained patients. If you look at the grayish bars on the right, at the second bar in going from left to right, you can see that bar, which represents their change in placebo, is much higher than in the trained group. Without belaboring the bars, the trained patients had lower variability, a lower placebo response, and a larger net treatment effect than the patients who were not trained. When you train patients to report their pain more accurately, you see treatment effects that you don't see otherwise.
Dr. Katz: The one on the bottom left is compliance with electronic diaries. If you're doing a trial where you're measuring your treatment effect with electronic diaries, guess what? If the patients are not compliant with those diaries, you're not going to see a treatment effect. This is data from an unpublished clinical trial in osteoarthritis of the knee using an investigational drug that has an overall positive treatment effect. Now, if you divide people into quartiles based on how compliant they are with their e-diaries, where the people on the left are the most compliant and the people on the right, in the third and fourth quartiles, are the least compliant, the green line is the drug effect and the blue line is the placebo effect.
Dr. Katz: You can see that in the compliant quartiles there is a visible separation between the drug effect and the placebo effect. Whereas if you look at the patients who are noncompliant with their e-diaries, in the lower quartiles on the right, you can see with the naked eye that there's no difference between drug and placebo. As far as I know, a study on this has never actually been reported in the literature, but it reinforces what you know from common sense. We put a lot of effort into creating these fancy systems for people to report their symptoms on a daily basis, but we put very little effort into making sure that they comply with those systems.
Dr. Katz: Anyway, I'm going to leave the others to your imagination, but just to leave you with the overall point: if you don't get people to do their jobs in clinical trials, you don't see differences between drug and placebo even where treatment differences actually exist.
Dr. Katz: If you're ready for another question, here it comes: If the reliability of an assessment (and by assessment I mean what's your pain intensity, what's your depression score, do you have osteoarthritis based on the ACR criteria, etc.)…if the reliability of your clinical outcome assessment goes from 0.7, which is kind of common, to 0.4, the sample size that you need to detect a difference between drug and placebo goes up by, A) 10%, B) 30%, C) 70%, D) it doesn't go up, it actually goes down. Pick an answer. I'm going to give you a few seconds.
Dr. Katz: There you go. The answer is actually C. If the reliability of your assessment drops from 0.7 to 0.4, you need 70% more patients in your clinical trial to detect the difference between drug and placebo. What is the reliability of your primary endpoint, the clinical outcome assessment that constitutes your primary endpoint? Do you actually know what it is? And if you could give me an answer, what is that answer based on? If it's based on the reliability that's in the literature, well, guess what? The reliability of assessments in actual clinical trials is usually not as good as what's in the literature, because what's in the literature is only the original validation study on that assessment, or maybe it's from a positive clinical trial, and that doesn't reflect what you generally see in garden-variety clinical trials.
Dr. Katz: Even if the reliability of your assessment was 0.7 at the beginning of the study, how do you know that it has been sustained at 0.7? Because if it's dropping, you're underpowered. This is something that we don't talk about or think about much in clinical trials.
Dr. Katz: If you're interested in this math, it's been done for you. There's a great paper by John Lachin in Clinical Trials, 2004, where he has a very nice table in which he does the math, looking at the reliability of your outcome assessment, whatever that outcome assessment is, which is in column two of the table. It starts at perfect reliability of 1.0, which we never see, going all the way down to horrendous reliability of 0.1. If you assume, just for the sake of argument, that your assessment has a reliability of 1 and you have 80% power and, therefore, you don't need any more patients in your clinical trial (this is your nominal power calculation), he shows you how many more patients you need at various drops in reliability.
Dr. Katz: I picked from his table 0.7 going to 0.4. That was the question on the previous slide, and it means that if you needed 100 patients, now you need 170. You can pick wherever you want on the table and do the math yourself, but the key point is that the reliability of assessments depends upon the job performance of the people doing the assessment. If it's a patient doing the assessment by filling out a patient-reported outcome measure, you're entirely at the mercy of how reliably that patient performs that assessment.
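The arithmetic behind that table can be sketched under a simple classical measurement-error model, in which the required sample size scales inversely with the reliability coefficient. This is an illustrative approximation of the principle, not a reproduction of Lachin's exact table, and the function name is mine:

```python
def inflation_factor(rho_assumed: float, rho_actual: float) -> float:
    """Multiplier on sample size when reliability drops.

    Under a classical measurement-error model, the observed variance is
    the true variance divided by the reliability coefficient rho, and the
    required sample size scales with variance, i.e. with 1/rho.
    """
    if not (0.0 < rho_assumed <= 1.0 and 0.0 < rho_actual <= 1.0):
        raise ValueError("reliability must lie in (0, 1]")
    return rho_assumed / rho_actual

# Reliability drops from 0.7 to 0.4: about 75% more patients under this
# simple model, the same ballpark as the ~70% figure quoted in the talk.
print(inflation_factor(0.7, 0.4))  # ≈ 1.75
```

So a trial powered at 100 patients assuming a reliability of 0.7 would need roughly 170 to 175 patients if reliability actually turned out to be 0.4.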
Dr. Katz: If it's an investigator doing an assessment, like the HAM-D or something like that, then you're entirely at the mercy of the operator who's doing that assessment. Unless you take measures to establish a performance target and sustain it, you're out of luck if reliability isn't what you think it is.
Dr. Katz: This next slide is an extract from what we call a data quality risk assessment. What's that? What we do these days is if we write a protocol or if somebody else writes a protocol for a study, we'll take a look at that protocol, which might be beautiful from a science perspective and from a regulatory perspective. And it might represent the apex of scientific and pharmacologic knowledge about your drug and how to do clinical trials, but then we allow reality to get introduced and we ask the questions at the bottom of the slide. Who are you asking to do things in this clinical trial? Are you asking the patient to do things? Are you asking a study coordinator to do things, an investigator?
Dr. Katz: What are you asking them to do that could impact the measured treatment effect in your clinical trial? For example, in the top row, if you're asking your investigator to diagnose osteoarthritis by the ACR criteria, then that impacts your primary endpoint if you believe that your drug works on osteoarthritis and doesn't work on other things that are not osteoarthritis. Similarly, if you're asking a patient to report their pain intensity, well that's going to impact your treatment effect, which is the fourth row down, and if there's excessive variability, well, you have a problem.
Dr. Katz: The next question we ask is, okay, if we're asking people to do things that impact the primary endpoint, where is their performance likely to vary? If everyone's going to do things the same way, consistently every time, and that meets your performance standards, you don't need training. You're fine. If performance might vary, take, for example, packaging up blood and putting it in a box: if you think that people can do that in a reliable way and it's not going to vary much from coordinator to coordinator and site to site, then you may need very little or no training or remediation on that task. If, on the other hand, you think that reliability in reporting your primary endpoint, whether it's depression or urinary frequency or sleep quality, whatever, is going to vary from patient to patient and is going to impact your primary endpoint, well, then you need to take a look at how you're going to sustain performance in those areas.
Dr. Katz: Then once you've identified those areas of performance that matter, you can ask yourself the final questions on this slide: how could you prevent performance problems? How can you monitor performance during your clinical trial? And how can you remediate problems should you see performance dipping below your threshold? That's what's captured in this data quality risk assessment.
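The logic of that risk assessment can be sketched as a simple filter over the tasks in a protocol. The task entries and field names below are hypothetical, purely for illustration:

```python
# Hypothetical extract of a data quality risk assessment.
tasks = [
    {"task": "Diagnose OA by ACR criteria", "who": "investigator",
     "impacts_endpoint": True, "performance_may_vary": True},
    {"task": "Report pain intensity", "who": "patient",
     "impacts_endpoint": True, "performance_may_vary": True},
    {"task": "Package blood samples", "who": "coordinator",
     "impacts_endpoint": False, "performance_may_vary": False},
]

# Only tasks that both impact the primary endpoint and may vary in
# performance are candidates for training, monitoring, or remediation.
needs_attention = [t["task"] for t in tasks
                   if t["impacts_endpoint"] and t["performance_may_vary"]]
print(needs_attention)
```

Everything that survives the filter then gets the three follow-up questions: how to prevent, how to monitor, and how to remediate.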
Dr. Katz: Now, some of you who are familiar with risk-based monitoring might be saying to yourself, "Oh, this kind of looks like what some people call a RACT assessment," which has been promulgated by a group called TransCelerate. And it does look like that, except that it's focused on the evidence for which variables impact the assay sensitivity, or the observed treatment effect, in clinical trials, as opposed to just focusing on operational things in general, patient recruitment and so on, which everybody kind of knows how to do.
Dr. Katz: So if you want to ask yourself, “where should I be thinking about training my clinical trial”, it starts in this kind of risk assessment.
Dr. Katz: Next question for you. So now you've got some idea of how to figure out which performance tasks are important and which aren't: it's the ones that can impact your primary endpoint and where performance might vary. So if a task is important, and you've figured that out, and performance might be problematic, what are your options for preventing performance problems? Task simplification, changing a system, providing a job aid, or all of the above? You'll notice that I've not included training in this question. I'll give you a few seconds to come up with your answer.
Dr. Katz: Of course, the answer is all of the above. Training is not the cure for everything. If you look at the regulatory guidances from ICH or EMA or FDA on risk-based monitoring, there's a lot of talk about detecting signals of poor performance, which is great, and then there's very little talk about what to do about those signals of poor performance. The one thing that you do see mentioned is training. Well, training is not the cure for all problems. Sometimes if there is a performance problem that you can anticipate, and you'd like to design out of your clinical trial before it happens or you observe a performance problem, there actually may be simpler and more effective things to do about that than training.
Dr. Katz: And here is just a more elaborate form of the response options from that question. So, task simplification: maybe you're asking people to do too much. Maybe you don't need all 90 outcome measures in your clinical trial, of which one is primary, five are secondary, and eighty-five are exploratory (and I realize that math doesn't add up). Maybe you could do with six outcome measures, and then instead of the patient being in the clinic all day, where they can barely think straight after their fourth outcome measure, maybe you can keep them in the clinic for 15 minutes and just get the important things done. Or maybe your study coordinator is doing too many things. Maybe they shouldn't have twelve different electronic systems that they have to sign into, but just one or two.
Dr. Katz: So task simplification is really the first thing to think about.
Dr. Katz: How about task elimination? Well, that's bold. Do I really need all those EKGs? Do I really need all those blood tests? Do I really need all those outcome measures? How about system changes? Here's a good example: eDiary compliance. We wish that patients were compliant with their eDiaries, and in clinical trials we usually have this vague notion that if people are noncompliant, we'll think about asking the study coordinators to call them at some point. But we really don't have any idea how to communicate that to them, or who should be doing it, or when those calls should go out, or how we'd even know if the study coordinator did it. That's a system problem that exists in 99.9% of the clinical trials I've been involved with that have eDiaries. So maybe we need to change the system and simplify how we're going to accomplish that.
Dr. Katz: Another one is task delegation. Maybe the trial is so complicated that the study coordinator shouldn't be doing data entry. Maybe somebody else should be doing that. Maybe we should provide a data entry person. So you get the idea. A job aid: if we're asking people to do a bedside assessment of sensory testing in order to phenotype patients for clinical trials, which we do a fair amount of, maybe a simple laminated card that the person could stick on the machine or on the kit, instead of forcing them to fumble through a 50-page user's manual, would be helpful.
Dr. Katz: Reminders. There's a lot of confusion between what's a reminder and what's a retraining. Say a patient doesn't fill out their eDiary by 10 PM, which is when they're supposed to. Maybe they should get an automated text reminder that says, "Hey, if you don't enter your pain scores, we have no idea if this drug works. We really need you to do that." That's instead of a complicated retraining on the importance of eDiary compliance at the next clinic visit, when a simple reminder might do.
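That kind of reminder is a small piece of system logic rather than a retraining. A minimal sketch, assuming the 10 PM cutoff and the wording described in the talk:

```python
from datetime import time

DIARY_CUTOFF = time(22, 0)  # 10 PM deadline, as assumed in the talk

def reminder_needed(entry_submitted: bool, now: time) -> bool:
    """Fire an automated reminder only if the eDiary entry is still
    missing once the cutoff time has passed."""
    return (not entry_submitted) and now >= DIARY_CUTOFF

if reminder_needed(entry_submitted=False, now=time(22, 5)):
    print("Reminder: if you don't enter your pain scores, "
          "we have no idea if this drug works. Please complete your eDiary.")
```

A real deployment would hook this to the eDiary backend and an SMS gateway; the point is that the remediation is automated and immediate, not deferred to the next clinic visit.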
Dr. Katz: Constraining behavior is a very important concept. One of the major advances in the science of human error actually came out of the Department of Energy's work on how to investigate things like chemical plant explosions back in the 1970s. And the major insight, one that we could learn a lot from in clinical trials, is that human error is what people do when they can. Human error is what people do when they can. So when they did root cause analyses of these chemical plant explosions, the key insight was: don't stop at human error. Oh, Joe Schmo was supposed to turn a valve off and he didn't, and therefore he made an error, and we're done with our root cause analysis. No. The question is, how could the system be set up so that Joe Schmo couldn't fail to turn that valve when he was supposed to?
Dr. Katz: So, constraining behavior. A simple example: the study coordinator can't actually enter data into the EDC system until they've gone through their training, because there's a linkage between the completion of their training in an online system and the opening up of their access, the granting of their username and password. You've constrained behavior so that people can't enter data until they've done their training. So look for opportunities to constrain behavior, so that human error, which will happen if it can, doesn't.
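The EDC example amounts to a gate between a training record and system access. A minimal sketch, with hypothetical user names standing in for a real training-system lookup:

```python
# Hypothetical record from an online training system.
completed_training = {"coordinator_a"}

def can_enter_data(user: str) -> bool:
    """Constrain behavior: EDC access opens only after training is complete."""
    return user in completed_training

print(can_enter_data("coordinator_a"))  # True: trained, access granted
print(can_enter_data("coordinator_b"))  # False: blocked until trained
```

Because the gate lives in the system rather than in a policy document, the untrained-data-entry error simply cannot occur.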
Dr. Katz: Finally, we get to training, which is laborious and time-intensive and expensive, and people don't like it. And so, if there are simpler things than training that we can do to achieve our behavioral performance targets without having to rely on training, then we should do those things. And all of this happens, of course, long before the first site recruits its first patient.
Dr. Katz: So now, finally, halfway through my talk, I'm going to start to talk about training, because training is something you do when it's the right thing to do.
Dr. Katz: So training is a big deal. There's a very nice study looking at the economics of this, showing that organizations in the US alone spend $135 billion per year on training. And people are not going to spend that money unless there's a demonstrable return on investment. The organization has to get money back for spending money on this training.
Dr. Katz: Who spends all this money? Well, it's everybody, but the irony that I'd like to point out is that the companies that you all work for, no matter whether it's an academic organization or a pharmaceutical company or a CRO or wherever you guys are, you're spending this money, your organizations are spending this money on training. All of you guys are presumably executing clinical trials, or otherwise, you wouldn't be on this call. You'd have something better to do.
Dr. Katz: And your organizations are all spending money on training. Where? You're not spending it, generally speaking, on training the people who are performing the tasks in your clinical trials. You're spending it on SOP training. What's the SOP for exiting your building when a fire alarm is pulled, or for revising a document, or for IT quality control? Or HR or sexual harassment training. These things are all important, and there should be training on them, of course. Don't misunderstand me. The irony I like to point out, though, is that your organizations often have as their core activity performing successful clinical trials, and you're spending all this money on training, but you're generally not training the participants in your clinical trials to do the jobs that are so important for a positive outcome. And that strikes me as ironic.
Dr. Katz: So to the extent that you are investing in training, you should know what the scientific literature says about it. There are lots of studies on training across all industries, from the military to arms manufacturers to HR to the postal service. And those studies come to two broad conclusions. Number one, training does improve performance. It does. And number two, whether it does depends on how you do it.
Dr. Katz: On the right side of the slide you see the factors that determine whether training is going to achieve its goals. And all of the research comes down to essentially one conclusion: in order for training to actually improve performance, it must incorporate two components. Number one is practice of the skill that you're trying to get people to perform, and number two is feedback when they perform that task. Practice and feedback.
Dr. Katz: Now, you all have presumably been involved in a lot of training in clinical trials, most of which happens at investigator's meetings. How much of an investigator's meeting is taken up by communication about the skill that you're trying to get people to perform, practice of that skill, and feedback on that performance? Anyone else feel like there's a disconnect there? I can tell you, I've been to a zillion clinical trial investigator's meetings over the last 25 years, and except sometimes at medical device companies, where you have to insert a device a certain way, or in injection studies, where you have to do an injection a certain way (there I have seen some practice and feedback), it's rare that the skill you're asking people to perform has been defined and that there's practice and feedback. Yet that's what's necessary, based on the literature, if you want people to do what you want them to do.
Dr. Katz: Alright. Next question for you guys. What is the difference between training and education? This is very important. A, education is about what you know and training is about what you do. B, education does not generally change behavior but training does. C, A and B are true. And D, neither A nor B are true. I'll give you a few seconds of silence.
Dr. Katz: A and B are true. Education is generally taken to mean what you know, and training is about what you do. Education is the first two years of medical school, where you read books all the time. Congratulations, now you know a lot, but you have no idea how to do anything. The next two years are about actually getting out there and doing things with patients, and learning how to do things is what makes you a doctor. We know from 30 years of research on continuing medical education, just to give one example, which is almost always lectures like what I'm doing with you now, reading brochures and monographs and things like that: it never changes behavior. Whereas with things called training, you actually get people to do things and provide incentives for them to do them. If you're going to change what people do, that's what you have to do.
Dr. Katz: Which reinforces the irony, which is that investigator's meetings generally are education, yet, somehow, we expect behavior to change. It doesn't.
Dr. Katz: So, if we're going to have a conversation about training, which we're 34 minutes into right now, maybe it would be good to define what we mean by training. When people talk about things and don't define what they mean, I usually start checking my email, because it means the person doesn't know what they're talking about. So I would offer you the following definition of training: an activity engaged in by a learner or a trainee with the intent of optimizing real-world performance, generally incorporating education, practice, and feedback.
Dr. Katz: Now, what did I just do? I reintroduced this word education in the definition of training, when I just tried to persuade you that education isn't really even useful for changing behavior. So there's a subtle but important nuance there, which is that if you're trying to get people to do things and you use some kind of training program to do that, education is helpful because it helps you set the context. Let's talk a little bit about what it is we're trying to accomplish here. What do we know about this skill we're asking you to perform? What are you going to experience when you try to bring this back to the real world?
Dr. Katz: So some education at the beginning to transfer some knowledge actually whets the appetite of the learner for the practice and feedback that are the pointy end of the spear as far as training goes. So education is helpful as a component of training. But don't let that confuse you into thinking that the education is the training. It's not. If you are confused about that, and you deliver education believing that it's training, you're not going to achieve the performance standards that you are actually looking for, the ones that lead to the return on investment your organization expects. After all, somebody's money is being spent on this training.
Dr. Katz: So, the question that some people have in their minds when I talk about things like this is what do you mean? We already do training in clinical trials. We have these super expensive and fancy investigator's meetings in ritzy towns. We already do training. What are you talking about?
Dr. Katz: So, here's why what we currently do is not training. We call it training, but it isn't. Here are 10 reasons why. First of all, we often rely on passive transfer of information, like PowerPoint slides and getting people to read things, and as I mentioned earlier, those are not on the list of the components of training that actually improve performance. That's education.
Dr. Katz: Number two, which is really part of number one, it's death by PowerPoint. We make people stay awake through endless PowerPoint presentations. We all know that.
Dr. Katz: Number three is that we tend to focus on repetition of boring information to satisfy perceived regulatory requirements. How many times have we done GCP training, for example? Is it important? Yes. Does it matter for the observed effect size of therapy, which is what we really care about? Not so much. Which is also ironic, because if you read the risk-based monitoring guidelines, the regulators actually want training that is going to improve people's skill in a measurable way, so we can get accurate measures of treatment effect. But yet we put words in the regulators' mouths and say, "Oh, they just want GCP training. They want protocol training, so that's what we're going to give, because we have to satisfy the regulators." And yes, those things are necessary, but not sufficient to actually satisfy what is in those regulatory guidances.
Dr. Katz: Number four, we don't evaluate competency after we're done with training to make sure that the trainee actually has the requisite skill that we've asked them to acquire. We don't set performance targets. We don't give practice and feedback. I'm up to number seven. This is maybe more of a subtlety, but if you look at other areas of industry where training has received a lot of attention, one of the rules they have adopted is: don't think about training as a one-time event. Think about training as a longitudinal activity. In the world of clinical trials, training is the investigator's meeting. We've done the IM, we're all packing our bags, we're going home, we're done. Whereas that's an aberration in industry in general, where yes, you might have a training meeting. Great. You have one. But then you remain in communication with your trainees. You monitor their performance. You give them updates, you give them refreshers. You do re-trainings if you have to. It's an ongoing activity.
Dr. Katz: High cost. I don't have to talk about that. We don't do refreshers and follow-up. Number 10 is quite important, because if you don't set some kind of measurable performance target, then how are you going to know whether people are achieving it? And if you have set one, good for you. That's great. But if you're not monitoring it on an ongoing basis, then how do you know whether people are still achieving it? It's the blind leading the blind. W. Edwards Deming, a pioneer of statistical process control in the United States and later in Japan, is credited with saying that if you can't measure it, you can't manage it. That, of course, is a mantra of Six Sigma and all that. But do we, in our clinical trials, set performance targets and actually monitor them in a quantitative way? Unless you're using central statistical monitoring systems where you're doing that, the answer is you're not.
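[Editor's illustration] The "set a measurable target and monitor it" idea can be made concrete with a minimal sketch. This is not a description of any specific central statistical monitoring system; the metric, the target value, and the site data below are all hypothetical, chosen purely to show the shape of the approach.

```python
# Minimal sketch: flag sites whose measured performance falls below a
# pre-specified target. All names and numbers here are hypothetical.

# Hypothetical performance metric: proportion of assessments completed
# within the protocol-specified visit window, per site.
TARGET = 0.90  # the pre-specified, measurable performance target

site_performance = {
    "Site 101": 0.95,
    "Site 102": 0.82,
    "Site 103": 0.91,
}

def sites_needing_retraining(performance, target):
    """Return sites whose observed metric is below the target, sorted by name."""
    return sorted(site for site, value in performance.items() if value < target)

for site in sites_needing_retraining(site_performance, TARGET):
    print(f"{site}: below target, consider refresher training")
```

The point is not the code but the discipline: the target is written down before the trial starts, the metric is computed on an ongoing basis, and a shortfall triggers a defined response such as retraining.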
Dr. Katz: All right. Well, if we're going to do training, how should we do it? There's been this explosion of information about principles of adult learning that have come out since this guy called Malcolm Knowles wrote a book called guess what, Principles of Adult Learning, in the '70s, and it's been re-edited and rehashed in all sorts of ways and there’s been a lot of research on it.
Dr. Katz: What are some principles of adult learning? A, adults would rather have general knowledge than specifics. B, adults tend to be very open to new learning experiences. C, adults are able to perform new skills well after hearing a detailed presentation. D, adults need practice and feedback to acquire a new skill. I'll give you a few seconds to pick an answer.
Dr. Katz: The answer is adults need practice and feedback to acquire a new skill. Adults are generally not interested in general knowledge. They want to know: tell me what I need to do today in order to do my job, and let me out of here. Adults often have rigid ideas about how they should learn, which differ from one adult to another. They tend not to be that open to new learning experiences; we have to overcome those barriers. And of course, as I mentioned earlier, just listening to something doesn't generally lead to any kind of performance improvement.
Dr. Katz: This just elaborates on some of those principles of adult learning and I think rather than go through these in detail in the interest of time, I will commend to you Malcolm Knowles' book or one of the other summaries of it. It's well worth reading if you're interested in upping your training game.
Dr. Katz: A comment about practice and feedback. So, if I've succeeded in convincing you that practice and feedback are important, then, unfortunately, it's not even that easy, because like everything else, not all practice and feedback are created equal. I'll leave you with the point that practice should be structured, not just random, and the feedback should be timely and constructive: not just, "You didn't do that well," but rather, "Here's a way that you could do this more effectively. Why don't we practice that?"
Dr. Katz: Actually, there's a lot of literature showing that if you want people to take their newfound performance skill back to the workplace, which in this case is the clinical trial, you want them to be able to make some mistakes in the practice session, in a safe environment. Mistakes that are easily corrected, so that they can learn how to do things more and more reliably. It's actually learning how to overcome those mistakes that's the main factor determining whether there will be what's called training transfer, which is the carryover of the new skill to the actual workplace.
Dr. Katz: So you've now developed a training program. You've set a performance target, good for you. It's measurable. You've come up with a training experience for the trainee that actually has them practicing and getting feedback. Good for you.
Dr. Katz: Now you want to know did my training work or not? So that's called training evaluation. And which of the following statements about training evaluation are true? A, the most important aspects of training are the easiest to evaluate. B, transfer refers to the improvement of real-world performance that occurs as a result of training. C, there are no accepted frameworks for the evaluation of training. And D, the most important aspect of training is the immediate reaction of the participants. I'll give you a few seconds.
Dr. Katz: So as I mentioned, this is the definition of transfer. That's what matters. If you don't achieve transfer, your organization gains no return on investment from the money spent on training. It's about transfer.
Dr. Katz: Next, there are lots of ways to determine whether your training program is good, whether it works, and from left to right they run from least useful but easiest to obtain, all the way over to the right, where it's most useful but most difficult and expensive to obtain. So it's always a balancing act when we're evaluating training: doing what's doable, but also trying to do what's useful. So you know, "Hey Joe, you just went through this training, so let me have you fill out your learner evaluation form." Right? Most investigator's meetings try to get people to fill out an evaluation form.
Dr. Katz: They're pretty informal, and they're not nearly as good as more formal methods. Then we move up to cognitive debriefing: let me ask you whether you actually understood what I just told you. What words made sense? What words did not make sense? What could have been worded more clearly? Did we cover this concept in a way that was impactful to you? So when we develop training programs at Analgesic Solutions, we try to do cognitive debriefing with patients, especially on patient-facing training programs.
Dr. Katz: You can do pre/post studies; there are a number of these in the literature. First I measure how well somebody diagnoses osteoarthritis based on ACR criteria, then I put them through the training program, and then I evaluate their performance again. Pre/post studies.
Dr. Katz: You can do cross-sectional studies: I did some trials that included training, and I'm going to compare them to trials that did not include training to see whether the training made a difference. We actually just did that with a back pain trial that we ran for a sponsor, where we included training, and we published a poster that compared the placebo response in that trial to others. We were able to show that the placebo response was lower in the trial where we did the training than in the other published ones.
Dr. Katz: Then finally, randomized controlled trials. You can actually do a randomized controlled trial of your training program, if you have a million dollars that you'd be prepared to spend and the time to do it, and sometimes that's worth it. We actually were able to get funding from Grünenthal, for example, to do that randomized controlled trial of accurate pain reporting training that I showed you on a previous slide. So some sponsors are willing to put their money where their mouth is and make that investment. As you might imagine, those are hard to get funded, but incredibly useful when you can get funding for those kinds of studies.
Dr. Katz: So those are different ways of evaluating training programs in our world of clinical trials.
Dr. Katz: In the big world out there, the most famous training evaluation rubric is this so-called Kirkpatrick Evaluation Schema that was developed by this very smart guy called Don Kirkpatrick in the 1950s. He still actually speaks at training conferences. And what he says is that he's more shocked than anybody else that this very simple rubric is actually still being used 50 years later by people. But it's endured the test of time.
Dr. Katz: It's got these four levels. Number one, what was the participants' reaction? Number two, was there a change in people's knowledge, attitudes, and beliefs during the training? Number three is where it starts to get real: was there actually transfer of behavior? And then number four, the hardest of all: what was the return on investment of the training to the organization? That's a level of training evaluation that is seldom achieved in a meaningful way, but which everybody strives for.
Dr. Katz: Guess what? I've got another question for you. You're all evidence people, right? You're in the world of science and clinical trials. If you didn't care about evidence, you wouldn't be doing what you do. Alright, well, I have just given you the old song and dance about training. What's the evidence, in my world of clinical trials, that training improves performance? I've shown you a few snippets, but I'll ask you anyway. A, there is a large evidence base for the effectiveness of training in other areas, but little evidence in clinical trials. B, there is little evidence of effectiveness of training in any discipline. C, there are multiple studies indicating effectiveness of training in clinical trials, but not as much evidence as in other areas or industries. D, improving performance in clinical trials is not even important enough to study.
Dr. Katz: There are actually multiple studies indicating the effectiveness of training in the world of clinical trials but not nearly as much evidence as in other industries and other areas. That's my correct answer.
Dr. Katz: So, I just said there are studies showing the effectiveness of training in clinical trials. Well, what are they? And, again, there are more than what's on this slide; this is just a little snippet of the literature, just to hopefully indicate to you that there are such studies. I'd be happy to send you these papers. I'll just highlight a few of them; some of the citations are at the bottom.
Dr. Katz: There are studies on rater training, which is where one person is rating the clinical state of another. This is from a very nice meta-analysis by Sadler in 2017, showing that of the 17 studies he was able to find examining rater training, 13 did show improvement in people's skills. And of the four that didn't, there are reasons why they didn't.
Dr. Katz: There are studies showing that training people to diagnose rheumatoid arthritis and osteoarthritis improved skills. There are studies showing that you can improve the delivery of standardized physical therapy by training physical therapists, actually over the Internet. I'm not going to go on and on through these other things, but suffice it to say that we can show that training measurably improves performance on skills that are necessary for successful clinical trials. So you should be heartened.
Dr. Katz: There's this concept that I call "validated training." Not all training can be considered validated training, and whenever I use this term I have to acknowledge Mike Kuss from Premier Research. He was the one who planted this phrase in my mind a few years ago, and we've been working with it ever since. These are the steps of what I call developing a validated training program. By validated, I mean a training program that is designed in such a way that it is likely to work, and that you can demonstrate works.
Dr. Katz: So I've made most of these comments already during this presentation, but I'm just going to organize them onto one slide. First, this is the most important and this is the step that's always skipped. What skill are you trying to improve performance in and what is your performance target?
Dr. Katz: If you don't know how many millimeters off the circuit board the transistor is supposed to end up after your machine has glued it on, you can't know whether your machine is doing the job within spec or not. Now, here it's squishier, because we're asking human beings to do sometimes-squishy things like report subjective endpoints. That's not an excuse; you can define performance in that realm as well. Then design your training content: what should be in my training program that's going to make people do better, and what is going to be my mode of delivery: online, in person, ebook, et cetera?
Dr. Katz: Once you've developed your training content and your mode of delivery, test it. Run it by people, preferably the people who are actually going to be taking the program in the real world: patients, for example, or study coordinators. Get feedback from them and modify accordingly.
Dr. Katz: Then number four is you implement, and number five is you evaluate the results of your implementation, preferably informed by the Kirkpatrick model that I mentioned earlier, which is on this slide. There are lots of different levels at which you can do step five, evaluation, which I already showed you on a previous slide. That's validated training.
Dr. Katz: So, in conclusion, what I hope that I have at least provisionally convinced you of or at least planted the ideas in your mind is that number one, training has a purpose. It's not just a checkbox activity. It's not just because the regulators want it. It's to optimize performance, not just generally, but specific tasks of people who are functioning within the realm of your clinical trial.
Dr. Katz: Number two, don't jump right to training. Training is only one of several methods for preventing problems and for correcting them when they happen.
Dr. Katz: Number three, the principles of what type of training improves performance are actually fairly well known, and it boils down to practice and feedback. When training follows those principles, it's been demonstrated to work; not in every area, not in every training program, but in principle, it works. And I would encourage you to think of it as more than just, "let's put together 12 PowerPoint slides because they fit in the 20-minute time slot." Let's think of that validated training approach, because if the training is not going to achieve its goal, it's a waste of everybody's time.
Dr. Katz: And I will leave you there. Lauren, let's see if anybody has any questions for us.
Lauren Ozmore: Sounds great. Thank you so much Dr. Katz. It was very informative. So I am going to run straight into questions, and it looks like we've gotten a lot of them. The first one that I want to ask you: what advice do you have for sponsors, CROs, or sites who are looking to improve their performance targets?
Dr. Katz: Yep. So, great question. I would start with a data quality risk assessment of your protocol, and preferably do it at the protocol synopsis stage, because it's often the case that you'll learn in that data quality risk assessment that there are potential performance problems baked into your protocol that can be designed out of it. And you want to do that, ideally, before your protocol is finalized. You know, I get these things when they've already been finalized, and then nobody's interested in amending their protocol unless they absolutely have to.
Dr. Katz: And then once you've done that, that suggests what the key areas of performance problems are that you can anticipate mattering for your clinical trial. And that, in turn, suggests things you can do, such as training, to prevent such performance problems. And from my perspective, very importantly, since we do a lot of central statistical monitoring of ongoing clinical trials, it also helps tell you what you should be monitoring during your clinical trial that could signal an important performance problem.
Lauren Ozmore: Terrific, thank you. Next question: would validated training reduce placebo response in pain trials only, or across indications?
Dr. Katz: Yeah, actually much of the placebo response literature or the literature on what drives the placebo response is actually not in pain at all. In fact, the best study I think on factors that drive the placebo response in drug trials is actually in asthma, of all things. And there are studies in Parkinson's Disease and depression and other areas. So no, pain is a great learning laboratory for trying to develop approaches to addressing the placebo response, but the lessons apply broadly beyond pain.
Lauren Ozmore: Thank you for that explanation. So with us coming close to the hour, I want to ask you one more question. What training should I have in my clinical trial and how do I figure out what type of training I need? Who should be trained, what type of trainings should I be thinking about? Can you give us some insight there?
Dr. Katz: Again, I think that begins with your data quality risk assessment, where you identify the critical areas of performance, not only for study staff but also for patients, study coordinators, et cetera. And there are off-the-shelf training programs available these days that a number of people offer, in things like placebo response reduction, for example, or helping researchers diagnose patients effectively. But then there are things that need to be created specifically for your particular trial that are not available yet. I would encourage people to create those new training programs in a rigorous way, and not the day before your first patient is due to enter your trial.
Lauren Ozmore: Terrific. Well, thank you so much for answering those couple of questions. It does look like our time is coming to a close, so I'm going to go ahead and wrap us up. If we didn't have time to answer your question during the webinar, we will reply to you via email very soon.
Lauren Ozmore: To all of our attendees, we hope that you enjoyed this webinar and that you found it relevant to your clinical research practice. We're delighted to make a recording of the event available to you that you can watch on-demand or share with your colleagues. Please keep an eye out for that recording within the next 24 hours.
Lauren Ozmore: As always, we're eager to continue the conversation with you. If you have any questions for Dr. Katz following the broadcast, please feel free to reach out at firstname.lastname@example.org.