“How do I find primary care physicians and endocrinologists who are willing to enroll a total of 1,000 patients with type 2 diabetes, who are not well-controlled on metformin, in a pragmatic study of a recently FDA-approved extended-release metformin patch, and to report their data so that we can find out how the patch is used, and with what outcomes, in clinical practice?” (See important note 2)
Welcome back! Our opening question illustrates recruiting for a particular type of widely-distributed pragmatic study (WDPS). We introduced pragmatic studies, including the WDPS, in Posts 2 and 3 of this series. We know how the product performed in the tightly-monitored environment of its supporting clinical trials. Now we want to know how it is used, and how well it performs, across the wide variety of practice environments and patients in the real world - that’s where the pragmatic study comes in. But as with prospective randomized clinical trials, we must recruit and retain enough physicians and patients.
We’ll take on recruiting in two posts: Part 1 looks at barriers to recruiting and the special considerations for pragmatic studies; Part 2 takes a deeper dive into recruiting for a widely-distributed pragmatic study - the main focus of this series because it most faithfully represents the wide range of healthcare delivery.
Cut to the punchline: Pragmatic clinical studies can provide actionable insights about how well treatments work in the real world. This may be particularly true in a well-designed ‘widely-distributed pragmatic study (WDPS),’ which looks at treatment use and effectiveness across a wide variety of physicians, care delivery settings, and patients. But there are special considerations for recruiting for a WDPS (which may be randomized or allocated based on the doctor’s treatment decision).
Recall that in a clinical trial, patients in both the study and control groups receive the same care except for the specific intervention. The quality of that care is uniformly high, protocol-driven, and adherence to both the intervention and the comparison treatment (which may be a placebo) is often much higher than in the average clinical setting. The between-groups difference in outcomes therefore doesn’t include care quality. But in the real world, it might. For example, some of the poor control (as measured by A1C) of our oral metformin patients might be due to poor adherence to metformin or other treatment factors; simply having optimal care with oral metformin might improve their control enough so that they wouldn’t need to be in the study. (3)
To gain actionable insights into the impact of the metformin patch under real-world conditions, we’d want to compare treatment patterns, adherence, and outcomes for a broad spectrum of physicians and their patients who meet the patch’s approved criteria. One way to do this would be to randomize doctors or patients to continued usual care (with oral metformin), optimized care with oral metformin, placebo patch, or metformin patch. There are some important caveats and nuances here: Only the two patch groups could be blinded; patients who consent to ‘usual care’ might improve their care; and the treatment trajectories of all patients could evolve depending on glycemic control and other factors.
But, you say, aren’t the ‘caveats and nuances’ why we want to do pragmatic studies? Absolutely! However, consent for randomization is itself an intervention with clinical consequences. In a retrospective study, we don’t have to take into account the effect of patients’ being willing to be randomized and closely observed. In pragmatic studies, we might do:
A structured observational pragmatic study that takes advantage of a natural experiment in which, for appropriate patients, some doctors deliver the treatment and some don’t. What makes this type of study pragmatic is that some physicians are asked (or reminded that the treatment option is available) to offer the treatment (or test) to clinically-appropriate patients, then to track specified results over time. Once treatment starts, the patients and their doctors deliver their ‘usual care’ (which may or may not include the study’s treatment), during which the patient may show the usual range of adherence, and doctors respond according to what they believe is right. Several methodological issues must be attended to in designing, delivering and analyzing this type of study; some are noted in (4). It’s best to engage experts in these activities.
We’ve been discussing the widely-distributed pragmatic study (WDPS) as a best-practical-match to the real world. While a WDPS can assume any of the pragmatic study forms, we think it could find a home with the structured observational design. Like all prospective studies, recruitment can be a stumbling block - too slow or not enough “N” to be confident about your findings.
Why do clinical trials fail? According to an analysis in Applied Clinical Trials (5), the most common reasons are failure to meet the primary efficacy endpoint, safety problems, and failure to demonstrate value compared with existing treatments. But recruitment and retention are also big: a study by the Tufts Center for the Study of Drug Development (6) found that two-thirds of study sites didn’t meet their subject-accrual goals for phase III clinical trials (though all sites combined may have). Even in cancer trials - arguably the poster child for clinical research - nearly one in five publicly-funded trials failed to enroll enough patients. (7)
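To make “enough N” concrete, here is a minimal sketch of the standard two-arm sample-size calculation that accrual goals like those above are built on. All the specific numbers (a 0.5-point A1C difference worth detecting, a standard deviation of 1.0, 20% expected dropout) are illustrative assumptions for this example, not figures from this post or its cited studies.

```python
# Minimal two-arm sample-size sketch (two-sided z-test on means).
# Assumed illustrative inputs: detect a 0.5 A1C-point difference,
# SD = 1.0, alpha = 0.05, power = 0.80, 20% dropout.
import math
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Patients per arm needed to detect a mean difference `delta`
    given a common standard deviation `sigma`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for power=0.80
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

def n_to_enroll(n, dropout=0.20):
    """Inflate enrollment so the target N survives expected dropout."""
    return math.ceil(n / (1 - dropout))

base = n_per_arm(delta=0.5, sigma=1.0)  # patients per arm to analyze
enroll = n_to_enroll(base)              # patients per arm to recruit
print(base, enroll)                     # prints: 63 79
```

The point of the sketch is the gap between the two numbers: even a modest analysis target grows once realistic dropout is factored in, which is why under-accrual at individual sites matters so much.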
Lopienski’s article (8) offers an evidence-based, actionable framework for addressing the barriers to recruitment and retention:
While there are areas of overlap between prospective randomized clinical trials (PRCTs) and pragmatic studies regarding efficient and effective recruiting and retention, it’s the differences that are critical to understand:
OK, I understand how recruiting works for a PRCT, and I understand that there are some important differences for a WDPS, but how exactly does recruiting work for the WDPS? That’s what our next post will be all about.
Summary
Do you want to know more? Find us HERE!
NOTES