Recruiting for Your Pragmatic Clinical Study
5. Maximize Your Return on Recruiting for Your Pragmatic Study, Part 1
In a randomized controlled trial, the control group receives near-idealized care, which is critical to establishing whether the study’s treatment caused the result. Real-world care is rarely idealized, which is a powerful reason to explore how the treatment fares under a wide range of care “quality.” How can you recruit with that in mind, so you gain actionable insight into the impact of your therapy—as well as other aspects of care—on outcomes?
Series 1, Episode 5. Published on January 11, 2019.
There are several factors that need to be considered in designing a recruitment plan for your pragmatic study.
“If people don’t want to come out to the ballpark, nobody’s gonna stop ‘em.” - Yogi Berra (1)

“How do I find primary care physicians and endocrinologists who are willing to enroll a total of 1,000 patients with type 2 diabetes, not well-controlled on metformin, in a pragmatic study of a recently FDA-approved extended-release metformin patch, and to report their data so that we can find out how the patch is used, and with what outcomes, in clinical practice?” (See important note 2)

Welcome back! Our opening question illustrates recruiting for a particular type of widely-distributed pragmatic study (WDPS). We introduced pragmatic studies, including the WDPS, in Posts 2 and 3 of this series. We know how the product performed in the tightly-monitored environment of its supporting clinical trials. Now we want to know how it is used, and how well it performs, across the wide variety of practice environments and patients in the real world - that’s where the pragmatic study comes in. But as with prospective randomized clinical trials, we must recruit and retain enough physicians and patients.

We’ll take on recruiting in two posts: Part 1 looks at barriers to recruiting and the special considerations for pragmatic studies; Part 2 takes a deeper dive into recruiting for a widely-distributed pragmatic study - the main focus of this series because it most faithfully represents the wide range of healthcare delivery.

Cut to the punchline: Pragmatic clinical studies can provide actionable insights about how well treatments work in the real world. This may be particularly true of a well-designed widely-distributed pragmatic study (WDPS), which looks at treatment use and effectiveness across a wide variety of physicians, care-delivery settings, and patients. But there are special considerations for recruiting for a WDPS (which may be randomized, or allocated based on the doctor’s treatment decision).

Recall that in a clinical trial, patients in both the study and control groups receive the same care except for the specific intervention. The quality of that care is uniformly high and protocol-driven, and adherence to both the intervention and the comparison treatment (which may be a placebo) is often much higher than in the average clinical setting. The between-groups difference in outcomes therefore doesn’t reflect differences in care quality. But in the real world, it might. For example, some of the poor control (as measured by A1C) among our oral-metformin patients might be due to poor adherence to metformin or to other treatment factors; simply receiving optimal care with oral metformin might improve their control enough that they wouldn’t need to be in the study. (3)

[Chart: signal and noise with variations in care versus no variation in care]
A study compares a new therapy to usual care. On the right, patients are randomized or otherwise experience the same care, except that some receive the new therapy and others a comparison therapy or usual care. Since only that one element--the new therapy--differs, we can make valid statistical inferences about whether outcomes differ between the two groups. The signal is distinguishable from noise (subject to the constraints of statistical conclusions). On the left, patients are not randomized. Differences in outcomes (the signal) are difficult to distinguish from noise (all the other factors that could influence outcomes). The situation is further muddied if patients receiving the new therapy also enjoyed improvements in care, access to care, or support in being more adherent to care.

[Chart: impact on outcomes of optimized care alone versus optimized care plus the new therapy]
Could it be that the control group in a PRCT has better outcomes than it would have had in the real world, because overall care and adherence improve? Above: an outcome was 57% (20 points) better in an optimized-care group that did not use the new therapy. Adding the new therapy to optimized care further improved the outcome by 10 points, or 18%. This may help explain why randomized studies often show less impact on outcomes than observational studies of the same therapy.
The WDPS may give us insights about the effect of improving overall care by showing us variations in outcomes in both the treatment and the comparison groups.
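To make the caption’s arithmetic concrete, here is a minimal sketch in Python. The baseline score of 35 is our assumption (the chart above doesn’t state it), chosen because it reproduces the quoted percentages.

```python
# Hypothetical reconstruction of the caption's numbers; the baseline
# value of 35 is assumed, not taken from the post.
baseline = 35          # outcome score under usual care
optimized = 55         # optimized care, without the new therapy
optimized_plus = 65    # optimized care plus the new therapy

care_gain = optimized - baseline              # 20 points
care_gain_pct = care_gain / baseline          # 20/35, about 57%

therapy_gain = optimized_plus - optimized     # 10 points
therapy_gain_pct = therapy_gain / optimized   # 10/55, about 18%

print(f"Optimized care alone: +{care_gain} points ({care_gain_pct:.0%})")
print(f"Adding the new therapy: +{therapy_gain} points ({therapy_gain_pct:.0%})")
```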

To gain actionable insights into the impact of the metformin patch under real-world conditions, we’d want to compare treatment patterns, adherence, and outcomes for a broad spectrum of physicians and their patients who meet the patch’s approved criteria. One way to do this would be to randomize doctors or patients to continued usual care (with oral metformin), optimized care with oral metformin, a placebo patch, or the metformin patch (a minimal randomization sketch follows). There are some important caveats and nuances here: only the two patch groups could be blinded; patients who consent to ‘usual care’ might improve their care; and the treatment trajectories of all patients could evolve depending on glycemic control and other factors.
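For flavor, here is a minimal, hypothetical Python sketch of permuted-block randomization across those four arms. The arm labels, patient IDs, and seed are illustrative only, not a study protocol; and as noted above, only the two patch arms could actually be blinded.

```python
import random

# The four arms described above; labels are illustrative.
ARMS = [
    "continued usual care (oral metformin)",
    "optimized care (oral metformin)",
    "placebo patch",
    "metformin patch",
]

def randomize(patient_ids, seed=2019):
    """Assign each patient to an arm using permuted blocks of four,
    which keeps arm sizes balanced as enrollment proceeds."""
    rng = random.Random(seed)
    assignments = {}
    block = []
    for pid in patient_ids:
        if not block:          # start a fresh, shuffled block of all arms
            block = ARMS[:]
            rng.shuffle(block)
        assignments[pid] = block.pop()
    return assignments

# Hypothetical patient IDs, for demonstration only.
for pid, arm in randomize([f"PT-{i:04d}" for i in range(1, 9)]).items():
    print(pid, "->", arm)
```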

But, you say, aren’t those ‘caveats and nuances’ exactly why we want to do pragmatic studies? Absolutely! However, consent for randomization is itself an intervention with clinical consequences. In a retrospective study, we don’t have to account for the effect of patients being willing to be randomized and watched over. In prospective pragmatic studies, we might choose among:

  • A properly-randomized pragmatic study that randomizes patients or practice sites
  • An allocation pragmatic study that assigns physicians or practice sites to deliver the intervention or the comparison (or usual) care based on factors such as convenience or agreement to deliver the intervention and monitor the results

  • A structured observational pragmatic study that takes advantage of a natural experiment in which, for appropriate patients, some doctors deliver the treatment and some don’t. What makes this type of study pragmatic is that some physicians are asked (or reminded that the treatment option is available) to offer the treatment (or test) to clinically-appropriate patients, then to track specified results over time. Once treatment starts, the patients and their doctors deliver their ‘usual care’ (which may or may not include the study’s treatment), during which the patient may show the usual range of adherence, and doctors respond according to what they believe is right. Several methodological issues must be attended to in designing, delivering and analyzing this type of study; some are noted in (4). It’s best to engage experts in these activities (a simplified matching sketch follows this list).
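Note 4 mentions rendering comparison groups equal through ‘various kinds of matching.’ As one hypothetical flavor, here is a simplified coarsened-exact-matching sketch in Python; the covariates, cut-points, and field names are our illustrative assumptions, not a recommendation, and a methodologist should choose the actual scheme.

```python
from collections import defaultdict

def stratum(patient):
    # Coarsen covariates into a matching key (illustrative choices:
    # decade of age, poorly vs. moderately controlled A1C, and sex).
    return (
        patient["age"] // 10,
        patient["baseline_a1c"] >= 8.0,
        patient["sex"],
    )

def match(treated, untreated):
    """Pair each treated patient with an untreated patient from the
    same stratum; treated patients with no counterpart go unmatched."""
    pool = defaultdict(list)
    for p in untreated:
        pool[stratum(p)].append(p)
    pairs = []
    for t in treated:
        candidates = pool[stratum(t)]
        if candidates:
            pairs.append((t, candidates.pop()))
    return pairs

# Tiny demonstration: one matchable pair, one treated patient unmatched.
treated = [
    {"age": 61, "baseline_a1c": 8.4, "sex": "F"},
    {"age": 47, "baseline_a1c": 7.2, "sex": "M"},
]
untreated = [{"age": 64, "baseline_a1c": 9.1, "sex": "F"}]
print(len(match(treated, untreated)), "matched pair(s)")
```

Propensity-score and other matching approaches trade off bias and sample retention differently; the point is only that comparability has to be engineered, not assumed.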

We’ve been discussing the widely-distributed pragmatic study (WDPS) as the best practical match to the real world. While a WDPS can assume any of the pragmatic study forms, we think it could find a home with the structured observational design. As in all prospective studies, recruitment can be a stumbling block - too slow, or not enough “N” to be confident about your findings.

Why do clinical trials fail? According to an analysis in Applied Clinical Trials (5), the most common reasons are failure to meet the primary efficacy endpoint, safety problems, and failure to demonstrate value compared with existing treatments. But recruitment and retention are also big: a study by the Tufts Center for the Study of Drug Development (6) found that two-thirds of study sites didn’t meet their subject-accrual goals for phase III clinical trials (though all sites combined may have). Even in cancer trials--arguably the poster child--nearly one in five publicly-funded trials failed to enroll enough N. (9)

Lopienski’s article (8) offers an evidence-based, actionable framework for the barriers to recruitment and retention:

  • An overly-complex protocol, including a multiplicity of inclusion/exclusion criteria
  • Not having a well-prepared recruitment plan that’s reviewed and approved in advance
  • Using recruitment materials that don’t reflect participants’ motivations. Commonly, these motivations include advancing medicine, helping to save or improve lives, and improving one’s own condition. (8)
[Chart: top reasons for participating in clinical studies; from Reference (8)]

While there are areas of overlap between PRCTs and pragmatic studies regarding efficient and effective recruiting and retention, it’s the differences that are critical to understand:

  • In non-randomized pragmatic studies, recruiting may focus on physicians (who decide which patients are appropriate for treatment, just as in real life). Because the pragmatic study format (especially the WDPS) is new, there’s little published research; we encourage you to survey both physicians who agreed to participate and those who declined, to understand their reasons
  • Retention is a somewhat different animal in pragmatic studies. With less day-to-day logistical support, clinicians may become distracted by the competing demands of practice, and patients may become non-adherent to treatment and monitoring (in fact, gaining insight into dropout drivers is a big reason for doing pragmatic studies)
  • Physicians may need to take a larger role in using the study information platform than with PRCTs, where much of the work is performed by research assistants
  • Inherently, patients are much likelier to be motivated by the prospect of improving their condition (and possibly cost-share relief, if applicable) than by contributing to science

OK, I understand how recruiting works for a PRCT, and I understand that there are some important differences for a WDPS, but how exactly does recruiting work for the WDPS? That’s what our next post will be all about.

Summary

  • Carefully-designed pragmatic clinical studies can answer questions that PRCTs are not designed to get at, such as the effectiveness of a treatment in the wild – the real world of clinical situations, patients, physicians and healthcare settings
  • An important way in which the real world differs from PRCTs is that in the latter, patients in both intervention and control groups receive very similar, standardized, high-quality care. In the real world, patients experience wide variation in care quality and treatment adherence. Ideally, real world studies can distinguish the effects of variations in care from those of the treatment under study.
  • Pragmatic studies can be organized in a variety of ways ranging from structured observational with data collection through patient, physician or healthcare site allocation, all the way up to randomization.
  • In all of these scenarios, efficient recruitment and retention are key to achieving the study’s goals in a satisfying timeframe. In the ‘widely-distributed pragmatic study,’ physicians are less likely to be trained or experienced in recruiting patients and conducting studies, as compared with formal PRCTs.
  • Assistance from experts and a facilitating technology platform are essential to making your pragmatic study succeed: topics for our coming episodes.

Do you want to know more? Find us HERE!

NOTES

  1. I wonder why I love this quote. Is it because it almost, but not quite, makes sense?
  2. Though this treatment is not on the market, some data on it have been published. It is important to note that a pragmatic study in which physicians are asked to put clinically appropriate patients on a specific therapy (one option among approved, clinically-justifiable alternatives) is only one possible design, and may not be appropriate in all cases. Another possibility is to frame the study as looking at the consequences of being treated with any of the clinically-justifiable approved alternatives, of which one is, in this case, a metformin patch. In this approach, patients may be randomized to the alternatives, or the physician and patient may together choose one of them.
  3. To see this, imagine a PRCT that randomized patients with poor control on oral metformin to three groups: continued usual care; placebo metformin patch; and actual metformin patch, and suppose the second and third groups received a more meticulous, monitored, and supported level of background care. If the patch is efficacious, we wouldn’t be surprised to see A1C (lower is better) ordered actual patch < placebo patch < continued usual care.
  4. Though we’re tempted to dive into this particular rabbit-hole, the main points are: (a) patients of physicians you don’t reach out to may have different tendencies toward agreeing to the test or treatment, adherence to treatments, or medical or socioeconomic factors that influence the treatment/outcome relationship; (b) physicians you reach out to who agree to participate may differ in practice styles or in the patient tendencies mentioned in (a); (c) patients of participating physicians who agree to the treatment may likewise differ from those who don’t; (d) it may be impossible to adjust for these differing factors in your analysis; the best you can do is to ensure that the factors you believe likely to drive the above-mentioned differences are equal (or can be rendered equal through various kinds of matching).
  5. www.appliedclinicaltrialsonline.com/phase-iii-trial-failures-costly-preventable?pageID=1
  6. Stopke E, Burns J. New drug and biologic R&D success rates, 2004-2014. PAREXEL’s Bio/Pharmaceutical R&D Statistical Sourcebook 2015/2016.
  7. Sacks LV, Shamsuddin HH, Yasinskaya YI, et al. Scientific and regulatory reasons for delay and denial of FDA approval of initial applications for new drugs, 2000-2012. JAMA. 2014;311(4):378-84.
  8. Cited in Lopienski K. Why do recruitment efforts fail to enroll enough participants? https://forteresearch.com/news/recruitment-efforts-fail-enroll-enough-patients/, accessed April 19, 2018.
  9. Bennette CS, et al. Predicting low accrual in the National Cancer Institute’s Cooperative Group clinical trials. J Natl Cancer Inst 2016;108(2)
  10. 2017 Perceptions & Insights Study. The Center for Information & Study on Clinical Research Participation.