
SCOPE 2022: Sharing Actual Study Data to Bust Clinical Trial Myths around DCT
Last week, I had the opportunity to attend the SCOPE 2022 conference in Orlando for the first time in two years. Some of the speakers were in person, some were virtual, and some had pre-recorded sessions. I got to reconnect with my mentors and meet new people who inspired me. SCOPE 2022 was rich with learning opportunities from experts, innovators, and solutionists. And, as a bonus, all the live and virtual content is available on-demand for anyone who registered. I plan to attend some of those sessions from my desk back here in Northern California.
I also got to present data from a trial that Curebase ran before I joined, and it’s some of the first end-to-end data I’ve seen comparing operational differences between a fully virtual, a hybrid, and a traditional trial experience. This is the “Evidence of Impact” that sponsors, sites, and providers have been waiting for, and it will help drive the adoption of decentralized clinical trial (DCT) solutions as a best practice. Other companies are starting to recognize the importance of Evidence of Impact as well - Sanofi, for example, will shortly be releasing data comparing the three models in one of their studies.
We analyzed the operational metrics around some key performance indicators (KPIs) that are often used by clinical study teams. The study was a pivotal trial for a digital intervention to manage Irritable Bowel Syndrome (IBS), which has now achieved FDA authorization as a digital therapeutic. The study required participants to use the digital intervention over a course of several weeks, and to record their symptoms and answer questionnaires daily using the Curebase DCT platform.
Curebase supported this study with fit-for-purpose technology and virtual site services, managing the entire patient journey from trial recruitment, through pre-screening, consent, enrollment, study conduct, and post-treatment follow-up for nearly 2 years. Participants used several digital solutions, from completing the pre-screener, to telemedicine visits, remote and in-person consent, engagement reminders and visit scheduling, and ePRO data collection. They were supported by our virtual site team, including a remote PI and virtual research coordinators. The study was conducted in both traditional sites and virtual-supported sites across 13 states in the US.

The participant experience looked like this:

We used a mythbusting framework to present the data, and I think it helps demonstrate the evidence of impact we’ve all been waiting for.
Myth 1: Older patients are not willing to use technology.
I hear this a lot - and there are definite differences between groups of patients, depending on what’s being asked of them. But it’s important not to assume that age is the key barrier to technology adoption, and the data busts the myth. My hypothesis is that the combination of high tech solutions with high touch support (the virtual coordinators) was the key to enrolling patients across the different age groups. Across the whole population, the mixed or hybrid model had the highest enrollment. And the highest-enrolling age group in the study was over 50 years old.

Myth 2: Different racial/ethnic groups won't participate in clinical trials.
Next we looked at the racial demographics of the participants. Enrolling clinical trial participants who match the epidemiology and incidence of disease has been a big problem for a long time. Many of us are hoping that using DCT technology will create access for participants who are typically underrepresented so that the trial data better reflects the disease population.
The demographic data for this trial looks very different from most traditional trials, and the fully virtual model enrolled the most diverse and inclusive group of participants.
This is a single trial, and there are definitely real barriers to participation for patients in terms of access and trust. This data shows that it is possible to enroll diverse patients with the right study design and the right combination of technical and virtual solutions.

Myths 3 and 4 are about recruitment.
Online recruitment has become a standard tactic in trials, and it works - for a proportion of patients. There’s also been a long-standing belief that patients are less likely to qualify for a trial if they are not known by research sites (i.e., already in their databases). And that has happened in some trials; sites have reported that a high proportion of pre-qualified patients don’t actually meet entry criteria or will not proceed to consent. That didn’t happen in this trial, and it’s really encouraging!
The virtual site recruited 132 patients, the brick and mortar site recruited 18, and the mixed/hybrid sites recruited 670. Impressive! So are the metrics on the patients who progressed from screening to enrollment. The brick and mortar and hybrid/mixed sites had similar enrollment efficiency of almost 80%, while the virtual model enrolled 90% of the patients who screened. That’s really different from any trial I’ve ever been part of - and it’s a very positive data signal that using the high tech/high touch approach for clinical trial conduct can drive more efficient enrollment.

The next metrics are all about protocol adherence and patient retention.
Once these patients enroll, do they actually complete the trial assessments so you get complete data? And do they stay in the entire study, or do they drop out, resulting in lost data? There have been lots of concerns raised across the clinical trial ecosystem that the quality and completeness of data could be compromised by using alternative site and data collection models. The data from this trial counters those concerns.
The adherence rate for study assessments was very comparable across the three site models: approximately 80% across the study for patients enrolled at the virtual and mixed/hybrid sites, and about 71% at the brick and mortar site. Pretty remarkable - and even more so when we recall the volume of patients enrolled (132 virtual, 760 mixed/hybrid, and 18 brick and mortar). So, not only did the high tech/high touch patients comply better with protocol requirements, there were also a lot more of them.
Retention is generally a big problem across studies, and lots of tactics and support models have been developed to help address it. Patient drop-out can be the reason studies fail (through the loss of the data needed to answer the scientific question being studied). The data for this study is interesting - and directionally positive. The virtual model had the highest patient retention (~73%), followed by the brick and mortar (~70%) and the mixed/hybrid model (~57%). But remember the volume of patients in those three models again, and the signal gets stronger. The virtual and hybrid models do work to retain patients, with, of course, the understanding that every model has its limitations.

The Patient Experience
Ultimately, the operational data strongly suggests that the virtual and hybrid/mixed models can work efficiently to find, enroll, and keep patients in trials with a high degree of protocol compliance. But what about the patient experience? How easy was it for patients to use the technology? Did the technology solutions make a difference in terms of patient access?
We asked the participants directly, using some simple questions.

The participants who answered (23% - a higher proportion than I expected!) provided very positive feedback. These are straightforward questions - not a deep dive - on user experience, and their responses are super encouraging. Not only did they find it easy to use the technology solutions throughout the trial, but the technology solutions also made it easy to participate in the first place. Essentially, participants were able to access and remain in the trial because of Curebase’s digital tools and the support of the virtual coordinators. This is exactly what we’re trying to solve with DCT solutions.
We also asked participants to describe their experience using the Curebase tools in one word.
Simple, convenient, easy, and user-friendly were the top words used.
This data offers the kind of never-before-seen evidence we have been waiting for: patients can use these technology tools, they find them easy to use, and having the tools makes it possible for them to be in trials.
Trial participation and recruitment have been a challenge for decades. We wondered whether our trial participants would be willing to join another trial if similar Curebase technology tools were part of it. So we asked. And we are thrilled to share the results!

In summary, the data is really positive across many of the typical KPIs used by project teams, and the participant experience data is very encouraging. I am very aware that this is a single trial. There are differences from many of the trials you and your teams are working on every day in terms of the level of intervention, the nature of the patients' disease, etc. I can hear all your questions and concerns about applicability - I have had them for years too! But this data does provide directional evidence of impact for decentralized clinical trial solutions and services. If we want to make trials more accessible, retain more patients, and ensure that data is complete, it’s time to focus on designing and conducting trials based on what the data shows is working. The combination of high tech and high touch solutions that are easy to use and that simplify participation is one way Curebase is reinventing the patient journey. Let’s keep modernizing clinical trials so we can get new medicines to patients faster.
Learn more with our blog, “How sponsors should be rethinking quality and oversight for decentralized clinical trials.”