New technology, ways of working and regulatory policy are creating opportunities to mitigate the risks and complexity of clinical research. The clinical trial optimization space is complex and evolving rapidly. To help clinical trial sponsors understand the challenges and opportunities in this space, ZS’s Siddharth Shah and Sourav Das, associate principals in ZS’s clinical excellence practice, sat down with Rob Scott, MD, retired head of development and chief medical officer of AbbVie. Scott has extensive experience in clinical trial optimization and the development of design centers to address systemic issues in clinical development and beyond. Their discussion focused on the broad challenges in the trial design space, including how to identify a suboptimal design function and potential solutions to address it, such as better utilization of real-world data (RWD), embedding artificial intelligence (AI) and machine learning (ML), human-centric design and bringing multidisciplinary teams together.
Siddharth Shah: Why is the trial design optimization space so complex and how is the industry evolving in this space?
Rob Scott: I think that one of the reasons it's so complex is that we are trying to leapfrog from a 1990s way of thinking about designing clinical trials into a 21st-century way. The tools are different now, but teams are still using past experience to design clinical trials. The leap into RWD to optimize protocol design and execution is a big one. There are a lot of tools that you could use with RWD, and people just don't have experience with them.
A big challenge is the routine use of AI to drive decision making during design. This is new for most people and very new for the industry. There's still a big credibility gap for AI amongst potential practitioners. In clinical research, people are really focused on ‘If it ain't broke, don't fix it.’ The fact is, it is broken and we do need to fix it.
Finally, there is a need to bring more innovative thinking into trial design teams to help challenge the status quo.
Sourav Das: In addition to an already complex trial design function, we now have platform trials, basket trials and other innovations coming into play. Are sponsors ready for this and where do you see things heading in the future?
RS: While I think that platform trials and basket trials were an innovation and answered some structural problems, I think they can hold back other forms of innovation.
Platform and basket trials end up with several sponsors that all have different appetites for change and the level of innovation devolves to the lowest common denominator. To sell the whole concept you've got to sell something everyone can believe in. I always say the IQ of a room is inversely proportional to the number of people in the room. The more people you have involved with a decision, the more that decision tends to go down the middle of the road.
SS: What are some symptoms of a nonoptimized trial design function within a sponsor organization?
RS: I think the key symptom is that studies take longer to execute than you think they will. A few other symptoms include:
- Clinical trials don’t meet projected enrollment timelines.
- A large number of investigational sites are required to meet timeline goals, leading to excessive use of contracting and monitoring resources, inefficient investigational product planning and increased possibility of poor quality.
- Unforced protocol amendments that are not due to new regulatory agency requirements or new safety data.
- The protocol population does not match the expected commercial population.
- While under-enrollment is always bad, significant over-enrollment suggests the design team didn’t understand the likely protocol performance during design.
SS: Why is it practically impossible to consistently—and the key word is “consistently”—deliver a best-in-class protocol in a typical trial design function setup?
RS: The issue is that teams range from excellent to very poor in their understanding of the skills required to design an optimal protocol, such as incorporating RWD in protocol design and optimization. A big part of the problem is that teams don't design protocols very frequently, so they lack familiarity with the tools that can be used. That's why I feel that you need people who are involved with trial design and optimization daily to guide them through that process.
SS: Given that the trial design function is complex and needs multiple teams, what are the key challenges of bringing these diverse capabilities together in a centralized capacity?
RS: What I see is that organizations are approaching this in three ways. One is to ignore this and say, ‘I don't have a problem. I'm just going to carry on doing things the way that I've always done them.’ That is probably the most common attitude in the industry. The second is to say, ‘This stuff looks really interesting and helpful. Let me set up some capability in the organization to do this and I'll offer it as a consultative service to teams.’ The third way is to create a central organization with these capabilities and build processes into the governance of clinical research such that teams are obliged to use it.
I think the third way is the only way to be successful. Part of the reason teams don't use these services is because of the way we measure performance and set goals. A very common but dysfunctional milestone is first subject screened or first subject randomized. This goal discourages teams from spending more time designing a fully optimized protocol that will really fly because they are too focused on starting quickly.
I always say, I don't care when you screen the first patient. What I really care about is when you screen the last patient or when you randomize the last patient.
SS: Now that we have spoken about the need for a central function or capability, what are your thoughts on whether the design center concept is still a viable option?
RS: Rather than having your therapeutic area teams spend time becoming experts on processes they don’t perform frequently, it might make more sense to create a function that can lead them through the process. We’ve talked about how a design center or a design capability is the combination of three things: the people, the technology and the processes. All three are equally critical to a successful implementation. Unfortunately, organizations that are trying to create a design function tend to focus on the technology without thinking about the other two.
SS: In your experience with design centers, what have been some symptoms that aspects of it are not working?
RS: To be successful, there has to be a commitment to do this from the top of development. I think an important concept here is the “no pilot” concept. If your idea is to pilot this process and see if it's successful, you are going to fail because the level of commitment required to do this well is very high. But it’s very worthwhile. In addition to commitment to the concept, management has to be very committed to change management and culture. If you don’t successfully persuade your organization that this is critical and that it's not just another useless process being forced on it, the effort will fail. You have to create the burning need to change.
SS: What is your advice for companies that are trying to build the centralized capability (design center) completely in-house? This kind of development is the equivalent of a biopharma getting into the business of building the next tech company or the next biostats company from scratch. What are your observations about balancing these capabilities within the sponsor company versus relying on external expertise?
RS: If you are a rare company that has all the tools and skills you need, go for it on your own. I don't think there is such an animal. You're always short on tools or skills. One of the big issues with doing it internally is the question of throughput. While you're busy creating this organization, you won't have the throughput to deal with all your therapeutic areas. So, if you're not going to do this in a pilot role, how do you instantly create that throughput while you're trying to recruit people and put elements in place? One way is to work with a partner. There isn’t a pool of people out there that you can hire to create this capability; most organizations are not doing this right now. I have set up two centers of excellence for clinical trial design and each time I have done it on my own. I can’t help thinking that if I’d had access at the time to a partner like ZS it would have gone so much faster and been much easier. I could have learned from other people’s mistakes instead of having to make them all myself.
SD: What's the best way to construct a financial justification to set up this advanced centralized design capability?
RS: Take three or four big studies, look at the design and show the impact of reducing the number of visits, reducing the number of outcome measures and reducing the number of labs. You will be amazed at what removing one visit and one lab draw will save you. Another option is to assume that enrollment is 10% quicker, that the study runs for a shorter period or that 10% fewer sites are needed to meet the same time frame. This will reduce monitoring budgets and the costs of data analysis. A 10% savings on the execution phase will more than pay for the cost of an advanced design capability.
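To make the arithmetic concrete, here is a minimal back-of-envelope sketch of the kind of calculation Scott describes. Every figure in it (per-visit cost, per-lab cost, enrollment size and execution budget) is a hypothetical placeholder, not data from any actual study:

```python
# Hypothetical back-of-envelope model of design-optimization savings.
# All figures below are illustrative assumptions, not real study data.

visits_removed = 1        # visits cut from the schedule of assessments
labs_removed = 1          # lab draws cut per patient
patients = 500            # assumed planned enrollment
cost_per_visit = 1_500    # assumed fully loaded cost per patient visit (USD)
cost_per_lab = 200        # assumed cost per lab draw (USD)

# Savings from a leaner schedule of assessments
per_patient_savings = (visits_removed * cost_per_visit
                       + labs_removed * cost_per_lab)
schedule_savings = per_patient_savings * patients

# Savings from faster execution, using the 10% figure Scott cites
execution_budget = 30_000_000          # assumed execution-phase budget (USD)
execution_savings = 0.10 * execution_budget

total_savings = schedule_savings + execution_savings
print(f"Schedule savings:  ${schedule_savings:,.0f}")
print(f"Execution savings: ${execution_savings:,.0f}")
print(f"Total savings:     ${total_savings:,.0f}")
```

Even with deliberately modest placeholder numbers, the schedule and execution savings compound quickly, which is the core of the financial justification.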
SS: Now that we have discussed the symptoms of a suboptimal design function and what it takes to build a centralized capability, what is the ideal future state of trial design?
RS: For me, it is that every study is conducted only after a robust design process. Every study's execution—from the first idea to data available at the end—will have been modeled in a virtual environment first. In an ideal future state:
- Enrollment proceeds very close to original projections.
- No amendments are needed and no additional enrollment resources, such as advertising, extra sites or extra site visits, are added during the study.
- Sites and patients are enthusiastic about participation, and the protocol garners more than its fair share of available patients at each site.
- Patient dropout rates are low.
- The proportion of nonperforming or poorly performing sites is low.
- Trial results closely match original assumptions in terms of efficacy, background medications, background safety event rates and patient population characteristics.
It is mostly about never having unpleasant surprises along the way, either in study execution or in the final results, and about having a reputation for consistent delivery.
SS: This has been great. Thank you, Rob.