
Publications


ISCTM: Implementing Phase 2 Dose Finding Adaptive Clinical Trials

T Parke
European Neuropsychopharmacology, Volume 21, Issue 2, February 2011
Adaptive clinical trial designs offer significant opportunities to optimize the conduct of clinical trials for the benefit of the subjects in the trial, the subjects that may be treated after the trial, and the trial sponsor. However, the use of adaptive designs is currently limited by statistical, regulatory, and logistical concerns. In this article we share our experience of overcoming the last of these across a range of Phase 2, response-adaptive, dose-finding studies. Based on our experience, we feel quite strongly that the logistics of executing adaptive trials should not be a barrier to their use.

Issues and Perspectives in Designing Clinical Trials for Negative Symptoms in Schizophrenia

SR Marder, L Alphs, I Anghelescu, C Arango, T Barnes, I Caers, D Daniel, E Dunayevich, W Fleischhacker, G Garibaldi, M Green, P Harvey, R Kahn, J Kane, R Keefe, B Kinon, S Leucht, JP Lindenmayer, A Malhotra, V Stauffer, D Umbricht, K Wesnes, S Kapur, J Rabinowitz
Schizophrenia Research, article S0920-9964(13)00447-7
A number of pharmacological agents for treating negative symptoms in schizophrenia are currently in development. Unresolved questions regarding the design of clinical trials in this area were discussed at an international meeting in Florence, Italy in April 2012. Participants included representatives from academia, the pharmaceutical industry, and the European Medicines Agency (EMA). Prior to the meeting, participants submitted key questions for debate and discussion. Responses to the questions guided the discussion during the meeting. The group reached agreement on a number of issues: (1) study subjects should be under the age of 65; (2) subjects should be excluded for symptoms of depression that do not overlap with negative symptoms; (3) functional measures should not be required as a co-primary in negative symptom trials; (4) information from informants should be included for ratings when available; (5) Phase 2 negative symptom trials should be 12 weeks in duration, with 26 weeks preferred for Phase 3 trials; (6) prior to entry into a negative symptom study, subjects should demonstrate clinical stability for a period of 4 to 6 months by collection of retrospective information; and (7) prior to entry, the stability of negative and positive symptoms should be confirmed prospectively for four weeks or longer. The participants could not reach agreement on whether predominant or prominent negative symptoms should be required for study subjects.

Attrition in Randomized Controlled Clinical Trials: Methodological Issues in Psychopharmacology (2005 Conference, Montreal)

AC Leon, CH Mallinckrodt, C Chuang-Stein, DG Archibald, GE Archer, K Chartier
Biological Psychiatry, 2006; 59:1001-1005. PMID: 16503329
Attrition is a ubiquitous problem in randomized controlled clinical trials (RCT) of psychotropic agents that can cause biased estimates of the treatment effect, reduce statistical power, and restrict the generalizability of results. The extent of the problem of attrition in central nervous system (CNS) trials is considered here and its consequences are examined. The taxonomy of missingness mechanisms is then briefly reviewed in order to introduce issues underlying the choice of data analytic strategies appropriate for RCTs with various forms of incomplete data. The convention of using last observation carried forward to accommodate attrition is discouraged because its assumptions are typically inappropriate for CNS RCTs, whereas multiple imputation strategies are more appropriate. Mixed-effects models often provide a useful data analytic strategy for attrition as do the pattern-mixture and propensity adjustments. Finally, investigators are encouraged to consider asking participants, at each assessment session, the likelihood of attendance at the subsequent assessment session. This information can be used to eliminate some of the very obstacles that lead to attrition, and can be incorporated in data analyses to reduce bias, but it will not eliminate all attrition bias.

Bias Reduction With an Adjustment for Participants’ Intent to Dropout of a Randomized Controlled Clinical Trial

AC Leon, H Demirtas, D Hedeker
Clinical Trials. 2007;4(5):540-7. PMID: 17942469
BACKGROUND: Attrition, which is virtually ubiquitous in randomized controlled clinical trials, introduces problems of increased bias and reduced statistical power. Although likelihood-based statistical models such as mixed-effects models can accommodate incomplete data, the assumption of ignorable attrition is usually required for valid inferences.
PURPOSE: In an effort to make the ignorability assumption more plausible, we consider the value of one readily obtained covariate that has been recommended by others, asking participants to rate their Intent to Attend the next assessment session.
METHODS: Here we present a simulation study that compares the bias and coverage in mixed-effects outcome analyses that do and do not include Intent to Attend as a covariate.
RESULTS: For the simulation specifications that we examined, the results are promising in the sense of reduced bias and greater precision. Specifically, if the time-varying Intent to Attend variable is associated with attrition, outcome and treatment group, bias is substantially reduced by including it in the outcome analyses.
LIMITATIONS: Analyses that are adjusted in this way will only yield unbiased estimates of efficacy if attrition is ignorable based on the self-rated intentions.
CONCLUSIONS: Accounting for participants' Intent to Attend the next assessment session will reduce attrition bias under conditions examined here. The item adds little burden and can be used both for data analyses and to identify participants at risk of attrition.
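
As an illustrative aside (our sketch, not code from the paper), a small simulation shows why conditioning on a dropout-related covariate such as self-rated intent to attend can reduce attrition bias. All variable names, effect sizes, and the dropout mechanism below are hypothetical; the simulation only assumes the abstract's key condition, that the intent variable is associated with both attrition and outcome.

```python
import math
import random
from statistics import mean

random.seed(1)
N = 20000

# Hypothetical data: W is a participant's self-rated "intent to attend"
# (higher = more likely to return); the outcome Y is correlated with W.
W = [random.gauss(0, 1) for _ in range(N)]
Y = [w + random.gauss(0, 1) for w in W]

# Attrition depends on W (missing at random given W): low-intent
# participants are more likely to drop out before the assessment.
observed = [random.random() < 1 / (1 + math.exp(-2 * w)) for w in W]

Y_obs = [y for y, o in zip(Y, observed) if o]
W_obs = [w for w, o in zip(W, observed) if o]

# Complete-case estimate of the mean of Y: biased upward, because
# retained participants have higher W and hence higher Y.
cc_mean = mean(Y_obs)

# Covariate adjustment: regress Y on W among completers, then predict
# for everyone (regression imputation, valid when attrition is
# ignorable given W -- the abstract's "ignorability" condition).
wbar, ybar = mean(W_obs), mean(Y_obs)
slope = sum((w - wbar) * (y - ybar) for w, y in zip(W_obs, Y_obs)) \
        / sum((w - wbar) ** 2 for w in W_obs)
intercept = ybar - slope * wbar
adj_mean = mean(intercept + slope * w for w in W)

print(round(cc_mean, 2), round(adj_mean, 2))  # adjusted estimate is near the true mean (0)
```

As the abstract's limitations note, this only removes the bias that the intent rating explains; attrition related to unmeasured factors remains.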

Implications of Clinical Trial Design on Sample Size Requirements

AC Leon
Schizophrenia Bulletin. 2008 Jul;34(4):664-9. Epub 2008 May 9. Review. PMID: 18469326
The primary goal in designing a randomized controlled clinical trial (RCT) is to minimize bias in the estimate of treatment effect. Randomized group assignment, double-blinded assessments, and control or comparison groups reduce the risk of bias. The design must also provide sufficient statistical power to detect a clinically meaningful treatment effect and maintain a nominal level of type I error. An attempt to integrate neurocognitive science into an RCT poses additional challenges. Two particularly relevant aspects of such a design often receive insufficient attention in an RCT. Multiple outcomes inflate type I error, and an unreliable assessment process introduces bias and reduces statistical power. Here we describe how both unreliability and multiple outcomes can increase the study costs and duration and reduce the feasibility of the study. The objective of this article is to consider strategies that overcome the problems of unreliability and multiplicity.
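
The two design issues named in the abstract are easy to quantify. The following back-of-the-envelope sketch is ours, not the paper's: it shows the familywise type I error across k independent outcomes, and how classical attenuation of a standardized effect by measurement unreliability inflates the required sample size (using the usual two-group z-approximation; alpha, power, and the effect size 0.5 are illustrative choices).

```python
from math import ceil, sqrt
from statistics import NormalDist

alpha, power, d = 0.05, 0.80, 0.5
z = NormalDist().inv_cdf
K = (z(1 - alpha / 2) + z(power)) ** 2

# 1) Multiplicity: k independent outcomes, each tested at alpha = .05,
#    inflate the familywise type I error rate well past .05.
fwer = {k: round(1 - (1 - alpha) ** k, 3) for k in (1, 3, 5, 10)}
print(fwer)  # {1: 0.05, 3: 0.143, 5: 0.226, 10: 0.401}

# 2) Unreliability: classical test theory attenuates the observed
#    standardized effect to d * sqrt(reliability), so the required
#    per-group n grows by roughly a factor of 1 / reliability.
n_per_group = {r: ceil(2 * K / (d * sqrt(r)) ** 2) for r in (1.0, 0.8, 0.6)}
print(n_per_group)  # {1.0: 63, 0.8: 79, 0.6: 105}
```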

Enhancing Clinical Trial Design of Interventions for Posttraumatic Stress Disorder

AC Leon, L Davis
Journal of Traumatic Stress. 2009 Dec;22(6):603-11. PMID: 19902462
The 2008 Institute of Medicine review of interventions research for posttraumatic stress disorder (PTSD) concluded that new, well-designed studies are needed to evaluate the efficacy of treatments for PTSD. The Department of Veterans Affairs (VA), the Department of Defense, and the National Institute of Mental Health convened a meeting on research methodology and the VA issued recommendations for design and analysis of randomized controlled clinical trials (RCTs) for PTSD. The rationale that formed the basis for several of the components of the recommendations is discussed here. Fundamental goals of RCT design are described. Strategies in design and analysis that contribute to the goals of an RCT and thereby enhance the likelihood of signal detection are considered.

The Role and Interpretation of Pilot Studies in Clinical Research

AC Leon, L Davis, H Kraemer
J Psychiatr Res. 2011 May;45(5):626-9. doi: 10.1016/j.jpsychires.2010.10.008. Epub 2010 Oct 28.
Pilot studies represent a fundamental phase of the research process. The purpose of conducting a pilot study is to examine the feasibility of an approach that is intended to be used in a larger scale study. The roles and limitations of pilot studies are described here using a clinical trial as an example. A pilot study can be used to evaluate the feasibility of recruitment, randomization, retention, assessment procedures, new methods, and implementation of the novel intervention. A pilot study is not a hypothesis testing study. Safety, efficacy and effectiveness are not evaluated in a pilot. Contrary to tradition, a pilot study does not provide a meaningful effect size estimate for planning subsequent studies due to the imprecision inherent in data from small samples. Feasibility results do not necessarily generalize beyond the inclusion and exclusion criteria of the pilot design. A pilot study is a requisite initial step in exploring a novel intervention or an innovative application of an intervention. Pilot results can inform feasibility and identify modifications needed in the design of a larger, ensuing hypothesis testing study. Investigators should be forthright in stating these objectives of a pilot study. Grant reviewers and other stakeholders should expect no more.
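
The imprecision the authors warn about is easy to see numerically. As an illustration of our own (not from the paper), the large-sample standard error of Cohen's d shows how wide a confidence interval a typical pilot yields; the sample sizes and observed d below are hypothetical:

```python
from math import sqrt

def d_ci(d, n1, n2):
    """Approximate 95% CI for Cohen's d using the common
    large-sample standard-error formula."""
    se = sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - 1.96 * se, d + 1.96 * se

# A pilot with 12 subjects per arm that happens to observe d = 0.5
# cannot distinguish a large benefit from a harmful effect.
lo, hi = d_ci(0.5, 12, 12)
print(round(lo, 2), round(hi, 2))  # roughly (-0.31, 1.31)
```

An interval spanning negative to very large effects is why a pilot estimate should not drive the power calculation for the subsequent trial.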

Comparative Effectiveness Clinical Trials in Psychiatry: Superiority, Non-inferiority and the Role of Active Comparators

AC Leon
Journal of Clinical Psychiatry. 2011;72(10):1344-1349. doi: 10.4088/JCP.10m06089whi
The Agency for Healthcare Research and Quality, part of the US Department of Health and Human Services, has issued several Requests for Applications to conduct comparative effectiveness research (CER). Many of the applications will involve randomized controlled clinical trials that include an active comparator. The inclusion of an active comparator has implications for clinical trial design.

Despite a common misperception, a clinical trial result of no significant difference between active treatment groups does not imply equivalence or noninferiority. A noninferiority trial, on the other hand, can directly test whether one active treatment group is noninferior to the other. For example, noninferiority of an inexpensive generic could be tested in comparison with a novel, more costly intervention. Although seldom used in psychiatry, noninferiority clinical trials could play a fundamental role in CER. Features of noninferiority and the nearly ubiquitous superiority designs are contrasted. The noninferiority margin is defined and its application and interpretation are discussed.

Evidence of noninferiority can only come from well-designed and conducted noninferiority CER. Sample sizes needed in noninferiority trials and in superiority trials that include an active comparator are substantially larger than those needed in trials that can utilize a placebo control in their scientific design. As a result, trials with active comparators are more costly, require longer recruitment duration, and expose more participants to the risks of an experiment than do trials in which the only comparator is placebo.
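
The sample-size contrast in the paragraph above can be sketched with the standard two-group z-approximation. This is our illustration, not the paper's: the effect size, noninferiority margin, and operating characteristics below are assumed values chosen to show why a margin smaller than the full placebo-controlled effect drives the sample size up.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha, power, two_sided=True):
    """Approximate per-group sample size for a two-group z-test.

    delta: difference to detect (superiority) or the noninferiority
           margin (assuming the true difference between actives is 0).
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2) if two_sided else z(1 - alpha)
    z_beta = z(power)
    return ceil(2 * (sd * (z_alpha + z_beta) / delta) ** 2)

# Placebo-controlled superiority trial: detect a full effect of 0.5 SD.
n_sup = n_per_group(delta=0.5, sd=1.0, alpha=0.05, power=0.80)

# Noninferiority trial: the margin is typically a fraction of the full
# effect (here 0.25 SD), tested one-sided at alpha = 0.025.
n_ni = n_per_group(delta=0.25, sd=1.0, alpha=0.025, power=0.80,
                   two_sided=False)

print(n_sup, n_ni)  # halving the margin quadruples the per-group n
```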

Two Clinical Trial Designs to Examine Personalized Treatments for Psychiatric Disorders (2009 Scientific Meeting, Arlington)

AC Leon
Journal of Clinical Psychiatry. 2011 May;72(5):593-7. Epub 2010 Jul 13.
The National Institute of Mental Health Strategic Plan calls for the development of personalized treatment strategies for mental disorders. In an effort to achieve that goal, several investigators have conducted exploratory analyses of randomized controlled clinical trial (RCT) data to examine the association between baseline subject characteristics, the putative moderators, and the magnitude of treatment effect sizes. Exploratory analyses are used to generate hypotheses, not to confirm them. For that reason, independent replication is needed. Here, 2 general approaches to designing confirmatory RCTs are described that build on the results of exploratory analyses. These approaches address distinct questions. For example, a 2 × 2 factorial design provides an empirical test of the question, “Is there a greater treatment effect for those with the single-nucleotide polymorphism than for those without that polymorphism?” and the hypothesis test involves a moderator-by-treatment interaction. In contrast, a main effects strategy evaluates the intervention in subgroups and involves separate hypothesis-testing studies of treatment for subjects with the genotypes hypothesized to have enhanced and adverse response. These designs require widely disparate sample sizes to detect a given effect size. The former could need as many as 4-fold the number of subjects. As such, the choice of design impacts the research costs, clinical trial duration, and number of subjects exposed to risk of an experiment, as well as the generalizability of results. When resources are abundant, the 2 × 2 design is the preferable approach for identifying personalized interventions because it directly tests the differential treatment effect, but its demand on research funds is extraordinary.
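
The 4-fold figure follows from the variance of a difference of differences. A short sketch under textbook assumptions (equal variances, a standardized effect of 0.5, two-sided alpha = .05, 80% power; our illustration, comparing the interaction test against a single two-arm trial in one subgroup):

```python
from math import ceil
from statistics import NormalDist

z = NormalDist().inv_cdf
K = (z(0.975) + z(0.80)) ** 2  # alpha = .05 two-sided, power = .80

d, sd = 0.5, 1.0  # assumed standardized effect of interest

# Main-effects strategy: one two-arm trial within a single subgroup.
# Var(difference of two means) = 2*sd^2/n, so per-arm n is:
n_arm = ceil(2 * K * (sd / d) ** 2)
total_subgroup_trial = 2 * n_arm

# 2x2 factorial: the moderator-by-treatment interaction is a
# difference of differences, with Var = 4*sd^2/n per cell:
n_cell = ceil(4 * K * (sd / d) ** 2)
total_factorial = 4 * n_cell

print(total_subgroup_trial, total_factorial)  # the factorial needs 4x the total
```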