Lilla Di Scala (Actelion) | “Informative censoring in a rare disease: a regulatory experience in PAH” | There is general awareness of the risks of informative censoring, e.g. when data from patients with no follow-up beyond the end of treatment are not independent of the underlying disease process, thereby introducing bias. However, little guidance exists on how to mitigate this problem, which creates a challenge for the statistical and clinical interpretation of study data (Fleming 2009, DeMets 2012). The phenomenon is particularly relevant in rare diseases, where study sizes must remain feasible. The limited sample size and low event rate in a time-to-event context lead to the choice of composite endpoints as primary endpoints, and the effect of treatment is then unlikely to be uniform across all components. The study design for a potentially life-saving drug in a rare disease can further exacerbate informative censoring through the extent (or lack) of post-treatment follow-up. We present a case study using statistical simulations to address the challenges of informative censoring and to explain potential imbalances across components. |
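The bias mechanism described above can be made concrete with a small simulation. The sketch below (the simulated data and the dropout rule are illustrative assumptions, not the case study from the talk) censors high-risk patients shortly before their event and shows how the Kaplan-Meier estimate then overstates survival.

```python
# Minimal sketch: informative censoring biasing the Kaplan-Meier estimate.
# All distributions and the dropout rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 1000
event_time = rng.exponential(scale=12.0, size=n)   # true event times (months)

# Informative dropout: patients close to an event tend to stop treatment
# and are censored shortly before the event would have occurred.
dropout = (event_time < 8.0) & (rng.random(n) < 0.6)
obs_time = np.where(dropout, 0.7 * event_time, event_time)
observed_event = ~dropout

def kaplan_meier(time, event):
    """Product-limit estimate of the survival function."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n_at_risk, s, times_out, surv = len(time), 1.0, [], []
    for t, d in zip(time, event):
        if d:
            s *= 1.0 - 1.0 / n_at_risk
        n_at_risk -= 1
        times_out.append(t)
        surv.append(s)
    return np.array(times_out), np.array(surv)

def surv_at(times, surv, t):
    """Step-function survival estimate evaluated at time t."""
    below = times <= t
    return surv[below][-1] if below.any() else 1.0

t_km, s_km = kaplan_meier(obs_time, observed_event)
print("True 12-month survival:    ", np.mean(event_time > 12.0))
print("Kaplan-Meier 12-month est.:", surv_at(t_km, s_km, 12.0))  # biased upwards
```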
Dominic Magirr (AstraZeneca) | “Unblinded sample-size reassessment in time-to-event clinical trials” | Mid-study design modifications are becoming increasingly accepted in confirmatory clinical trials, so long as appropriate methods are applied such that error rates are controlled. Unfortunately, the important case of time-to-event endpoints is not easily handled by the standard theory. We analyze current methods that allow design modifications to be based on the full interim data, i.e., not only the observed event times but also secondary endpoint and safety data from patients who are yet to have an event. We show that the final test statistic may ignore a substantial subset of the observed event times, and that this leads to inefficiency compared to alternative sample size re-estimation strategies. |
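A standard way to retain type I error control after an unblinded design change is a combination test with weights fixed in advance. The sketch below (planned event numbers and z-values are illustrative assumptions, not results from the talk) shows a two-stage inverse-normal combination; because the weights are tied to the planned rather than the realised information, observed events can be effectively down-weighted, which is the kind of inefficiency the abstract alludes to.

```python
# Sketch: two-stage inverse-normal combination test with pre-fixed weights.
# Planned event numbers and stage-wise z-values are illustrative only.
from scipy.stats import norm

# Weights fixed in advance from the *planned* number of events per stage
planned_d1, planned_d2 = 100, 150
w1 = (planned_d1 / (planned_d1 + planned_d2)) ** 0.5
w2 = (planned_d2 / (planned_d1 + planned_d2)) ** 0.5

# Stage-wise (independent-increment) log-rank z-statistics actually observed
z1, z2 = 1.1, 2.0

# Weights stay fixed even if the realised event numbers differ from plan
z_combined = w1 * z1 + w2 * z2
alpha = 0.025
print(f"Combined z = {z_combined:.2f}, reject H0: {z_combined > norm.ppf(1 - alpha)}")
```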
Tobias Bluhmki (Uni Ulm, Germany), Claudia Schmoor (Uniklinik Freiburg, Germany) and Jan Beyersmann (Uni Ulm, Germany) | “Analyzing non-monotonous time-to-event outcome probabilities in randomized clinical trials” | Common time-to-event efficacy endpoints in randomized clinical trials of leukemia patients after allogeneic stem-cell transplantation (ASCT) are, for instance, overall survival or the incidence of graft-versus-host disease (GvHD). Another important measure of treatment success is the time spent undergoing immunosuppressive therapy (IST). Since patients can experience multiple episodes of IST during follow-up, standard survival techniques cannot be applied; instead, more advanced multistate techniques should be used for the analysis. The probabilities of interest are estimated by the Aalen-Johansen estimator, and the methodological complication is that they are non-monotonous curves in time. In order to still perform a formal statistical treatment comparison, we propose a convenient resampling technique to derive time-simultaneous confidence bands. These bands adequately account for the statistical uncertainty arising in probability estimation.
The method is applied to a recently published study comparing standard GvHD prophylaxis plus pretransplant Grafalon (formerly ATG-Fresenius S, ATG-F) with standard GvHD prophylaxis alone [1-3]; an illustrative sketch of the simultaneous-band idea follows the references below.
[1] Finke J., Bethge W., Schmoor C., Ottinger H.D., et al. Standard graft-versus-host disease prophylaxis with or without anti-T-cell globulin in haematopoietic cell transplantation from matched unrelated donors: a randomised, open-label, multicentre phase 3 trial. Lancet Oncol 2009, 10:855-64.
[2] Socié G., Schmoor C., Bethge W.A., et al. Chronic graft-versus-host disease: long-term results from a randomized trial on GvHD prophylaxis with or without anti-T-cell globulin ATG-Fresenius. Blood 2011, 117:6375-82.
[3] Schmoor C., Schumacher M., Finke J., Beyersmann J. Competing risks and multistate models. Clin Cancer Res 2013, 19(1):12-21. |
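To give a flavour of the time-simultaneous band construction, the sketch below builds a band for a state-occupation probability (being under IST) from simulated, fully observed trajectories using a simple patient-level bootstrap. It is only an illustration: it involves neither censoring nor the Aalen-Johansen estimator itself, the resampling scheme is not necessarily the one proposed in the talk, and all simulation settings are assumptions.

```python
# Sketch: time-simultaneous confidence band for a non-monotonous
# state-occupation probability (being under IST) via patient-level bootstrap.
# Simulated, fully observed trajectories; settings are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_pat = 200
grid = np.linspace(0, 36, 145)                       # months since ASCT

def simulate_patient():
    """Hypothetical on/off IST trajectory with 0-2 episodes."""
    on = np.zeros_like(grid, dtype=bool)
    for _ in range(rng.integers(0, 3)):
        start = rng.uniform(0, 30)
        on |= (grid >= start) & (grid < start + rng.exponential(4))
    return on

trajectories = np.array([simulate_patient() for _ in range(n_pat)])

def occupation_prob(traj):
    """Proportion of patients under IST at each grid time."""
    return traj.mean(axis=0)

estimate = occupation_prob(trajectories)

# Bootstrap patients to capture estimation uncertainty
B = 1000
boot = np.empty((B, grid.size))
for b in range(B):
    idx = rng.integers(0, n_pat, n_pat)
    boot[b] = occupation_prob(trajectories[idx])

# Time-simultaneous band: 95% quantile of the maximal deviation over the grid
half_width = np.quantile(np.abs(boot - estimate).max(axis=1), 0.95)
lower, upper = estimate - half_width, estimate + half_width
print(f"95% simultaneous band half-width: {half_width:.3f}")
```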
Jennifer Rogers (University of Oxford) | “The analysis of recurrent events: A summary of methodology” | Many chronic diseases are characterised by nonfatal recurrent events. Examples include asthma attacks in asthma, epileptic seizures in epilepsy and hospitalisations for worsening condition in heart failure. Analysing all of these repeat events within individuals is more representative of disease progression and more accurately estimates the effect of treatment on the true burden of disease. This talk will start by outlining the different methods that are available for analysing recurrent event data. We shall illustrate and compare various methods of analysing data on repeat hospitalisations using simulated data and data from major trials in heart failure. |
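One standard approach covered by such overviews is negative binomial regression for event counts with the log of follow-up time as an offset. The sketch below runs it on simulated hospitalisation counts; the data, the treatment effect and the fixed dispersion value are assumptions for illustration, not results from any trial.

```python
# Sketch: negative binomial regression for recurrent hospitalisations,
# one standard recurrent-event analysis. Simulated data; the dispersion
# parameter alpha is fixed here rather than estimated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
treat = rng.integers(0, 2, n)                       # 1 = new treatment, 0 = control
follow_up = rng.uniform(0.5, 3.0, n)                # years on study
rate = 0.8 * np.exp(-0.3 * treat)                   # assumed event rate per year
frailty = rng.gamma(shape=2.0, scale=0.5, size=n)   # patient-level overdispersion
events = rng.poisson(rate * frailty * follow_up)    # hospitalisation counts

X = sm.add_constant(treat.astype(float))
fit = sm.GLM(events, X,
             family=sm.families.NegativeBinomial(alpha=0.5),
             offset=np.log(follow_up)).fit()
print(fit.summary())
print("Estimated rate ratio (treatment vs control):", np.exp(fit.params[1]))
```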
Mouna Akacha (Novartis Pharma AG) | “Recurrent event data endpoints in chronic heart failure studies: What is the estimand of interest?” | Heart failure (HF) is a common and serious global health problem, affecting approximately 2% of adults in developed countries. The good news is that, with new treatments on the market, HF has been converted from a short-term and quickly fatal condition into a chronic disease characterized by recurrent non-fatal events (HF hospitalizations) and relatively low disease-related mortality. Classical heart failure trials have used a composite primary endpoint of cardiovascular (CV) death and HF hospitalization, analyzed with a ‘time to first composite event’ analysis.
Various limitations of this endpoint have been raised in recent years. Among others, the ‘time to first composite event’ endpoint is thought not to fully capture the disease burden, as it ignores all events that occur after the first one. Given that a number of recent large HF outcome trials have failed to show a clinical benefit with the traditional endpoint, clinical teams are reviewing novel endpoints and estimands that better capture clinical benefit and adapt to the changing disease profile.
In this talk, we will discuss different estimands for this setting and touch upon their estimation. |
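How much information a ‘time to first composite event’ analysis discards can be quantified directly. The sketch below counts, on simulated patient histories (all simulation settings are illustrative assumptions), the hospitalisations that occur after a patient's first composite event and are therefore ignored by the traditional analysis.

```python
# Sketch: proportion of HF hospitalisations ignored by a time-to-first
# composite analysis. Simulated patient histories, illustrative settings only.
import numpy as np

rng = np.random.default_rng(3)
n = 1000
follow_up = 3.0                                          # years

total_hosp, hosp_after_first = 0, 0
for _ in range(n):
    # Hypothetical history: hospitalisation times from a Poisson process,
    # a possible CV death, administrative censoring at end of follow-up
    hosp = np.sort(rng.uniform(0, follow_up, rng.poisson(1.2)))
    death = rng.exponential(10.0)
    hosp = hosp[hosp < min(death, follow_up)]
    total_hosp += len(hosp)
    hosp_after_first += max(len(hosp) - 1, 0)   # everything beyond the first event

print(f"Observed HF hospitalisations: {total_hosp}")
print(f"Ignored by the time-to-first composite analysis: {hosp_after_first} "
      f"({hosp_after_first / total_hosp:.0%})")
```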
Ekkehard Glimm (Novartis Pharma AG) | “Group-sequential and adaptive designs for recurrent event data” | In this talk, we will investigate methods for the analysis of recurrent event data in group-sequential and adaptive trial designs. The talk will consider several ways of approximating the information fraction and the joint distribution of the treatment-effect test statistics in trials with interim analyses when the response is related to recurrent events. We will look at different ways of condensing the individual data (the complete treatment history and the complete split of the individual’s observation period into sojourn times in states like “healthy”, “progressed”, “in hospital”, etc.), either by simply counting the number of recurrent events in an observation period or by the total time spent in a certain state. Overdispersed (piecewise) Poisson processes leading to a negative-binomial distribution are a useful tool for such analyses. We will review normal approximations for the event rate estimates in such data, investigate how the usual approximations can be refined by incorporating information about the actual individual observation times, and investigate via simulations how these approximations hold up in finite samples and in situations where reality deviates from the assumptions (for example, because event rates are non-constant). |
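For a negative binomial event rate with gamma frailty and dispersion κ, the per-patient Fisher information for the log rate is approximately μ/(1 + κμ) with μ = rate × exposure, which gives one simple way to approximate the information fraction at an interim look. The sketch below applies this; the rates, dispersion and exposure patterns are assumptions for illustration, not values from the talk, and this is only one of the approximations the talk considers.

```python
# Sketch: approximate information fraction at an interim analysis for a
# negative-binomial recurrent-event endpoint. Per-patient Fisher information
# for the log rate with gamma frailty: mu / (1 + kappa * mu), mu = rate * exposure.
# Rates, dispersion and exposures below are illustrative assumptions.
import numpy as np

def information(rate, exposures, kappa):
    """Summed per-patient Fisher information for the log event rate."""
    mu = rate * exposures
    return np.sum(mu / (1.0 + kappa * mu))

kappa = 0.8                                    # overdispersion
rate_ctrl, rate_trt = 1.0, 0.7                 # assumed annual event rates

# Planned final analysis: 300 patients per arm with 2 years of exposure each
final_info = (information(rate_ctrl, np.full(300, 2.0), kappa) +
              information(rate_trt,  np.full(300, 2.0), kappa))

# Interim: 200 patients per arm recruited, staggered exposure of 0.2-1.5 years
rng = np.random.default_rng(7)
exp_interim = rng.uniform(0.2, 1.5, 200)
interim_info = (information(rate_ctrl, exp_interim, kappa) +
                information(rate_trt,  exp_interim, kappa))

print(f"Approximate information fraction at interim: {interim_info / final_info:.2f}")
```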