Konietschke Lab – Statistical Methods of Translation and Early Clinical Studies
Translational research encompasses the interdisciplinary work required to carry preclinical findings into the clinical development process. Preclinical study results, and in particular their statistical evaluation, must be of high quality in order to provide valid building blocks for clinical research, and the relevant animal ethics committees attach great importance to adequate biometric planning and execution. While many statistical methods and study designs were developed primarily for prospective clinical studies with large sample sizes, basic research faces special biometric requirements arising from its mostly very small sample sizes. With small samples, however, many statistical procedures do not control the type I error level and therefore generally cannot be used in translational research. Our research group thus aims to develop statistical methods that yield valid conclusions even with very small sample sizes. Moreover, preclinical experiments are often based on complex designs that seek precise answers to specific research questions. The main focus of our methodological work is the development of rank-based methods for factorial designs and for longitudinal and high-dimensional data, as well as of resampling techniques. Typical areas of application include preclinical and early clinical studies, diagnostic studies, rare disease research, and personalized medicine.
Main research areas and projects
Multiple contrast tests and simultaneous confidence intervals
Many experiments and studies involve more than two treatment groups. In statistical practice, such designs are usually analyzed in three steps:
In the first step, a global test, such as the analysis of variance (ANOVA), is conducted to determine whether there are any differences between the groups. In the second step, several pairwise comparison tests are performed to localize the differences more precisely, with the significance level adjusted for the number of tests. The test decisions from steps 1 and 2, however, are not necessarily consistent: the global test may show a significant difference while none of the pairwise comparisons do (or vice versa).
To account for the dispersion of the data, (simultaneous) confidence intervals for the effects are computed in the final step, and these, too, can contradict the test results. For example, a pairwise comparison test can show a significant difference even though the associated confidence interval contains the "null effect."
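The classical three-step workflow described above can be sketched as follows. This is a minimal illustration with simulated data; the group names, the Bonferroni adjustment, and the Welch-type tests and intervals are illustrative choices, not the specific methods developed by the lab.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(1)
# Three simulated treatment groups with small samples, as is typical in preclinical work
groups = {"control": rng.normal(0.0, 1.0, 8),
          "dose_a":  rng.normal(0.8, 1.0, 8),
          "dose_b":  rng.normal(1.2, 1.0, 8)}

# Step 1: global one-way ANOVA F-test
F, p_global = stats.f_oneway(*groups.values())
print(f"global ANOVA p = {p_global:.4f}")

def welch_ci(x, y, level):
    """Welch-type confidence interval for a difference of means (unequal variances)."""
    vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
    # Welch-Satterthwaite degrees of freedom
    df = (vx + vy) ** 2 / (vx**2 / (len(x) - 1) + vy**2 / (len(y) - 1))
    tcrit = stats.t.ppf((1 + level) / 2, df)
    d = x.mean() - y.mean()
    half = tcrit * np.sqrt(vx + vy)
    return d - half, d + half

# Steps 2 and 3: pairwise Welch t-tests at a Bonferroni-adjusted level,
# plus (non-simultaneous) confidence intervals for each mean difference
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b], equal_var=False)
    lo, hi = welch_ci(groups[a], groups[b], 1 - alpha)
    print(f"{a} vs {b}: p = {p:.4f}, CI = ({lo:.2f}, {hi:.2f})")
```

Because the global F-test, the pairwise tests, and the intervals are derived from different statistics, their decisions need not agree, which is exactly the incoherence discussed in the text.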
The classical step-by-step analysis described above is therefore neither coherent nor compatible. In addition, the global hypothesis tested in the first step often does not correspond to the researchers' main question; their primary interest usually lies in the local test decisions and p-values.
Against this backdrop, we develop so-called multiple contrast tests and simultaneous confidence intervals. With these methods, the analysis does not start with a global test, but with the specific pairwise comparisons derived from the main research question. The global hypothesis is tested as a "logical component": if any pairwise comparison is significant, then so is the global test. The methods we have established are therefore consistent and compatible and have already been used in a large number of studies.
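The idea behind a multiple contrast test can be sketched for many-to-one (Dunnett-type) comparisons against a control. In this simplified sketch the joint distribution of the contrast statistics is approximated by a multivariate normal, and the equicoordinate critical value is obtained by Monte Carlo simulation; the lab's actual methods use rank-based statistics and refined small-sample approximations.

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated data: a control group and two dose groups
control = rng.normal(0.0, 1.0, 10)
doses = [rng.normal(0.7, 1.0, 10), rng.normal(1.1, 1.0, 10)]

n0, v0 = len(control), control.var(ddof=1)
diffs, ses = [], []
for x in doses:
    diffs.append(x.mean() - control.mean())
    ses.append(np.sqrt(v0 / n0 + x.var(ddof=1) / len(x)))
diffs, ses = np.array(diffs), np.array(ses)

# Correlation of the contrast statistics: they are dependent
# because every contrast shares the same control group
k = len(doses)
R = np.eye(k)
for i in range(k):
    for j in range(i + 1, k):
        R[i, j] = R[j, i] = (v0 / n0) / (ses[i] * ses[j])

# Equicoordinate two-sided critical value by Monte Carlo:
# the 95th percentile of max_i |Z_i| with Z ~ N(0, R)
Z = rng.multivariate_normal(np.zeros(k), R, size=200_000)
q = np.quantile(np.abs(Z).max(axis=1), 0.95)

# Simultaneous confidence intervals and compatible local tests
lower, upper = diffs - q * ses, diffs + q * ses
t_stats = diffs / ses
significant = np.abs(t_stats) > q      # local decisions
global_significant = significant.any() # global decision = maximum statistic
```

Because the same critical value q is used for all contrasts and for the global maximum statistic, the local tests, the global test, and the simultaneous intervals cannot contradict each other: an interval excludes zero exactly when the corresponding local test rejects, and the global test rejects exactly when some local test does.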
To calculate p-values, the probability distribution of the test statistics must be known or approximated, for example through simulation. We develop so-called resampling and permutation methods that yield accurate approximations of these distributions, especially for small samples, without requiring assumptions such as exchangeability of the data or equal variances.
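A minimal sketch of a studentized permutation test for two independent samples illustrates the principle: the Welch-type statistic is recomputed on random reassignments of the pooled observations, and the p-value is read off the resulting permutation distribution. This is a simplified illustration; the nonparametric Behrens-Fisher procedures in the publications below are considerably more refined.

```python
import numpy as np

def welch_t(x, y):
    # Studentized (Welch-type) statistic: robust against unequal variances
    return (x.mean() - y.mean()) / np.sqrt(
        x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))

def permutation_p_value(x, y, n_perm=10_000, seed=0):
    rng = np.random.default_rng(seed)
    observed = welch_t(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        # Randomly reassign the pooled observations to the two groups
        perm = rng.permutation(pooled)
        if abs(welch_t(perm[:len(x)], perm[len(x):])) >= abs(observed):
            count += 1
    # Add-one correction so the p-value is never exactly zero
    return (count + 1) / (n_perm + 1)
```

Studentizing the statistic is what allows the permutation approach to remain valid even when the two groups have unequal variances, i.e., when the data are not exchangeable under the null hypothesis.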
Gunawardana, A., Konietschke, F. (2019). Nonparametric multiple contrast tests for general multivariate factorial designs. Journal of Multivariate Analysis. In Press.
Konietschke, F., Friede, T., Pauly, M. (2018). Semi‐parametric analysis of overdispersed count and metric data with varying follow‐up times: Asymptotic theory and small sample approximations. Biometrical Journal. In Press.
Konietschke, F., Aguayo, R. R., Staab, W. (2018). Simultaneous inference for factorial multireader diagnostic trials. Statistics in Medicine, 37(1), 28-47.
Brunner, E., Konietschke, F., Pauly, M., Puri, M. L. (2017). Rank‐based procedures in factorial designs: hypotheses about non‐parametric treatment effects. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79(5), 1463-1485.
Pauly, M., Brunner, E., Konietschke, F. (2015). Asymptotic permutation tests in general factorial designs. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 77(2), 461-473.
Konietschke, F., Bathke, A. C., Harrar, S. W., Pauly, M. (2015). Parametric and nonparametric bootstrap methods for general MANOVA. Journal of Multivariate Analysis, 140, 291-301.
Konietschke, F., Hothorn, L. A., Brunner, E. (2012). Rank-based multiple test procedures and simultaneous confidence intervals. Electronic Journal of Statistics, 6, 738-759.
Konietschke, F., Pauly, M. (2012). A studentized permutation test for the nonparametric Behrens-Fisher problem in paired data. Electronic Journal of Statistics, 6, 1358-1372.