TY - JOUR
T1 - Developing Clinical Prediction Models when adhering to minimum sample size recommendations: the importance of quantifying bootstrap variability in tuning parameters and predictive performance
AU - Martin, Glen
AU - Riley, Richard
AU - Collins, Gary S.
AU - Sperrin, Matthew
PY - 2021/8/13
Y1 - 2021/8/13
AB - Recent minimum sample size formulae (Riley et al.) for developing clinical prediction models (CPMs) help ensure that development datasets are of sufficient size to minimise overfitting. While these criteria are known to avoid excessive overfitting on average, the extent of variability in overfitting at the recommended sample sizes is unknown. We investigated this through a simulation study and an empirical example, developing logistic regression CPMs using unpenalised maximum likelihood estimation and various post-estimation shrinkage or penalisation methods. While the mean calibration slope was close to the ideal value of one for all methods, penalisation further reduced the level of overfitting, on average, compared with unpenalised methods. This came at the cost of higher variability in the predictive performance of penalisation methods in external data. We recommend that penalisation methods be used in data that meet, or surpass, minimum sample size requirements to further mitigate overfitting, and that the variability in predictive performance and in any tuning parameters always be examined as part of the model development process, since this provides additional information over average (optimism-adjusted) performance alone. Lower variability would give reassurance that the developed CPM will perform well in new individuals from the same population as was used for model development.
M3 - Article
SN - 0962-2802
JO - Statistical Methods in Medical Research
JF - Statistical Methods in Medical Research
ER -