A joint project of the Graduate School, Peabody College, and the Jean & Alexander Heard Library

Title page for ETD etd-03272017-091807


Type of Document Master's Thesis
Author Davis, Sharon Elizabeth
Author's Email Address sharon.e.davis@vanderbilt.edu
URN etd-03272017-091807
Title Performance Drift of Clinical Prediction Models: Impact of Modeling Methods on Prospective Model Performance
Degree Master of Science
Department Biomedical Informatics
Advisory Committee
  • Michael E Matheny, Committee Chair
  • Guanhua Chen, Committee Member
  • Thomas A Lasko, Committee Member
Keywords
  • Clinical prediction
  • calibration drift
  • machine learning
Date of Defense 2017-02-17
Availability restricted
Abstract
Integrating personalized risk predictions into clinical decision support requires well-calibrated models, yet model accuracy deteriorates as patient populations shift. Understanding the influence of modeling methods on performance drift is essential for designing updating protocols. Using national cohorts of Department of Veterans Affairs hospital admissions, we compared the temporal performance of seven regression and machine learning models for hospital-acquired acute kidney injury and 30-day mortality after admission. All modeling methods maintained robust discrimination but experienced deteriorating calibration. Random forest and neural network models exhibited less calibration drift than regression models, and the L-2 penalized logistic regression for mortality demonstrated drift similar to the random forest. Increasing overprediction by all models correlated with declining event rates. Diverging patterns of calibration drift among the acute kidney injury models coincided with changes in predictor-outcome associations, while the mortality models showed that random forest, neural network, and L-2 penalized logistic regression models were less susceptible to calibration drift driven by case mix. These findings support the advancement of clinical predictive analytics and lay a foundation for systems that maintain model accuracy. Because calibration drift affected every method, all clinical prediction models should be routinely reassessed and updated as needed. Regression models require more frequent evaluation and updating than machine learning models, highlighting the importance of tailoring updating protocols to each model's susceptibility to patient population shifts. While a full suite of best practices remains to be developed, the choice of modeling method will be an essential factor in determining when and how models are updated.
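The abstract compares how regression and machine learning models hold their calibration on later patient cohorts. As a rough illustration of that kind of temporal validation, the sketch below trains an L2-penalized logistic regression and a random forest on one synthetic "year" of admissions and then tracks discrimination (AUC) and two simple calibration summaries (observed/expected ratio and calibration-in-the-large) on subsequent years. The data generator, feature set, drift pattern, and metric choices are assumptions made for illustration only and are not taken from the thesis.

    # Minimal, hypothetical sketch of assessing calibration drift over time.
    # Not the thesis code: cohorts are synthetic and illustrative only.
    # Requires numpy and scikit-learn.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    def make_cohort(n, year):
        """Synthetic admissions for one year; the event rate falls and one
        predictor-outcome association weakens over time to mimic drift."""
        X = rng.normal(size=(n, 5))
        logits = -1.0 - 0.15 * year + (1.0 - 0.1 * year) * X[:, 0] + 0.5 * X[:, 1]
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))
        return X, y

    # Train on the first year, then evaluate prospectively on later years.
    X_train, y_train = make_cohort(20000, year=0)
    models = {
        "l2_logistic": LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    for model in models.values():
        model.fit(X_train, y_train)

    for year in range(1, 4):
        X_test, y_test = make_cohort(20000, year=year)
        for name, model in models.items():
            p = model.predict_proba(X_test)[:, 1]
            auc = roc_auc_score(y_test, p)       # discrimination
            oe_ratio = y_test.mean() / p.mean()  # observed/expected ratio
            citl = y_test.mean() - p.mean()      # calibration-in-the-large
            print(f"year {year} {name:13s} AUC={auc:.3f} "
                  f"O/E={oe_ratio:.2f} CITL={citl:+.3f}")

In a setup like this, an O/E ratio drifting below 1 as event rates fall corresponds to the growing overprediction described in the abstract, while a stable AUC alongside worsening O/E and calibration-in-the-large illustrates how discrimination can remain robust even as calibration deteriorates.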
Files
  Filename: SharonDavis.pdf [campus]
  Size: 10.07 MB
  Approximate download time (hh:mm:ss): 00:46:37 (28.8K modem), 00:23:58 (56K modem), 00:20:59 (ISDN 64 Kb), 00:10:29 (ISDN 128 Kb), 00:00:53 (higher-speed access)
[campus] indicates that a file or directory is accessible from the campus network only.
