Tutorial: Regression Modelling Strategies

Frank E Harrell Jr, Dept. of Biostatistics, Vanderbilt University School of Medicine, Nashville, TN, USA, f.harrell@vanderbilt.edu.


The course is intended for statisticians and persons from other quantitative disciplines who are interested in multivariable regression analysis of univariate responses, and in developing, validating, and graphically describing multivariable predictive models.


A good general knowledge of statistical estimation and inference methods and a good command of ordinary linear regression are assumed. Those who want to run the laboratory exercises themselves, or who want to apply the methods taught in this course in their everyday work using R, should have had a previous introduction to R. Participants are encouraged to read references [1, 2, 3] in advance.

Course Description

The first part of the course presents the following elements of multivariable predictive modeling for a single response variable: using regression splines to relax linearity assumptions, perils of variable selection and overfitting, where to spend degrees of freedom, shrinkage, imputation of missing data, data reduction, and interaction surfaces. Then a default overall modeling strategy will be described. This is followed by methods for graphically understanding models (e.g., using nomograms) and for using resampling to estimate a model's likely performance on new data. An overview of the freely available R Design package, which facilitates most of the steps of the modeling process, will then be given. Two of the following three case studies will be presented: an interactive exploration of the survival status of Titanic passengers, an interactive case study in developing a survival time model for critically ill patients, and a case study in Cox regression.

The methods covered in this course will apply to almost any regression model, including ordinary least squares, logistic regression models, and survival models.
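To make the resampling idea concrete, here is a minimal sketch of optimism-corrected bootstrap validation, in Python rather than the course's R, and not the Design package's implementation: refit the model on each bootstrap resample, compare its apparent performance on the resample with its performance on the original data, and subtract the average optimism from the apparent index. The simulated data and ordinary-least-squares fit are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def r2(y, yhat):
    """Proportion of variance explained."""
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

def fit_predict(X, y, Xnew):
    """Ordinary least squares with intercept; predict at Xnew."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.column_stack([np.ones(len(Xnew)), Xnew]) @ beta

# Simulate a small, noisy dataset with many irrelevant predictors,
# a setting prone to overfitting.
n, p = 60, 10
X = rng.normal(size=(n, p))
y = X[:, 0] + rng.normal(size=n)

apparent = r2(y, fit_predict(X, y, X))

B = 200
optimism = 0.0
for _ in range(B):
    idx = rng.integers(0, n, n)                    # bootstrap resample
    Xb, yb = X[idx], y[idx]
    perf_boot = r2(yb, fit_predict(Xb, yb, Xb))    # apparent on resample
    perf_orig = r2(y, fit_predict(Xb, yb, X))      # tested on original data
    optimism += (perf_boot - perf_orig) / B

print(f"apparent R^2 = {apparent:.3f}, "
      f"optimism-corrected = {apparent - optimism:.3f}")
```

The corrected index is typically noticeably lower than the apparent one here, illustrating why apparent performance alone is a misleading measure of how a model will fare on new subjects.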


  1. Be familiar with modern methods for fitting multivariable regression models:
    1. accurately
    2. at a level of complexity the sample size will allow, without overfitting
    3. uncovering complex non-linear or non-additive relationships
    4. testing for and quantifying the association between one or more predictors and the response, with possible adjustment for other factors
  2. Be able to validate models for predictive accuracy and to detect overfitting
  3. Be able to interpret fitted models using both parameter estimates and graphics
  4. Be able to critique the literature to detect models that are likely to be unreliable


  1. Planning for Modeling
  2. Notation for Regression Models
  3. Interpreting Model Parameters
    1. Nominal Predictors
    2. Interactions
  4. Relaxing Linearity Assumption for Continuous Predictors
    1. Simple Nonlinear Terms
    2. Splines for Estimating Shape of Regression Function and Determining Predictor Transformations
    3. Cubic Spline Functions
    4. Restricted Cubic Splines
    5. Nonparametric regression
    6. Advantages of Splines over Other Methods
  5. Tests of Association
  6. Assessment of Model Fit
    1. Regression Assumptions
    2. Modeling and Testing Interactions
  7. Missing Data
    1. Types of Missingness
    2. Understanding Patterns of Missing Values
    3. Problems with Simple Alternatives to Imputation
    4. Strategies for Developing Imputation Algorithms
    5. Single Conditional Mean Imputation
    6. Multiple Imputation
    7. R Software for Fitting Models and Adjusting Variances for Multiple Imputation
  8. Multivariable Modeling Strategy
    1. Pre-Specification of Predictor Complexity
    2. Variable Selection
    3. Overfitting and Limits on Number of Predictors
    4. Shrinkage
    5. Data Reduction
  9. Resampling, Validating, Describing, and Simplifying the Model
    1. The Bootstrap
    2. Model Validation
    3. Graphically Describing the Fitted Model
    4. Simplifying the Model by Approximating It
  10. R Design package
  11. Interactive Case Study: Binary Logistic Model for Survival of Titanic Passengers
    1. Missing Data
    2. Nonparametric Regression
    3. Development of Logistic Model
    4. Multiple Imputation to Handle Missing Passenger Ages
  12. Interactive Case Study: Development of a Long-Term Survival Model for Critically Ill Patients
  13. Case Study in Cox Regression
    1. Choosing the Number of Parameters
    2. Checking Proportional Hazards
    3. Testing Interactions
    4. Describing Predictor Effects
    5. Validating the Model
    6. Presenting the Model


Dr. Harrell is Professor of Biostatistics and Statistics at the Dept. of Biostatistics, Vanderbilt University School of Medicine, Nashville, TN. He received his Ph.D. in biostatistics from the University of North Carolina, Chapel Hill in 1979, where he studied under P.K. Sen. Dr. Harrell has been involved in statistical computing since 1969 and is the author of many R functions and SAS procedures. Since 1973 he has been involved in medical applications of statistics, especially in the area of survival analysis and clinical prediction modeling. He is an editorial consultant for the Journal of Clinical Epidemiology, is on the editorial board of Statistics in Medicine, is co-managing editor of the new journal Health Services and Outcomes Research Methodology, and is a consultant to FDA.


Participants will receive copies of the 180 slides that will be presented and a copy of the 500-page book manuscript on which the course is based, Regression Modeling Strategies written by the instructor. See http://hesweb1.med.virginia.edu/biostat/rms for information about this text.


Regression models are frequently used to develop diagnostic, prognostic, and health resource utilization models in clinical, health services, outcomes, pharmacoeconomic, and epidemiologic research, and in a multitude of non-health-related areas. Regression models are also used to adjust for patient heterogeneity in randomized clinical trials, to obtain tests that are more powerful and valid than unadjusted treatment comparisons.

Models must be flexible enough to fit nonlinear and non-additive relationships, but unless the sample size is enormous, the approach to modeling must avoid common problems with data mining or data dredging that result in overfitting and a failure of the predictive model to validate on new subjects.

All standard regression models have assumptions that must be verified for the model to have power to test hypotheses and for it to be able to predict accurately. Of the principal assumptions (linearity, additivity, distributional), this short course will emphasize methods for assessing and satisfying the first two. Practical but powerful tools are presented for validating model assumptions and presenting model results. This course provides methods for estimating the shape of the relationship between predictors and response using the widely applicable method of augmenting the design matrix using restricted cubic splines.
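For concreteness, a minimal sketch of how restricted cubic splines augment the design matrix, written in Python rather than the course's R; the basis follows the truncated-power form used in Harrell's texts, with knot locations and the scaling constant chosen for illustration. With k knots, one predictor contributes k-1 columns, and the fitted curve is constrained to be linear beyond the outer knots.

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis (truncated-power form).

    Returns an (n, k-1) matrix: the linear term plus k-2 nonlinear
    terms, constructed so the curve is linear beyond the outer knots.
    """
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    k = len(t)
    pos3 = lambda u: np.where(u > 0, u, 0.0) ** 3  # (u)_+^3
    cols = [x]
    denom = t[-1] - t[-2]
    for j in range(k - 2):
        # Combination of truncated cubics whose cubic and quadratic
        # coefficients cancel beyond the last knot.
        term = (pos3(x - t[j])
                - pos3(x - t[-2]) * (t[-1] - t[j]) / denom
                + pos3(x - t[-1]) * (t[-2] - t[j]) / denom)
        cols.append(term / (t[-1] - t[0]) ** 2)  # scale to the range of x
    return np.column_stack(cols)

# Example: 5 knots -> 4 columns, so the predictor spends 4 d.f.
x = np.linspace(0, 10, 200)
knots = np.array([0.5, 2.5, 5.0, 7.5, 9.5])
X = rcs_basis(x, knots)
print(X.shape)  # (200, 4)
```

These columns are simply appended to the design matrix and fitted by any standard regression routine; testing the k-2 nonlinear coefficients against zero is a test of linearity.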


[1] F. E. Harrell, K. L. Lee, and D. B. Mark. Multivariable prognostic models: Issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Statistics in Medicine, 15:361--387, 1996.
[2] F. E. Harrell, P. A. Margolis, S. Gove, K. E. Mason, E. K. Mulholland, D. Lehmann, L. Muhe, S. Gatchalian, and H. F. Eichenwald. Development of a clinical prediction model for an ordinal outcome: The World Health Organization ARI Multicentre Study of clinical signs and etiologic agents of pneumonia, sepsis, and meningitis in young infants. Statistics in Medicine, 17:909--944, 1998.
[3] A. Spanos, F. E. Harrell, and D. T. Durack. Differential diagnosis of acute meningitis: An analysis of the predictive value of initial observations. Journal of the American Medical Association, 262:2700--2707, 1989.