Chapter 2: Factor Analysis

Assessment of model fit

A factor analysis model implies a model for the variances and covariances of the observed indicators. This model is usually non-saturated, meaning that it has fewer parameters than there are distinct observed variances and covariances. The model is therefore parsimonious, but it will not exactly reproduce the observed variances and covariances. An exact fit for these quantities is provided by the "saturated model", which estimates each of them separately by the corresponding sample variance or covariance of the observed items.
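
As a small illustration of this parameter counting, the following Python sketch uses a hypothetical example with p = 6 items and a one-factor model in which each item has one free loading and one unique variance (the factor variance fixed to 1 for identification); the numbers are chosen for illustration only.

    # Hypothetical illustration: parameter counts for a one-factor model
    # with p = 6 observed items (numbers chosen for illustration only).
    p = 6

    # The saturated model estimates every distinct variance and covariance.
    n_saturated = p * (p + 1) // 2     # 21

    # A one-factor model estimates p loadings and p unique variances.
    n_factor_model = 2 * p             # 12

    # Degrees of freedom available for assessing overall goodness of fit.
    df = n_saturated - n_factor_model  # 9

    print(n_saturated, n_factor_model, df)  # 21 12 9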

The goodness of fit of a factor analysis model can be assessed by comparing the estimated variance-covariance matrix implied by the model with the sample variance-covariance matrix from the saturated model. If the two are relatively similar, the factor analysis model is judged to fit well; if relatively different, the model is judged to fit poorly. Several different methods of model assessment may be used to make this comparison. Here we briefly discuss a few of them. More information on them and on other methods of model assessment can be found in, for example, [Kap08].
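
When ML estimation is used, one way to formalise this comparison is through the ML discrepancy function, log det(Sigma) - log det(S) + tr(S Sigma^-1) - p, which equals zero when the model-implied matrix Sigma coincides with the sample matrix S and grows as the two diverge. The sketch below illustrates this; the 2 x 2 matrices are placeholders made up for illustration, not estimates from any real data.

    import numpy as np

    def ml_discrepancy(S, Sigma):
        """ML discrepancy between a sample covariance matrix S and a
        model-implied covariance matrix Sigma: zero when the two matrices
        are identical, positive and increasing as they diverge."""
        p = S.shape[0]
        _, logdet_S = np.linalg.slogdet(S)
        _, logdet_Sigma = np.linalg.slogdet(Sigma)
        return logdet_Sigma - logdet_S + np.trace(S @ np.linalg.inv(Sigma)) - p

    # Placeholder matrices for illustration only (not real estimates).
    S = np.array([[1.0, 0.4], [0.4, 1.0]])
    Sigma = np.array([[1.0, 0.35], [0.35, 1.0]])
    print(ml_discrepancy(S, S))      # 0.0: exact fit
    print(ml_discrepancy(S, Sigma))  # small positive value: slight lack of fit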

The chi-squared test of overall goodness of fit is (when ML estimation is used to fit the model) the likelihood ratio test between the fitted and the saturated models. It tests the null hypothesis that the factor analysis model fits the data. If the p-value of the test is smaller than α, the null hypothesis is rejected at a 100α% significance level and we conclude that the model does not fit the data.
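
As a minimal sketch of this decision rule, the snippet below computes the p-value of the chi-squared statistic and compares it with alpha = 0.05; the test statistic and its degrees of freedom are made-up values standing in for the output of a fitted model.

    from scipy.stats import chi2

    # Made-up output from a fitted factor analysis model.
    test_stat = 18.3   # chi-squared goodness-of-fit statistic
    df = 9             # degrees of freedom of the test
    alpha = 0.05

    p_value = chi2.sf(test_stat, df)   # upper-tail probability
    if p_value < alpha:
        print(f"p = {p_value:.3f}: reject the hypothesis that the model fits")
    else:
        print(f"p = {p_value:.3f}: no evidence against the fit of the model")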

A problem with the chi-squared test is that its power increases with the sample size, so that in large samples it rejects models very easily, even when we would otherwise conclude (e.g. by examining the differences between the fitted and sample correlations) that the apparent lack of fit is small in magnitude. This behaviour of the test has motivated the development of other tools of model assessment which are meant to be less sensitive to small amounts of lack of fit.

The Root Mean Square Error of Approximation (RMSEA) also quantifies a comparison between the fitted and saturated variance-covariance matrices. It takes non-negative values; as a rule of thumb, values below 0.05 are considered to indicate good fit, values between 0.05 and 0.1 moderate fit, and values larger than 0.1 poor fit.
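
A common formula for the RMSEA combines the chi-squared statistic, its degrees of freedom and the sample size, as in the sketch below; conventions differ slightly across software (for example n versus n - 1 in the denominator), and the numbers used here are made up purely for illustration.

    import math

    def rmsea(chi2_stat, df, n):
        """RMSEA from the chi-squared statistic, its degrees of freedom
        and the sample size n (using the n - 1 convention)."""
        return math.sqrt(max(chi2_stat - df, 0.0) / (df * (n - 1)))

    # Made-up numbers for illustration.
    print(rmsea(chi2_stat=18.3, df=9, n=500))  # about 0.046: good fit by the rule of thumb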

The Comparative Fit Index (CFI) compares the fitted model to a "null model" which specifies that all of the observed indicators are uncorrelated with each other. It takes values between 0 and 1, with large values indicating better fit of the model. Values below 0.9 may be taken to indicate poor fit, and values above 0.95 a good fit.
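
The CFI can be computed from the chi-squared statistics and degrees of freedom of the fitted model and of the null model, as in the sketch below; the values plugged in are again made up for illustration.

    def cfi(chi2_model, df_model, chi2_null, df_null):
        """Comparative Fit Index from the chi-squared statistics of the
        fitted model and of the null (uncorrelated-items) model."""
        d_model = max(chi2_model - df_model, 0.0)
        d_null = max(chi2_null - df_null, 0.0)
        if d_null == 0.0:
            return 1.0
        return 1.0 - d_model / max(d_null, d_model)

    # Made-up numbers for illustration.
    print(cfi(chi2_model=18.3, df_model=9, chi2_null=480.0, df_null=15))  # about 0.98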

The AIC and BIC "information criteria" may also be used to compare any fitted models for the same observed items, for example models with 1 vs. 2 factors. For each of these statistics, models with smaller values are preferred. A model with a small value of AIC or BIC is judged to strike a good balance between goodness of fit and parsimony, i.e. to achieve a reasonable goodness of fit with a reasonably small number of parameters.
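
The sketch below shows the standard definitions of AIC and BIC in terms of the maximized log-likelihood and the number of parameters, applied to made-up values for hypothetical 1-factor and 2-factor models; here the 2-factor model would be preferred by both criteria.

    import math

    def aic(loglik, n_params):
        """Akaike information criterion: -2 log-likelihood + 2 * parameters."""
        return -2.0 * loglik + 2.0 * n_params

    def bic(loglik, n_params, n_obs):
        """Bayesian information criterion: -2 log-likelihood + log(n) * parameters."""
        return -2.0 * loglik + math.log(n_obs) * n_params

    # Made-up log-likelihoods and parameter counts for 1- and 2-factor models.
    models = {"1 factor": (-4321.0, 12), "2 factors": (-4296.0, 17)}
    n_obs = 500
    for name, (loglik, n_params) in models.items():
        print(name, round(aic(loglik, n_params), 1), round(bic(loglik, n_params, n_obs), 1))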

Standard likelihood ratio (LR) tests can also be used to compare nested pairs of models for the same indicators, for example models with and without zero constraints on some parameters. In any such comparison, the null hypothesis is that the smaller (more parsimonious) of the two models fits as well as the larger (less parsimonious) model. Not all pairs of models can be tested in this way; in particular, an LR test cannot be used to compare factor analysis models with different numbers of factors, such as a 1-factor model against a 2-factor model. With large samples, this test too is often sensitive to even small amounts of lack of fit.
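
A minimal sketch of such a test is given below, assuming that the maximized log-likelihoods of the two nested models are available from software output (the values are made up): the LR statistic is twice the difference in log-likelihoods and is referred to a chi-squared distribution with degrees of freedom equal to the number of constrained parameters.

    from scipy.stats import chi2

    # Made-up log-likelihoods for two nested models: the smaller model fixes
    # some parameters of the larger model to zero.
    loglik_small = -4305.2   # more parsimonious model
    loglik_large = -4302.8   # less parsimonious model
    extra_params = 3         # number of parameters freed in the larger model

    lr_stat = 2.0 * (loglik_large - loglik_small)   # likelihood ratio statistic
    p_value = chi2.sf(lr_stat, extra_params)
    print(f"LR = {lr_stat:.2f}, p = {p_value:.3f}")  # no evidence against the smaller model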


References