# The random coefficient model

This model is also known as the random slope model. Again, it can be represented by one level 1 equation and several level 2 equations, depending upon the number of random coefficients. It is not advisable to define all level 1 explanatory variables as random at once, i.e. free to vary across the level 2 units. The most important reason for this is that the number of random parameters to be estimated can increase dramatically, as we will see later. A second reason is that the model may fail to converge.

The variance component model can be changed into a random coefficient model by adding a j-subscript to the regression coefficient in the level 1 equation and adding a second level 2 equation, with the regression coefficient (the slope) as the dependent variable.

Yij = β0j + β1j*Xij + eij
β0j = β0 + u0j
β1j = β1 + u1j

The model can be expressed in a single equation:

Yij = β0 + β1*Xij + (u0j + u1j*Xij + eij)

Note that the single residual term from the OLS regression model has now been replaced by the three terms in parentheses. The new random part includes the explanatory variable (X), which creates heteroscedasticity: the level 2 contribution to the residual variance is now a function of X.
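To see this concretely, the variance of the combined residual u0j + u1j*Xij + eij is σu0² + 2*σu01*X + σu1²*X² + σe², which depends on X. A small numpy simulation, using illustrative (assumed) values for the variance components, confirms that the residual spread grows with X:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (assumed) variance components
sigma_u0, sigma_u1, sigma_u01, sigma_e = 1.0, 0.5, 0.2, 0.8

# Covariance matrix of the level 2 residuals (u0j, u1j)
cov_u = np.array([[sigma_u0**2, sigma_u01],
                  [sigma_u01,   sigma_u1**2]])

n_groups = 20000
u = rng.multivariate_normal([0.0, 0.0], cov_u, size=n_groups)  # rows: (u0j, u1j)

def total_residual_var(x):
    # Var(u0j + u1j*x + eij) = sigma_u0^2 + 2*sigma_u01*x + sigma_u1^2*x^2 + sigma_e^2
    return sigma_u0**2 + 2 * sigma_u01 * x + sigma_u1**2 * x**2 + sigma_e**2

for x in (0.0, 1.0, 2.0):
    resid = u[:, 0] + u[:, 1] * x + rng.normal(0.0, sigma_e, n_groups)
    print(f"x={x}: empirical var={resid.var():.3f}, theoretical={total_residual_var(x):.3f}")
```

The empirical variances track the theoretical function, growing as X moves away from zero.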

The two level 2 residuals are assumed to be normally distributed with expectation zero and covariance matrix Ωu:

(u0j, u1j) ∼ N(0, Ωu), where Ωu =

[ σu0²  σu01 ]
[ σu01  σu1² ]

and eij ∼ N(0, σe²)

The new term is the covariance between the intercept and the slope residuals, and it is an additional parameter in the model. If several covariates are added to the model and set as free to vary, this will rapidly increase the number of random parameters. This also depends on assumptions about the covariances, a topic we will return to later. The random coefficient model will result in regression lines that may cross each other, as illustrated below:

Figure 3.4. Illustration of crossing regression lines using the random coefficient model
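The crossing follows directly from slope variation: when the slopes β1j differ across groups, the ordering of the group lines at one end of the X range need not hold at the other end. A minimal numpy sketch, with assumed parameter values chosen so that slope variation dominates, shows this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed part and illustrative (assumed) random-effect standard deviations
beta0, beta1 = 2.0, 0.5
sd_u0, sd_u1 = 0.3, 0.5   # slope variation large enough for lines to cross

n_groups = 10
b0 = beta0 + rng.normal(0.0, sd_u0, n_groups)   # group intercepts β0j
b1 = beta1 + rng.normal(0.0, sd_u1, n_groups)   # group slopes β1j

# Group-specific regression lines evaluated at two values of X
y_low = b0 + b1 * (-2.0)
y_high = b0 + b1 * 2.0

# If the ranking of the groups changes between X=-2 and X=2, lines must cross
print("ranking at X=-2:", np.argsort(y_low))
print("ranking at X= 2:", np.argsort(y_high))
print("lines cross:", not np.array_equal(np.argsort(y_low), np.argsort(y_high)))
```

In the variance component model, by contrast, all groups share one slope, so the lines are parallel and can never cross.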

### Adding explanatory variables at both levels

Let us add X2 to the individual level model and Z to the level 2 model.

Yij = β0j + β1j*X1ij + β2*X2ij + eij
β0j = β0 + β3*Zj + u0j
β1j = β1 + β4*Zj + u1j

The one-equation version results from substituting the two random coefficients in the level 1 equation by the right-hand sides of the two level 2 equations:

Yij = (β0 + β3*Zj + u0j) + (β1 + β4*Zj + u1j)*X1ij + β2*X2ij + eij

Rearranging results in this version, with parentheses added to show the random part of the model:

Yij = β0 + β1*X1ij + β2*X2ij + β3*Zj + β4*X1ij*Zj + (u0j + u1j*X1ij + eij)

In this model, the level 2 variable Z can have two effects: it may influence the intercept (through β3) and the regression coefficient of X1 (through β4). Both effects are easiest to see in the three-equation version of the model, while the one-equation version clearly shows that the latter effect is a cross-level interaction between Z and X1.
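The fixed part of this model can be recovered from simulated data with ordinary least squares applied to the one-equation version (the point estimates remain consistent even though OLS ignores the clustering; only the standard errors would be wrong). A numpy sketch with illustrative (assumed) parameter values:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative (assumed) true parameters
beta0, beta1, beta2, beta3, beta4 = 1.0, 2.0, -1.0, 0.5, 0.7
sd_u0, sd_u1, sd_e = 0.5, 0.5, 1.0

n_groups, n_per = 500, 40
z = rng.normal(size=n_groups)                 # level 2 covariate Zj
u0 = rng.normal(0.0, sd_u0, n_groups)         # level 2 residuals
u1 = rng.normal(0.0, sd_u1, n_groups)

g = np.repeat(np.arange(n_groups), n_per)     # group index for each observation
x1 = rng.normal(size=n_groups * n_per)
x2 = rng.normal(size=n_groups * n_per)

# Three-equation model: β0j = β0 + β3*Zj + u0j,  β1j = β1 + β4*Zj + u1j
b0j = beta0 + beta3 * z + u0
b1j = beta1 + beta4 * z + u1
y = b0j[g] + b1j[g] * x1 + beta2 * x2 + rng.normal(0.0, sd_e, n_groups * n_per)

# One-equation fixed part: intercept, X1, X2, Z, and the cross-level interaction X1*Z
X = np.column_stack([np.ones_like(x1), x1, x2, z[g], x1 * z[g]])
est, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimates:", np.round(est, 2))  # close to (1.0, 2.0, -1.0, 0.5, 0.7)
```

Note how the cross-level interaction enters the design matrix simply as the product of the level 1 variable X1 and the level 2 variable Z, expanded to the individual level.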

The random part of the model is unchanged, however. Let us see the result of defining the regression coefficient of X2 as random, i.e. free to vary among level 2 units. Now, the three-equation version will have an additional equation:

Yij = β0j + β1j*X1ij + β2j*X2ij + eij
β0j = β0 + β3*Zj + u0j
β1j = β1 + β4*Zj + u1j
β2j = β2 + β5*Zj + u2j

We substitute as above:

Yij = (β0 + β3*Zj + u0j) + (β1 + β4*Zj + u1j)*X1ij + (β2 + β5*Zj + u2j)*X2ij + eij

Rearranging and placing parentheses around the random part yields:

Yij = β0 + β1*X1ij + β2*X2ij + β3*Zj + β4*X1ij*Zj + β5*X2ij*Zj + (u0j + u1j*X1ij + u2j*X2ij + eij)

We now have two cross-level interactions and a more complex random model. The full set of random parameters at level two is the covariance matrix of the three level 2 residuals:

[ σu0²  σu01  σu02 ]
[ σu01  σu1²  σu12 ]
[ σu02  σu12  σu2² ]

Figure 3.5. The full set of random coefficients at level two

The three variances are in the main diagonal, and the off-diagonal elements are covariances among the level 2 residuals. By adding one random coefficient, the number of random parameters has doubled from three to six. A model with five random coefficients results in 15 random parameters. This may be too much for the software to handle and is not advisable, especially in situations with a limited number of level 2 units. The general advice is to add complexities, such as additional random parameters, one by one and to keep those that appear to improve upon simpler models in the final model.
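The growth in the number of random parameters follows a simple formula: with k random coefficients (counting the random intercept) and an unstructured covariance matrix, there are k variances and k*(k-1)/2 covariances, i.e. k*(k+1)/2 parameters in total. A quick check:

```python
def n_random_level2_params(k):
    """Variances plus covariances for k random coefficients
    (including the random intercept) under an unstructured
    level 2 covariance matrix: k variances + k*(k-1)/2 covariances."""
    return k * (k + 1) // 2

for k in range(1, 6):
    print(k, "random coefficients ->", n_random_level2_params(k), "random parameters")
```

This reproduces the counts in the text: two random coefficients give 3 parameters, three give 6, and five give 15.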

By default, SPSS and Stata handle this problem by estimating only the parameters in the main diagonal, which are normally the most interesting ones, assuming the level 2 residuals to be uncorrelated, whereas MLwiN by default estimates all of them. In MLwiN, the model can be simplified by constraining one or more of the random parameters to zero, and in SPSS and Stata, the default restriction can be relaxed in various ways. To estimate all random parameters, the covariance matrix must be specified as unstructured.
