Why should we correct for measurement errors?

Normally, we have only one measure for each concept involved in a study. If we had several different questions for the same concept, it is conceivable that it would make quite a difference which formulation of the survey question is used. The differences between the responses to different formulations of the same concept suggest that at least some of the responses to these questions contain errors, and indeed that all questions can contain errors. That is why we argue that measurement errors should be corrected for if the results of survey research are to be trusted.

We can illustrate this point using the dependent variable in our example (i.e. satisfaction with democracy), presented in Table 0.1 of the introduction to this module. Together with two other concepts, this concept was measured in three different ways (i.e. with three methods) in the pilot study of ESS Round 1. The purpose was to find out which form provided the best quality and thus to decide which form to use in the ESS Round 1 Main Questionnaire. Although the question wording was exactly the same each time and the same concepts were measured (i.e. satisfaction with the economy, satisfaction with the government and satisfaction with the way democracy functions), the respondents had to express their responses on three different scales. A comparison was made between a bipolar 4-point scale, a partially labelled bipolar 11-point scale and a unipolar 4-point scale. The questions and their scales are presented in Table 1.1.

Table 1.1: The study of the effects of the response scale in the ESS pilot study (2001)

Combining these three concepts with the three methods results in nine different questions. This is a typical example of a multitrait-multimethod (MTMM) experiment [Cam59], of the kind carried out in each round of the ESS for the purpose of evaluating the quality of the questions. These nine questions were put to a sample of 485 British individuals.
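To make the structure of the experiment concrete, the short sketch below simply crosses the three concepts (traits) with the three response scales (methods). It is only an illustration of the design: the labels are informal shorthand, not the official ESS variable names.

```python
from itertools import product

# Three traits (concepts) and three methods (response scales), as in Table 1.1.
# The short labels are informal shorthand, not official ESS variable names.
traits = ["satisfaction_economy", "satisfaction_government", "satisfaction_democracy"]
methods = ["4-point bipolar", "11-point bipolar (endpoints labelled)", "4-point unipolar"]

# Crossing the three traits with the three methods yields the nine
# trait-method combinations that make up the MTMM experiment.
mtmm_items = list(product(traits, methods))
assert len(mtmm_items) == 9

for trait, method in mtmm_items:
    print(f"{trait} measured with {method}")
```

Each of the nine combinations corresponds to one of the survey items in the experiment.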

In this study, the topic of the survey items is the same across all methods. Moreover, the concept measured (i.e. a feeling of satisfaction) is held constant. Only the way in which the respondents are asked to express their feelings varies. The first and third methods use a 4-point scale, while the second method uses an 11-point scale. This also means that the second method has a midpoint on the scale, while the other two do not. Furthermore, the first and second methods use a bipolar scale, while the third method uses a unipolar scale. In addition, the direction of the response categories in the first method is the reverse of that in the second and third methods. Finally, the first and third methods have completely labelled categories, while the second method only has labels at the endpoints of the scale.
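These differences can also be summarised compactly in a small data structure. The sketch below is purely illustrative: the field names and values are informal shorthand for the characteristics just described, not official ESS or SQP codings.

```python
# Illustrative summary of the three response scales compared in the experiment.
# Field names and values are informal shorthand, not official ESS or SQP codes.
methods = {
    "M1 (4-point bipolar)":  {"points": 4,  "polarity": "bipolar",
                              "midpoint": False, "labels": "all categories",
                              "direction": "reversed"},
    "M2 (11-point bipolar)": {"points": 11, "polarity": "bipolar",
                              "midpoint": True,  "labels": "endpoints only",
                              "direction": "standard"},
    "M3 (4-point unipolar)": {"points": 4,  "polarity": "unipolar",
                              "midpoint": False, "labels": "all categories",
                              "direction": "standard"},
}

def differences(a, b):
    """Return the characteristics on which two methods differ."""
    return {k: (methods[a][k], methods[b][k])
            for k in methods[a] if methods[a][k] != methods[b][k]}

# For example, methods 1 and 2 differ in the number of scale points, the
# presence of a midpoint, the labelling of categories and the scale direction.
print(differences("M1 (4-point bipolar)", "M2 (11-point bipolar)"))
```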

There are other aspects in which the requests are similar, although they could have been different. For example, in Table 1.1, direct requests have been selected for the study. However, it is very common in survey research to specify a general request, such as ‘How satisfied are you with the following aspects of society?’, followed by a list of stimuli such as the present economic situation, the national government, and the way democracy functions. Furthermore, all three requests are unbalanced, asking ‘how satisfied’ people are without mentioning the possibility of dissatisfaction. None of them has an explicit ‘don’t know’ option, and none has an introduction or subordinate clauses, which makes the survey items relatively short. For many other characteristics of questions that can vary, we refer to [Sar07].


References