Roberts, C and Allum, N and Sturgis, P (2014) Nonresponse and measurement error in an online panel. In: Online Panel Research. John Wiley & Sons, Ltd, pp. 337-362. ISBN 9781118763520. Official URL: http://dx.doi.org/10.1002/9781118763520.ch15
Abstract
Non-sampling errors, in particular those arising from non-response and the measurement process itself, present a particular challenge to survey methodologists, because it is not always easy to disentangle their joint effects on the data. Given that factors influencing the decision to participate in a survey may also influence respondents' motivation and ability to answer the survey questions, variations in the quality of responses may be caused simultaneously by non-response bias and measurement error. In this study, we examine factors underlying both kinds of error using data from the 2008 ANES Internet Panel. Using interview data and paradata from the initial recruitment survey, we investigate the relationship between recruitment effort (e.g., number of contact attempts; use of refusal conversion efforts), willingness to participate in subsequent panel waves, and the ability and motivation to optimize during questionnaire completion. We find that respondents who were hardest to reach or to persuade to participate in the recruitment interview responded to fewer monthly panel surveys overall and were more likely to stop participating in the panel altogether. They also had higher rates of item non-response in the recruitment interview. Respondents who later stopped participating in the panel were also more likely to give answers of reduced quality in the wave 1 monthly survey (e.g., more midpoint answers, less differentiation between scale points for question batteries, and fewer responses to a check-all-that-apply question format). We then investigated two potential common causes of the observed relationship between the propensity to stop participating in the panel and response quality (interest in computers and 'need to evaluate'), but neither one fully accounted for it. Interest in computers predicted later panel cooperativeness, while need to evaluate was related both to response quality and to the propensity to attrit. Finally, we examine whether the panelists most likely to stop participating in the panel are also more likely to learn shortcutting strategies over time to reduce the effort needed to complete monthly surveys. We find some support for this hypothesis. We discuss our findings and their implications for the design of future online panels.
| Item Type: | Book Section |
| --- | --- |
| Uncontrolled Keywords: | Probability-based panel; total survey error; measurement error; panel attrition; common cause; satisficing; panel conditioning; level of effort; paradata; non-differentiation; midpoint responding; item non-response; check-all-that-apply; need to evaluate; shortcutting |
| Subjects: | H Social Sciences > HM Sociology |
| Divisions: | Faculty of Social Sciences; Faculty of Social Sciences > Sociology and Criminology, Department of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 17 Nov 2014 10:51 |
| Last Modified: | 16 May 2024 17:31 |
| URI: | http://repository.essex.ac.uk/id/eprint/11411 |