Research Repository

Nonresponse and measurement error in an online panel

Roberts, C., Allum, N. and Sturgis, P. (2014) 'Nonresponse and measurement error in an online panel.' In: Online Panel Research. John Wiley & Sons, Ltd, pp. 337-362. ISBN 9781118763520

Full text not available from this repository.


Non-sampling errors, in particular those arising from non-response and from the measurement process itself, present a special challenge to survey methodologists because it is not always easy to disentangle their joint effects on the data. Given that the factors influencing the decision to participate in a survey may also influence respondents' motivation and ability to answer the survey questions, variation in the quality of responses may be caused simultaneously by non-response bias and measurement error. In this study, we examine factors underlying both kinds of error using data from the 2008 ANES Internet Panel. Using interview data and paradata from the initial recruitment survey, we investigate the relationship between recruitment effort (e.g. number of contact attempts; use of refusal conversion efforts), willingness to participate in subsequent panel waves, and the ability and motivation to optimize during questionnaire completion.

We find that respondents who were hardest to reach, or to persuade to take part in the recruitment interview, responded to fewer monthly panel surveys overall and were more likely to stop participating in the panel altogether. They also had higher rates of item non-response in the recruitment interview. Respondents who later stopped participating in the panel were also more likely to give answers of reduced quality in the wave 1 monthly survey (e.g. more midpoint answers, less differentiation between scale points in question batteries, and fewer responses to a check-all-that-apply question format). We then investigated two potential common causes of the observed relation between the propensity to stop participating in the panel and response quality (interest in computers and 'need to evaluate'), but neither fully accounted for it. Interest in computers predicted later panel cooperativeness, while need to evaluate was related both to response quality and to the propensity to attrit.

Finally, we examine whether the panelists most likely to stop participating in the panel are also more likely to learn shortcutting strategies over time, in order to reduce the effort needed to complete the monthly surveys. We find some support for this hypothesis. We conclude by discussing our findings and their implications for the design of future online panels.

Item Type: Book Section
Uncontrolled Keywords: Probability-based panel, total survey error, measurement error, panel attrition, common cause, satisficing, panel conditioning, level of effort, paradata, non-differentiation, midpoint responding, item non-response, check-all-that-apply, need to evaluate, shortcutting
Subjects: H Social Sciences > HM Sociology
Divisions: Faculty of Social Sciences > Sociology, Department of
Date Deposited: 17 Nov 2014 10:51
Last Modified: 04 Jan 2019 16:15
