Brief 36: Face-to-Face Interviews, Cognitive Skill, and Non-response
EGAP Researcher: Lynn Vavreck
Other Authors: Andrew Gooch
Research Question: Does the mode of survey administration affect whether people answer individual questions?
Preparer: Seth Ariel Green
Background:
Item non-response is a major issue in survey research because it is often unclear what a non-response indicates. Some people might not respond because they are bored with answering questions, while others genuinely may not know the answer. For in-person surveys, non-response might also depend on the respondent's comfort with the interviewer or the surroundings, their desire to please the interviewer, susceptibility to social pressure, or even their ability to understand the questions. Given the increasingly large share of surveys conducted over the Internet without an interviewer, and the ongoing transition of in-person surveys to online modes, researchers would benefit from accurate estimates of differential non-response across modes.
Research Design:
Gooch and Vavreck interviewed 1,010 adults at Television City in the MGM Grand Hotel, a research facility in Las Vegas run by the CBS broadcasting network. Respondents received $5 Starbucks gift cards in exchange for answering the survey. After people agreed to participate, they were randomly assigned either to be asked questions (similar to those in the American National Election Studies) by a professional interviewer in simulated living rooms, or to take the exact same survey on computers in an office. On both the computer and in-person surveys, some of the questions offered a "don't know" option while some did not, but the form of each question always matched across the modes. The surveys also included several factual questions to gauge whether non-response was related to accurate answers.

The research design also included a measure of cognitive skill. Using a subset of the WORDSUM vocabulary items (which have appeared on the General Social Survey since 1972), respondents were asked to read a word and select the closest synonym from five choices. The words were selected from different difficulty levels to provide discrimination, and in the in-person interviews, interviewers also read the word and the choices aloud.
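To make the assignment step concrete, the sketch below implements complete random assignment of respondents to the two modes. It is a minimal illustration assuming a 50/50 split; the function name, seed, and mode labels are hypothetical, not the authors' code.

```python
import random

def assign_mode(participant_ids, seed=0):
    """Randomly split respondents between the two interview modes."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    # Labels and the even split are assumptions for illustration.
    assignments = {pid: "face_to_face" for pid in ids[:half]}
    assignments.update({pid: "self_complete" for pid in ids[half:]})
    return assignments

# 1,010 respondents, as in the study.
modes = assign_mode(range(1010))
```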
Results:
The in-person interviews featured much higher rates of item non-response. This held true both for questions in which "don't know" was a possible response and for those in which it was not (and thus had to be volunteered, or the question simply skipped). On average, across all questions in which "I don't know" was an explicit option, 20% of people interviewed in person did not answer, versus 17% on a computer. When "I don't know" was not an explicit answer, meaning subjects either volunteered it in person or just skipped a question, those numbers were 8% and 6%, respectively. The pattern held for political and non-political questions alike: the share responding "don't know" when asked who wrote Moby Dick was 15 percentage points higher in person, and the gap was 12 percentage points for a question about the identity of the current Vice President.
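As a back-of-the-envelope check on a gap of this size, the sketch below compares the two reported averages with a two-proportion z-test. The per-arm sample size (an even split of the 1,010 respondents) is an assumption, and the reported rates are averages across questions, so this illustrates the rough magnitude only, not the study's actual analysis.

```python
from statsmodels.stats.proportion import proportions_ztest

# Reported average "don't know" rates when the option was explicit:
# 20% face-to-face vs. 17% self-completed. The ~505 per arm is an
# assumed even split; the brief does not report per-arm counts.
n_per_arm = 1010 // 2
counts = [round(0.20 * n_per_arm), round(0.17 * n_per_arm)]
nobs = [n_per_arm, n_per_arm]

stat, pval = proportions_ztest(counts, nobs)
print(f"z = {stat:.2f}, p = {pval:.3f}")
```

Because each respondent answers many questions, responses are clustered, and a simple z-test like this understates the uncertainty; it is shown only to make the size of the gap tangible.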
The factual questions showed that the decrease in non-response in the self-completed survey was matched by an increase in the percentage of correct answers compared to the in-person interviews. The most important finding, however, was that the increase in non-response in the in-person mode was driven by people with low levels of cognitive skill. Moving from high to low levels of ability, an otherwise average respondent can be up to six times more likely to say "don't know" in a face-to-face interview than in a self-completed survey, depending on the type of question.
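An interaction of mode and cognitive skill of the kind described could be estimated with a logistic regression along the lines of the sketch below. The simulated data, coefficient values, and column names are assumptions made for illustration; they are not the study's data or specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1010

# Hypothetical data: a mode indicator and a 0-10 vocabulary (WORDSUM) score.
df = pd.DataFrame({
    "in_person": rng.integers(0, 2, n),
    "wordsum": rng.integers(0, 11, n),
})

# Simulate more "don't know" responses in person, concentrated among
# low-skill respondents (all coefficients are made up for illustration).
logit = (-2.0 + 1.5 * df["in_person"]
         - 0.10 * df["wordsum"]
         - 0.15 * df["in_person"] * df["wordsum"])
p_dk = 1 / (1 + np.exp(-logit))
df["dont_know"] = rng.binomial(1, p_dk)

# A negative in_person:wordsum coefficient means the face-to-face
# penalty shrinks as cognitive skill rises.
model = smf.logit("dont_know ~ in_person * wordsum", data=df).fit()
print(model.summary())
```

Comparing the fitted probabilities of "don't know" at low versus high wordsum values within each mode is one way to express the gap as a ratio like the "up to six times more likely" figure reported above.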
Policy Implications:
The findings imply that a certain type of respondent has a difficult time in the in-person interview and is better able to answer the same survey questions in a self-completed mode. This has implications for canonical findings suggesting that many Americans know little about politics, because the surveys on which those findings were based were conducted via in-person interviews. The result bears on the work of polling organizations, the media, and census bureaus, and can be especially relevant around an election.
There is more item non-response in in-person interviews, and that non-response is driven by people with low levels of cognitive skill. The increased rate of correct responses to fact-based questions in the self-completed surveys compared to the in-person interviews, with the concomitant decrease in "don't knows," suggests that people are not simply guessing at random in the self-completed mode. The in-person interview seems to keep people from answering even when they know the correct response.