December 20, 2022

Knowledge-testing survey questions may curb later responses

Pocket Science: Exploring the 'What,' 'So what' and 'Now what' of Husker research

Scott Schrage | University Communication and Marketing

Welcome to Pocket Science: a glimpse at recent research from Husker scientists and engineers. For those who want to quickly learn the “What,” “So what” and “Now what” of Husker research.

What?

Persuading people to complete a survey can prove difficult enough in its own right. But the challenges hardly end there.

Survey designers are wary of introducing questions that put too much of a cognitive burden on respondents, who may react to subsequent questions by not responding or simply agreeing with — acquiescing to — whatever opinion is presented. Non-response and acquiescence can hurt the validity of a survey, yielding results that don’t accurately capture the beliefs or knowledge of a respondent.

The issue is especially relevant in omnibus surveys, which include questions from multiple clients who pay to add their items but often have little say over where those items will land in the surveys. If a client’s items follow unrelated questions that are likely to induce non-response or acquiescence, the quality of their data can suffer as a result.

To date, little research has looked at whether the placement of two burdensome question types, in particular, might pose problems for the clients whose items follow them:

  • Social network questions, which generally ask respondents to list and describe their personal connections
  • Knowledge questions, which ask respondents to select or provide a factually correct answer

So what?

Nebraska doctoral student Angelica Phillips and RTI International’s Rachel Stenger, a Husker alumna, went searching for answers in the 2010 General Social Survey, administered every two years to a national sample of U.S. adults. More than 1,200 of the respondents received a version that varied the placement of the social network questions, with some surveys featuring those questions higher up and others further down. A subset of those respondents also received science-based questions that tested their knowledge.

Contrary to the team’s expectations, neither the placement of the social network questions nor the presence of knowledge questions produced any real signs of acquiescence. In another surprise, respondents showed little evidence of non-response after encountering the social network questions.

But that non-response did emerge among respondents who received the knowledge questions, as those respondents proceeded to skip more of the questions that came after, an indication of the burden that survey designers try to avoid. Phillips and Stenger also found that age moderated the effect: non-response was greater among respondents over 65 and those aged 45-64 than among respondents aged 18-44.

Now what?

Given the non-response that followed the knowledge questions, Phillips and Stenger suggested introducing those items near the end of surveys to help limit the number of questions that go unanswered. Future research should also investigate whether the effect manifests in other types of surveys, the researchers said.