Welcome to Pocket Science: a glimpse at recent research from Husker scientists and engineers. For those who want to quickly learn the “What,” “So what” and “Now what” of Husker research.
Academic departments often administer low-stakes assessments to track how well students are learning and retaining core concepts as they progress through a program.
The assessments are usually ungraded, making them easier to administer online. But without a grade to motivate them, some students don’t invest full effort in the assessments, potentially yielding deceptively low scores that don’t reflect their conceptual understanding. Those misleading scores can then misguide departmental efforts to improve curriculum and instruction.
Yet few studies have investigated how best to account for motivation by filtering out misleading scores.
Nebraska’s Brian Couch and Crystal Uminski analyzed data from more than 8,000 undergraduates who took GenBio-MAPS, a biology-focused assessment co-developed by Couch. That data included students’ ratings of their own effort levels; the percentage of questions that students spent a reasonable amount of time answering; and overall time spent on the assessment.
The researchers explored multiple ways of filtering the data, with the goal of culling only the low scores that clearly stemmed from low effort. Couch and Uminski concluded that a dual filter — one that removed students who either rushed through at least 40% of questions or finished the entire assessment in an unreasonably short time — best captured the low-effort behavior most responsible for low scores.
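As a rough illustration of how such a dual filter might work, the sketch below flags a student's record when either condition is met. The 40% rushed-question cutoff comes from the description above; the specific time thresholds, the data, and the function names are hypothetical placeholders, not values from the study.

```python
# Illustrative sketch of a "dual filter" for low-effort assessment records.
# The 40% rushed-question cutoff is from the study's description; the time
# thresholds below are hypothetical placeholders chosen for this example.

RUSHED_SECONDS = 10        # hypothetical: under this many seconds per question counts as "rushed"
MIN_TOTAL_SECONDS = 300    # hypothetical: minimum plausible time for the whole assessment

def is_low_effort(question_times):
    """Flag a record if >= 40% of questions were rushed OR total time is too short."""
    rushed_fraction = sum(t < RUSHED_SECONDS for t in question_times) / len(question_times)
    total_time = sum(question_times)
    return rushed_fraction >= 0.40 or total_time < MIN_TOTAL_SECONDS

# Hypothetical per-question response times (seconds) for three students.
records = {
    "student_a": [45, 80, 60, 95, 70],   # engaged throughout
    "student_b": [3, 4, 2, 5, 60],       # rushed most questions
    "student_c": [80, 70, 60, 50, 40],   # slower but steady
}

# Keep only records that pass the filter.
kept = {student: times for student, times in records.items()
        if not is_low_effort(times)}
```

Under these assumptions, only "student_b" would be removed: most of that student's questions fall under the rushed-time threshold, triggering the first condition of the filter.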
Filtering by self-reported effort level, though easy to apply, also removed a notable number of high scores. For that reason, the researchers suggested avoiding it as a filtering variable.
The dual filter could be adapted and applied to other ungraded assessments of conceptual understanding, the researchers said, including those beyond the realm of biology.
Identifying and removing low-effort scores could help faculty better determine what students are learning, what they’re not, and what should be maintained or changed, accordingly.