Updates to Alias v015 Effort Scoring

We’re excited to announce a significant improvement to Roundtable Alias: continuous effort scoring with user-defined thresholding.

6/28/2024. In Alias v013, we introduced a new content flag to identify "low-effort" responses: responses that are coherent and on-topic but minimally informative, which may indicate that the participant is not engaged with the survey.
We introduced this flag with a predetermined threshold for what constituted a "low-effort" response, applied uniformly across all surveys: any response falling below it was automatically flagged. As we worked with researchers, we realized that a one-size-fits-all threshold wasn't ideal. Different survey types, research goals, and respondent populations call for different standards of acceptable effort: a response considered low-effort in an in-depth qualitative study could be perfectly acceptable in a quick pulse survey. Researchers understand their own research context best and are best positioned to decide what level of effort is appropriate for their study.
To address this, Alias v015 returns an effort score for each question, ranging from 1 (lowest effort) to 10 (highest effort). The API also lets users define custom thresholds for flagging low-effort responses, via a new low_effort_threshold parameter. When making an API request, researchers can specify this threshold as an integer between 1 and 10; any response with an effort score at or below it will be flagged as "low-effort."
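As a rough sketch of how this might look in practice, the snippet below assembles a request body with a custom threshold. Only the `low_effort_threshold` parameter and its 1–10 range come from this post; the `build_alias_request` helper, the `responses` field, and the payload shape are illustrative assumptions, not the documented API schema.

```python
import json

def build_alias_request(responses, low_effort_threshold=3):
    """Assemble a hypothetical Alias request body with a custom
    low-effort threshold. Per the announcement, the threshold is an
    integer from 1 to 10; responses scored at or below it are flagged."""
    if not (isinstance(low_effort_threshold, int)
            and 1 <= low_effort_threshold <= 10):
        raise ValueError("low_effort_threshold must be an integer from 1 to 10")
    return {
        "responses": responses,  # participant answers, keyed by question ID
        "low_effort_threshold": low_effort_threshold,
    }

body = build_alias_request({"q1": "I like the product."}, low_effort_threshold=4)
print(json.dumps(body))
```

A stricter study might raise the threshold to 5 or 6; a quick pulse survey might lower it to 1 or 2.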

Effort score example

Below is an example dataset with responses for each effort rating, 1 through 10. By changing the threshold, you can quickly filter and group participants' responses.
*Example table (columns: Question, Response, Effort Score) showing one sample response at each effort rating, 1 through 10.*
As this example illustrates, both low- and high-effort responses can be problematic. A high effort score, especially one above 8, is often correlated with GPT- or other AI-generated content. In general, researchers should investigate responses with unusual effort scores, just as they double-check other outliers.
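The filtering described above can be sketched as a simple post-hoc grouping step. The effort scores here are made up for illustration, and the cutoffs (a low threshold of 3, a high cutoff of 8 following the note above about scores above 8) are example choices, not defaults of the API.

```python
def group_by_effort(scores, low_threshold=3, high_threshold=8):
    """Split respondents into low-effort (at or below low_threshold),
    typical, and unusually high-effort (above high_threshold) groups."""
    groups = {"low": [], "typical": [], "high": []}
    for respondent, score in scores.items():
        if score <= low_threshold:
            groups["low"].append(respondent)
        elif score > high_threshold:
            groups["high"].append(respondent)
        else:
            groups["typical"].append(respondent)
    return groups

scores = {"p1": 2, "p2": 6, "p3": 9, "p4": 3}
print(group_by_effort(scores))
# p1 and p4 fall at or below the low threshold; p3 merits review as unusually high
```

Re-running this with a different `low_threshold` is all it takes to apply a stricter or looser standard to the same dataset.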
By allowing researchers to set different thresholds for different surveys, we're moving beyond a one-size-fits-all approach to data quality. Researchers can adapt our flags to their goals and populations, saving time while ensuring high-quality data. We welcome your feedback on this new feature and look forward to continuing to build tools that improve survey research.