We have been quick to label AI-generated survey responses as fraud. But is that always fair?
Sometimes, a respondent using AI produces a better answer than they could have written alone. AI could help them organise thoughts, clarify language, and give them the confidence to express ideas they might otherwise hold back.
As researchers, we want responses that are authentic, thoughtful, and meaningful. But should we expect everyone to express themselves perfectly, especially when surveys can be stressful or challenging? Perhaps the real question is not whether AI was used, but whether the response still reflects the person behind it.
Here are five ways AI could help and five ways it could hurt.
When AI Helps
- Clarifying complex topics: AI could help respondents understand technical terms or unfamiliar subjects so they can answer more precisely.
- Overcoming language barriers: For second-language speakers, AI could make it easier to express what they mean clearly and accurately.
- Navigating sensitive subjects: On topics that feel personal, stigmatised, or emotional, AI could give respondents the confidence to express their thoughts more fully.
- Organising thoughts: Some respondents struggle to structure their answers. AI could help them organise ideas into coherent, readable responses without changing the meaning.
- Reducing fatigue in long surveys: In lengthy or repetitive surveys, AI could help maintain focus and consistency, which might otherwise fade as respondents get tired.
When AI Hurts
- Fabricating experience: AI could invent details the respondent never experienced, which would make the data misleading.
- Homogenising responses: Over-reliance on AI could strip away individuality, making many answers sound the same and reducing the richness of insight.
- Disconnecting from the question: AI could generate answers that are technically correct but fail to directly address the specific question asked.
- Masking disengagement: If a respondent uses AI as a shortcut without caring about the content, the answer might not reflect their own view.
- Hiding contradictions: AI could smooth over contradictions or inconsistencies that might otherwise reveal important truths about how the respondent really feels.
Rethinking What We Value
We need to ask ourselves: are we testing whether respondents can write perfectly, or trying to understand how they think and feel?
If we value only perfectly worded answers, we might exclude respondents who struggle to express themselves. If we accept AI without question, we risk collecting answers that sound good but reveal little.
Perhaps it is worth asking whether some respondents need help to articulate what they think. Could we start by defining what is acceptable and being clearer about how we expect people to respond?
We might also consider designing questions that focus less on polish and more on substance, encouraging people to share what is truly theirs, even if they get help phrasing it.
Authenticity does not always mean perfect grammar or elegant language. Sometimes it appears in the imperfect, messy details that reveal more than any polished answer could.
Final Musings
Not every AI-assisted response is a problem. Some could make our data better, not worse. Perhaps our role is to notice the difference and design research that keeps the human present.
How would you handle a response that is thoughtful and clear but clearly polished by AI?