What If the Problem with Surveys Is That We Make Them?
Survey response rates are dropping, data quality is suffering, and respondents are tuning out. Researchers tweak wording, test incentives, and add interactive elements, but what if the real issue is control?
The Lego Effect tells us that people value things more when they have had a hand in creating them. Lego isn't just about following the instructions.
My mum plays with my nephew's Lego, and she loves choosing her own pieces from across different sets (she still has mine from the 80s!) to construct something new, creative & unique.
But in market research, respondents are often just passive participants: recipients of our carefully crafted questions with no say in the process. What if we allowed them some control, gave them the ability to script, and let them build the survey?
Would they feel more engaged? Would they take the research more seriously? Or would they just create the world's most chaotic, nonsensical questionnaire? And if they did, would that even be a bad thing?
Right, So How Would This Even Work? 🤔
Well, this is the question. I am hoping others can chip in to see where this could go. But hey ho, here are some starter thoughts:
1. “Tell Us What Matters” Mode 🎯
Instead of a static list of questions, respondents define the focus before the survey even starts.
- They pick what they actually care about
- The survey adapts dynamically, building itself in real time
- No wasted questions, only relevant insights
Example:
- Instead of forcing every respondent through 15 questions about soft drinks, we let them opt in to topics they actually engage with
- Do we find that 80% of respondents actually do not care about what we thought was the key issue?
- This approach already works in education. Think of Duolingo's progress reports (there are better examples, but I use it a lot), where users see personalised feedback based on what they actually interact with.
What if surveys worked the same way, beyond the usual profiling parameters we collect?
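To make that concrete, here is a minimal sketch in Python of how a “Tell Us What Matters” flow could assemble a questionnaire. The topics and question bank are entirely invented for illustration, and a real version would adapt in real time rather than read from a hardcoded dictionary:

```python
# Toy "Tell Us What Matters" builder. Topics and questions are invented
# for illustration; a real system would draw on a live question bank.

QUESTION_BANK = {
    "flavour": [
        "Which soft drink flavours do you buy most often?",
        "How willing are you to try a new flavour?",
    ],
    "price": [
        "What feels like a fair price for a 500ml soft drink?",
    ],
    "sustainability": [
        "Does recyclable packaging change which drink you pick?",
    ],
}

def build_survey(chosen_topics):
    """Assemble a questionnaire from only the topics the respondent opted into."""
    return [q for topic in chosen_topics for q in QUESTION_BANK.get(topic, [])]

# A respondent who only cares about flavour and price never sees
# the sustainability questions at all.
for question in build_survey(["flavour", "price"]):
    print(question)
```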
2. The “Reverse Survey”
What if instead of us asking them questions, they asked us what they wanted to know?
- We let respondents write the questions, not just answer them.
- AI clusters their submissions, finds common patterns, and builds the real study from their input (I guess this is like how ‘qual before quant’ works – but it is slightly different I feel).
- Instead of guiding responses, we get unfiltered curiosity. What do consumers actually want to explore?
Example:
- Instead of asking “How important is sustainability in your purchase decisions?” we ask:
- “What would you ask a company about its sustainability practices?”
- Do we get more honest, meaningful questions than researchers would have written?
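As a rough sketch of what the back end might do with those submissions, here is one way to cluster them, assuming a standard scikit-learn pipeline (TF-IDF plus k-means). The submitted questions are made up, and a production version would likely use embeddings and more careful theme labelling:

```python
# Rough sketch of the "reverse survey" clustering step: group the questions
# respondents wrote so common themes can seed the real study.
# All submissions below are invented; assumes scikit-learn is installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

submissions = [
    "Where do you actually source your packaging?",
    "Is your supply chain audited by anyone independent?",
    "Why does the recyclable version cost more?",
    "Who checks working conditions in your factories?",
    "Do you offset shipping emissions or just say you do?",
    "Would a refill scheme make your products cheaper?",
]

# Turn free-text questions into TF-IDF vectors, then group similar ones.
vectors = TfidfVectorizer(stop_words="english").fit_transform(submissions)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Print each discovered theme with the questions that landed in it.
for cluster in sorted(set(labels)):
    print(f"Theme {cluster}:")
    for question, label in zip(submissions, labels):
        if label == cluster:
            print("  -", question)
```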
3. The “Minimalist Survey” Challenge ✂️
If respondents had to design the shortest possible survey, what would it look like?
- How many questions would they actually keep?
- What is the absolute minimum we need to get useful data?
- Could we run a study with only one question but make it the right one?
Example:
- Instead of “Rate this product on 12 attributes,” we ask:
- 💡 “What is the one thing that matters most about this product?”
- Does this approach unlock deeper insights with fewer words?
Isn't This Just an AI Agent-Based Prompt Survey? 🤔
Good question. Maybe? While they might seem similar at first, they are actually fundamentally different in how control is distributed and how insights are shaped.
Key Differences Between AI Agent-Based Prompt Surveys & Respondent-Built Surveys (in my eyes)…
Caveat: An expert on Agentic AI for MRX can correct me on any of the below as I would be happy to learn more and be educated. I am in novice mode.
| Feature | AI Agent-Based Prompt Survey | Respondent-Built Survey |
|---|---|---|
| Who controls the survey? | The AI dynamically generates and adapts questions based on prior responses | The respondent actively chooses and shapes the survey content before answering |
| How are questions formed? | AI predicts the next logical question based on real-time data | Respondents manually decide what is important to ask before seeing any questions |
| Flow of questions | Adaptive and conversational. AI fine-tunes based on responses | Pre-set by respondents, meaning the questions are locked in before the survey starts |
| Data flexibility | The AI ensures comparability of responses across users | Responses may be fragmented since each participant could generate a completely different survey |
| Level of automation | Highly automated. AI continuously refines its questioning | More manual. Respondents act as the “question designers” before answering |
| Core objective | Improve engagement by making the survey feel like a dialogue | Improve engagement by making respondents feel ownership over the process |
| Risk of chaos | AI maintains structure, ensuring useful data | If left unchecked, respondents could create surveys with incoherent or irrelevant questions |
How Is It Different in Practice?
AI Agent-Based Prompt Survey 🗣️
- Imagine a chatbot asking you about your favourite snacks. If you say chocolate, it follows up with “What kind of chocolate do you prefer: milk, dark, or white?”
- The AI learns and adapts. It is like an interviewer who keeps refining the conversation
- The goal is real-time intelligence gathering
Respondent-Built Survey 🛠️
- Instead, imagine this. Before answering anything, you are asked what is important to you in a snack
- You choose your own survey journey by selecting themes like flavour, price, sustainability, etc.
- You shape the questionnaire itself, not just your responses
- The goal is to see how respondents prioritise research topics themselves
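In code terms, the difference is when the questionnaire gets fixed. The respondent-built flow looks like the `build_survey` sketch earlier (locked in before any answers exist), while the agent flow is a loop where each question is derived from the last answer. A toy version of that loop, with invented questions:

```python
# Toy agent-style flow: no questionnaire exists up front; each follow-up is
# chosen from the previous answer. The questions are invented for illustration.
def next_question(last_answer: str) -> str:
    """Pick a follow-up based on what the respondent just said."""
    if "chocolate" in last_answer.lower():
        return "What kind of chocolate do you prefer: milk, dark, or white?"
    return "Interesting. What draws you to that snack?"

answer = "I always go for chocolate"
print(next_question(answer))  # -> the chocolate follow-up
```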
What About the Data Chaos? 🌪️
At the moment, this idea sounds exciting, disruptive, and maybe totally reckless.
What happens when every respondent builds their own survey?
Would the data be impossible to analyse, or would we uncover patterns we never expected?
Sometimes, research is too focused on controlling outcomes rather than embracing unpredictability.
Maybe the mess is where the best ideas come from.
So, I hear you saying – what's the point here?
Maybe there isn’t one. Maybe it is just something to think about, but for years, market research has been about control. Tight scripting, precise methodology, ensuring we ask the right questions. But what if respondents already know the answers we are looking for and we just need to listen differently?
Maybe the future doesn't need to be about AI replacing researchers, but about empowering respondents to be researchers themselves.
Could we build better research by giving up some control?
Or are we just one step away from absolute madness?
Maybe it is time to find out.
Who owns a panel and wants to run an experiment?







