The Lego Effect, But for Surveys: What If We Made Respondents Build Their Own Questionnaire?

What If the Problem with Surveys Is That We Make Them?

Survey response rates are dropping, data quality is suffering, and respondents are tuning out. Researchers tweak wording, test incentives, and add interactive elements, but what if the real issue is control?

The Lego Effect tells us that people value things more when they have had a hand in creating them. Lego isn’t always about just following the instructions.

My mum plays with my nephew’s Lego (and she still has mine from the 80s!), and she loves choosing her own pieces from across different sets to construct something new, creative and unique.

But in market research, respondents are often just passive participants: recipients of our carefully crafted questions with no say in the process. What if we gave them some control, the ability to script and build the survey themselves?

Would they feel more engaged? Would they take the research more seriously? Or would they just create the world’s most chaotic, nonsensical questionnaire? And if they did, would that even be a bad thing?


Right, So How Would This Even Work? πŸ€”


Well, this is the question. I am hoping others can chip in to see where this could go. But hey ho, here are some starter thoughts:

1. β€œTell Us What Matters” Mode 🎯

Instead of a static list of questions, respondents define the focus before the survey even starts.

  • They pick what they actually care about

  • The survey adapts dynamically, building itself in real time

  • No wasted questions, only relevant insights

Example:

β†’ Instead of forcing every respondent through 15 questions about soft drinks, we let them opt in to topics they actually engage with

β†’ Do we find that 80% of respondents actually do not care about what we thought was the key issue?

β†’ This approach already works in education. Think Duolingo’s progress reports (there are many better example, but I use it a lot) where users see personalised feedback based on what they actually interact with.

What if surveys worked the same way, beyond the usual profiling parameters we already collect?
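To make this concrete, here is a minimal sketch of what a “Tell Us What Matters” flow could look like. Everything in it is hypothetical: the topic catalogue, the question bank, and the function names are illustrative, not any real platform’s API.

```python
# A minimal sketch of "Tell Us What Matters" mode. The topic catalogue and
# questions below are hypothetical; a real system would pull them from a
# question bank and could keep adapting as answers come in.

QUESTION_BANK = {
    "soft drinks": [
        "How often do you buy soft drinks?",
        "Which soft drink brands do you recognise?",
    ],
    "snacks": [
        "What is your go-to snack?",
        "When do you usually snack?",
    ],
    "sustainability": [
        "Does packaging recyclability affect your choice?",
    ],
}

def build_survey(chosen_topics: list[str]) -> list[str]:
    """Assemble a questionnaire only from topics the respondent opted into."""
    return [q for topic in chosen_topics for q in QUESTION_BANK.get(topic, [])]

# This respondent skips soft drinks entirely; the skip is itself a data point.
print(build_survey(["snacks", "sustainability"]))
```

The interesting output is not just the answers: which topics get opted into, and which get skipped, is data in its own right.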

2. The β€œReverse Survey” πŸŒ€

What if instead of us asking them questions, they asked us what they wanted to know?

  • We let respondents write the questions, not just answer them.

  • AI clusters their submissions, finds common patterns, and builds the real study from their input (I guess this is similar to how ‘qual before quant’ works, but it feels slightly different).

  • Instead of guiding responses, we get unfiltered curiosity. What do consumers actually want to explore?

Example:

β†’ Instead of asking β€œHow important is sustainability in your purchase decisions?” we ask:

❓ β€œWhat would you ask a company about its sustainability practices?”

β†’ Do we get more honest, meaningful questions than researchers would have written?

3. The β€œMinimalist Survey” Challenge βœ‚οΈ

If respondents had to design the shortest possible survey, what would it look like?

  • How many questions would they actually keep?

  • What is the absolute minimum we need to get useful data?

  • Could we run a study with only one question but make it the right one?

Example:

β†’ Instead of β€œRate this product on 12 attributes,” we ask:

πŸ’‘ β€œWhat is the one thing that matters most about this product?”

β†’ Does this approach unlock deeper insights with fewer words?


Isn’t This Just an AI Agent-Based Prompt Survey? πŸ€–


Good question. Maybe? While they might seem similar at first, they are actually fundamentally different in how control is distributed and how insights are shaped.

Key Differences Between AI Agent-Based Prompt Surveys & Respondent-Built Surveys (in my eyes)…

Caveat: any expert on agentic AI for MRX is welcome to correct me on the below; I would be happy to learn more and be educated. I am in novice mode.

  • Who controls the survey? AI agent: the AI dynamically generates and adapts questions based on prior responses. Respondent-built: the respondent actively chooses and shapes the survey content before answering.

  • How questions are formed: AI agent: predicts the next logical question based on real-time data. Respondent-built: respondents manually decide what is important to ask before seeing any questions.

  • Flow of questions: AI agent: adaptive and conversational, fine-tuning based on responses. Respondent-built: pre-set by respondents, meaning the questions are locked in before the survey starts.

  • Data flexibility: AI agent: ensures comparability of responses across users. Respondent-built: responses may be fragmented, since each participant could generate a completely different survey.

  • Level of automation: AI agent: highly automated, continuously refining its questioning. Respondent-built: more manual, with respondents acting as the ‘question designers’ before answering.

  • Core objective: AI agent: improve engagement by making the survey feel like a dialogue. Respondent-built: improve engagement by making respondents feel ownership over the process.

  • Risk of chaos: AI agent: maintains structure, ensuring useful data. Respondent-built: if left unchecked, respondents could create surveys with incoherent or irrelevant questions.

How is It Different in Practice? πŸ’­


AI Agent-Based Prompt Survey πŸ—£οΈ
  • Imagine a chatbot asking you about your favourite snacks. If you say chocolate, it follows up with:

    β€œWhat kind of chocolate do you prefer: milk, dark, or white?”

  • The AI learns and adapts. It is like an interviewer who keeps refining the conversation

  • The goal is real-time intelligence gathering

Respondent-Built Survey 🛠️
  • Instead, imagine this. Before answering anything, you are asked what is important to you in a snack

  • You choose your own survey journey by selecting themes like flavour, price, sustainability, etc.

  • You shape the questionnaire itself, not just your responses

  • The goal is to see how respondents prioritise research topics themselves
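For contrast with the respondent-built sketch earlier, here is a toy stand-in for the agent-style flow, where question N+1 is derived from answer N. The rule table is a placeholder for what a real agent would generate with an LLM; everything here is hypothetical.

```python
# Toy stand-in for the agent-style flow: the next question depends on the
# previous answer. A real agent would generate the follow-up with an LLM
# rather than look it up in a hand-written rule table like this one.

FOLLOW_UPS = {
    "chocolate": "What kind of chocolate do you prefer: milk, dark, or white?",
    "crisps": "Which crisp flavour do you reach for first?",
}

def next_question(answer: str) -> str:
    """Adapt the conversation in real time based on the previous answer."""
    return FOLLOW_UPS.get(answer.lower(), "What do you like about it?")

print(next_question("Chocolate"))  # -> the milk/dark/white follow-up
```

The structural difference is exactly where the branching lives: here it happens after each answer, whereas in the respondent-built flow the branching all happens before the first question is ever shown.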

What About the Data Chaos? πŸŒͺ️


At the moment, this idea sounds exciting, disruptive, and maybe totally reckless.

What happens when every respondent builds their own survey?

Would the data be impossible to analyse, or would we uncover patterns we never expected?

Sometimes, research is too focused on controlling outcomes rather than embracing unpredictability.

Maybe the mess is where the best ideas come from.
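One hedge against the chaos: even if every respondent builds a completely different questionnaire, the build choices themselves stay directly comparable. A minimal sketch, assuming we simply log which topics each respondent opted into:

```python
# Even fragmented, respondent-built surveys leave one comparable trace:
# what each person chose to ask about. The logged builds are invented.
from collections import Counter

builds = [
    ["flavour", "price"],
    ["sustainability"],
    ["flavour", "sustainability", "price"],
    ["flavour"],
]

topic_counts = Counter(topic for build in builds for topic in build)
print(topic_counts.most_common())  # which topics respondents actually chose
```

So even if the answers resist neat comparison, the pattern of what people chose to ask might be the insight.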


So, I hear you saying – what’s the point here? πŸ™ƒ


Maybe there isn’t one. Maybe it is just something to think about. But for years, market research has been about control: tight scripting, precise methodology, ensuring we ask the right questions. What if respondents already know the answers we are looking for, and we just need to listen differently?

Maybe the future doesn’t need to be about AI replacing researchers, but about empowering respondents to be researchers themselves.

Could we build better research by giving up some control?

Or are we just one step away from absolute madness?

Maybe it is time to find out.

Who owns a panel and wants to run an experiment? πŸ˜…



Continue the conversation on the PunkMRX WhatsApp Community.

Get involved πŸ‘‰ HERE πŸ‘ˆ or click on the icon below. πŸ‘‡

