As part of Feedback Loop’s commitment to giving customers more flexibility and control over the platform, users can set a target audience size for each individual test within a workspace. Requests on our platform typically range from 50 to 700 participants per test, but how do you determine the ideal number of people to collect feedback from to inform your decisions?
To start, it is important to provide an overview of what agile research is and what it is not.
When deciding on a research method for your study, a few key factors come into play: budget, timeline, and the goal of your study or the decision you are trying to make. Agile research is an accelerated, cost-effective method that provides directional data early and often to validate or invalidate decisions throughout the product development life cycle. The agile approach allows businesses to be more customer-centric, innovate rapidly, and iterate frequently.
Agile research is a powerful methodology that can help inform business initiatives within hours, so it’s best suited for product design, development, and positioning decisions. It can be used on its own, but it works best in conjunction with other research methods: larger decisions that carry greater business risk require more data to make with confidence.
The following questions can help you determine the optimal test size for the decision you are looking to make with agile research, including on Feedback Loop’s platform:
Where are you in the product development lifecycle?
Decisions become more impactful as you get closer to launch, so customers typically require more participants for their tests as they move through the product development process. However, it is critical to start testing early so you set yourself up for success from the beginning and avoid having to pivot later in the cycle, which is challenging and costly.
What decision(s) are you looking to make?
Option 1: In early-stage discovery or design research, a smaller target size of under 200 respondents is usually sufficient.
Option 2: In concept testing or UI evaluation research, the target audience size largely depends on the number of concepts being evaluated and on whether you are evaluating them via monadic or sequential monadic testing; the rough arithmetic below illustrates the difference.
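As a quick illustration, the sketch below assumes a per-concept base of 150 respondents; that figure is a planning assumption for the example, not a platform requirement.

```python
# Rough illustration: total respondents needed for a concept test.
# The per-concept base of 150 is an assumed planning number, not a platform rule.
concepts = 3
base_per_concept = 150

# Monadic: each respondent evaluates exactly one concept,
# so every concept needs its own group of respondents.
monadic_total = concepts * base_per_concept           # 3 * 150 = 450

# Sequential monadic: each respondent evaluates every concept in turn,
# so a single group of respondents covers all concepts.
sequential_monadic_total = base_per_concept           # 150

print(f"Monadic design: {monadic_total} respondents")
print(f"Sequential monadic design: {sequential_monadic_total} respondents")
```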
How deep does your analysis need to be?
The depth of analysis you need can influence the number of participants required.
Simple tests: For simple tests where you’re reviewing the results of the population as a whole, smaller target sizes are sufficient.
Subgroups of data: For tests that require you to dig into individual subgroups of data, such as age group or gender, increasing the sample size can help accommodate more in-depth analyses (a rough illustration follows the note below).
It is essential to note that agile research relies on random sampling for participant recruitment, and quota balancing is not supported. Feedback Loop results are usually within a 10% variance of the national census, so we recommend basing your analysis on the entire population and perhaps a few large subgroups rather than getting too granular.
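To see why subgroup cuts call for a larger sample, here is a back-of-the-envelope split; the total sample size and age-group shares are illustrative assumptions, not census figures.

```python
# Illustrative only: how a total sample spreads across subgroups under
# random sampling. The sample size and subgroup shares are assumed values.
total_sample = 400

# Assumed age-group shares, roughly mirroring a general-population mix.
age_shares = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

for group, share in age_shares.items():
    expected_n = round(total_sample * share)
    print(f"{group}: ~{expected_n} respondents")

# Each age group lands at roughly 120-140 respondents; cutting further
# (e.g., age x gender) quickly produces cells too small to read with
# confidence, which is why deeper analysis calls for a larger test size.
```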
How feasible is your target audience?
To determine how many people in a specific audience you can realistically reach, you need to:
Understand (roughly) the total number of people who meet your criteria (Note: Google is your friend!)
Keep in mind that only 5-10% of the general population actively participates in online research (a quick feasibility calculation is sketched after the table below).
Total Population Size | Expected Test Size* | Example Audiences (US)
5 million+ | 500-700 | People with a credit card (230 million)
3-5 million | 250-500 | People who bought an American-made car in the past year (5 million)
2-3 million | 100-250 | Pet owners of medium-sized dogs (2.1 million)
Under 2 million | 0-100 | Financial advisors (300 thousand)
*Assuming a full 48 hours in the field.
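A quick way to sanity-check feasibility is to apply that 5-10% participation rate to your estimated total population. The sketch below uses the medium-sized-dog-owner example from the table; the math is a rough planning aid, not a guarantee of completes.

```python
# Back-of-the-envelope feasibility check. The population figure is the
# example from the table above; the 5-10% participation range comes from
# the guidance earlier in this section.
total_population = 2_100_000          # e.g., pet owners of medium-sized dogs (US)
participation_low, participation_high = 0.05, 0.10

reachable_low = int(total_population * participation_low)
reachable_high = int(total_population * participation_high)

print(f"Roughly {reachable_low:,}-{reachable_high:,} people could plausibly "
      f"be reached through online research.")

# Only a small fraction of that reachable pool will see and qualify for any
# single test during a 48-hour fielding window, which is why a 2-3 million
# population maps to an expected test size of about 100-250 completes.
```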
Do you have any data or past tests that you will use to inform this research?
Once you’ve defined what you’re looking to learn and how you will use the test data, you can start to incorporate data you’ve previously collected.
Existing data can assist in your study design process and help you avoid making assumptions when creating your survey instrument. It can also affect the number of consumers you need to collect feedback from. For example, if you already have some data on the topic you are looking to test with Feedback Loop, you may not need as many participants as you would when exploring an entirely new topic.
Will you be running the survey again?
You’ll want to make sure that you can reach the same size audience consistently. Think about culture, politics, and economics: will the feasibility of this audience change over time?
Will the test be able to run for the full fielding window?
Feedback Loop’s tests publish when the desired number of completes is met or when 48 hours have elapsed, whichever comes first. If you require data before the 48-hour period is up, you can finish the test early; however, you may end up with fewer responses than expected.
Are you running any other tests with this audience right now?
Feedback Loop employs a lockout for all customers to ensure distinct respondents in your study results over time. If a participant completes a survey of yours, they are not eligible to complete another for 7 days.
Therefore, consider “competing” priorities: If you are running multiple tests for the same audience simultaneously, think about the overall feasible size of that audience and divide that by the number of tests you have live. That is how large your test size will realistically be.
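For example, with illustrative numbers (an assumed 300 feasible completes per fielding window and three tests live against the same audience at once):

```python
# Illustrative: splitting one feasible audience across concurrent tests.
# The feasible-completes figure is an assumed planning number.
feasible_completes = 300   # what this audience might yield in one fielding window
tests_live = 3             # tests running against the same audience at once

realistic_per_test = feasible_completes // tests_live
print(f"Expect roughly {realistic_per_test} completes per test.")  # ~100
```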
For information on how to set a new size for individual tests on our platform, see Audiences - Setting Sample Size.