Quality sample is critical to the results of any online market research study. And while getting quality sample in a one-off survey can be difficult, long-term or repeat studies play by a completely different set of rules. Any changes over time or between survey periods may look like real changes, but how can anyone be sure when the sample itself may have changed as well?
Regardless of whether a study is a one-time project or a repeated study, there are a number of data quality checks that should always be in place. The first step in ensuring quality data is to develop thorough, well-defined screening questions, which helps ensure that only the people who are supposed to take the survey actually get in. But even with the best screening criteria, there is still more work to be done. When doing online market research, it’s important to build quality control checks into the methodology: IP address validation, checks for speeders (people moving through the survey too quickly), red herring questions, verification questions, and open-ended questions.
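As a rough illustration only, here is a minimal Python (pandas) sketch of how a few of these checks might be automated after fielding. The column names, speeder cutoff, and red herring answer are assumptions made up for the example, not industry standards.

```python
import pandas as pd

# Hypothetical respondent-level data; column names are illustrative only.
responses = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "ip_address": ["203.0.113.5", "203.0.113.5", "198.51.100.7", "192.0.2.44"],
    "duration_seconds": [480, 95, 620, 540],
    "red_herring_answer": ["strongly agree", "strongly agree", "none of these", "none of these"],
})

median_duration = responses["duration_seconds"].median()
SPEEDER_CUTOFF = median_duration / 3        # assumption: flag anyone under a third of the median time
RED_HERRING_CORRECT = "none of these"       # the answer the trap question expects

flags = pd.DataFrame({
    "respondent_id": responses["respondent_id"],
    "duplicate_ip": responses.duplicated("ip_address", keep=False),
    "speeder": responses["duration_seconds"] < SPEEDER_CUTOFF,
    "failed_red_herring": responses["red_herring_answer"] != RED_HERRING_CORRECT,
})

# Any respondent with one or more flags gets reviewed (or removed) before analysis.
flags["review"] = flags[["duplicate_ip", "speeder", "failed_red_herring"]].any(axis=1)
print(flags)
```

In practice these rules are usually layered with the other checks mentioned above, such as verification questions and open-end review, rather than used on their own.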
Long-term studies like brand trackers, or repeated studies like advertising measurement and new product development screeners, require even more extensive quality checks from researchers and their vendors. If something changes in the screening criteria, in the panel(s) used for the research, or even in the percentage of respondents that come from different sources, it can be difficult or impossible to determine whether wave-over-wave differences reflect those changes or genuine shifts in respondent behavior and attitudes.
By vetting a vendor for quality before the first wave, repeated use can be trusted with just a few extra considerations. When evaluating a company for quantitative research, it’s important to also consider how they source their respondents and how large their pool of active respondents is. Category or same-survey lockouts for a period of time, usually somewhere between one and six months, help ensure that respondents are truly varied and results are not driven by the same group of respondents each month. To support lockouts, though, a panel must be quite large, especially when the target sample is very specific. It may also be worth looking into specialized panels if a respondent group is particularly difficult to find or unlikely to respond in a broader panel.
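To make the lockout idea concrete, here is a small sketch of the kind of eligibility check a panel might apply. The respondent IDs, dates, and 90-day window are hypothetical; actual lockout rules and windows vary by vendor.

```python
from datetime import date, timedelta

# Hypothetical participation history: respondent ID -> date of last survey in this category.
last_category_completion = {
    "r-101": date(2024, 1, 15),
    "r-102": date(2023, 8, 2),
    "r-103": date(2024, 2, 20),
}

LOCKOUT = timedelta(days=90)   # assumed three-month category lockout
today = date(2024, 3, 1)

def is_eligible(respondent_id: str) -> bool:
    """True if the respondent has not taken a survey in this category within the lockout window."""
    last = last_category_completion.get(respondent_id)
    return last is None or today - last >= LOCKOUT

for rid in ["r-101", "r-102", "r-103", "r-104"]:
    print(rid, "eligible" if is_eligible(rid) else "locked out")
```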
When using panel aggregators, it’s especially important to understand the vendor blend: the percentage of total respondents that comes from each specific vendor. Not only should the vendors used over time be the same, but the percentage of the sample coming from each should hold steady. For example, if the first wave is 70% from Vendor A and 30% from Vendor B, then in subsequent waves Vendor A should still provide 70% of the respondents and Vendor B should still provide 30%. This is a great question to ask aggregators or panel providers that partner with other panels ahead of time, and it can also be written into proposals or contracts for extra security.
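As a simple sketch of how that 70/30 example could be monitored wave over wave, the snippet below compares the actual share of completes from each vendor against the target blend. The counts, target split, and 2-point tolerance are assumptions for illustration, not a recommended threshold.

```python
from collections import Counter

# Hypothetical completes per wave, tagged with the vendor each respondent came from.
wave_completes = {
    "wave_1": ["A"] * 700 + ["B"] * 300,
    "wave_2": ["A"] * 655 + ["B"] * 345,
}

TARGET_BLEND = {"A": 0.70, "B": 0.30}   # the agreed 70/30 split from the example above
TOLERANCE = 0.02                        # assumed acceptable drift of +/- 2 points

for wave, vendors in wave_completes.items():
    counts = Counter(vendors)
    total = len(vendors)
    for vendor, target in TARGET_BLEND.items():
        actual = counts[vendor] / total
        status = "OK" if abs(actual - target) <= TOLERANCE else "DRIFTED"
        print(f"{wave} vendor {vendor}: {actual:.1%} (target {target:.0%}) -> {status}")
```

A check like this is easiest to enforce when the target blend is already written into the proposal or contract, as suggested above.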
As we said, when sample is lacking in quality, results can be skewed or flat out wrong. When screening criteria change, different types of people answer the survey. When the vendor blend changes, differences between the panels can also cause different types of people to answer. If a researcher can be confident that all of these steps are being taken ahead of time to ensure quality data, the results can tell an accurate and unique story. Less time, if any at all, needs to be spent cleaning data after the fact, or even worse, refielding, which means quicker results and more time for analysis and finding insights.