Company decision makers look to quality market research for insights. The accuracy of that research relies on quality fieldwork operations, especially the integrity and quality of the respondents participating in studies. As a company of researchers built for researchers, the team at Gazelle Global knows how to ensure the data you have at the end of fieldwork is a product you can trust and feel confident presenting to your client.
Here are a few thoughts on what it takes to consistently deliver the highest quality market research insights by verifying participant data and eliminating cheaters and repeaters from our system.
Don't Be a Dupe!
We need to be aware that, because of how sample is bought and sold today, respondents may be invited to your survey more than once. With the mix of programmatic sampling, multiple accounts across many panels, and sample aggregation, duplication is a reality that must be dealt with. Beyond that kind of duplication, there are entities targeting market research to defraud us of the incentives we offer, including bots and click farms. While bot and click-farm activity is heavily monitored on the supply side of sample, you should also build means to track and trap them in your survey.
To keep the same respondent from participating again, Gazelle Global creates multiple layers of checks and balances. Beyond those duplication protocols, Gazelle Global also fights fraud with advanced tracking that verifies identifying details from the originating computer.
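To make the idea concrete, here is a minimal sketch of one such duplication check in Python. The metadata fields (ip_address, user_agent, and so on) and the in-memory store are illustrative assumptions, not a description of Gazelle Global's actual tooling:

```python
import hashlib

# In practice this would be a persistent store shared across sample sources.
seen_fingerprints = set()

def fingerprint(meta: dict) -> str:
    """Hash a few stable device/browser traits into one comparable key."""
    traits = (
        meta.get("ip_address", ""),
        meta.get("user_agent", ""),
        meta.get("screen_resolution", ""),
        meta.get("timezone", ""),
    )
    return hashlib.sha256("|".join(traits).encode()).hexdigest()

def is_duplicate(meta: dict) -> bool:
    """Flag a respondent whose device has already entered this survey."""
    key = fingerprint(meta)
    if key in seen_fingerprints:
        return True
    seen_fingerprints.add(key)
    return False
```

Hashing the traits together means a returning device can be matched even when different panels hand over different respondent IDs.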
Don't Pet the Cheetah!
Before we talk about ways to keep cheaters and repeaters out of your survey, let’s first mention how you can keep them out before things even get started. The screener is the most powerful way to maintain quality in market research data. At Gazelle Global, we pride ourselves on a team with many years of experience that takes your outsourced research seriously from the moment we take over any aspect of the project.
Don't Lead the Answer
Even before the survey starts, any screener needs expert review. Too often a screener lets potential cheaters easily see what they need to say in order to qualify for the study. For example, instead of asking what area of medicine the respondent works in, the screener could ask the respondent to mark the boxes of every area of expertise that applies.
The possible answers should span a broad spread of fields so that no single option gives away the qualifying response.
Designed this way, the screener keeps respondents from easily self-selecting into the study. If you have a B2B study targeting a particular industry, this type of question can also pose details that only an expert in that field would understand; respondents who don’t know how to answer are unlikely to have the expertise to inform your study. Seeding lists with companies that don’t exist is another option when talking to industry insiders, helping to flag respondents who don’t truly have the expertise needed.
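Here is a minimal sketch of that list-seeding check in Python. Every company name is a made-up placeholder; a real screener would mix genuine firms with plausible-sounding fakes:

```python
# All names below are hypothetical placeholders, not real screener content.
REAL_FIRMS = {"Acme Pharma", "Globex Biotech", "Initech Labs"}
SEEDED_FAKES = {"Veridian Therapeutics", "Norwood Clinical"}  # do not exist

def passes_seed_check(selected: set) -> bool:
    """Screen out anyone claiming familiarity with a firm that doesn't exist."""
    return not (selected & SEEDED_FAKES)

print(passes_seed_check({"Acme Pharma"}))                      # True  -> continue
print(passes_seed_check({"Acme Pharma", "Norwood Clinical"}))  # False -> screen out
```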
In terms of survey design, there are plenty of tips and tricks you can employ to identify and disqualify cheaters. Here are a few small steps that, implemented together, form a strong foundation for improving the quality of your feedback on any study:
Set a Trap
Trap questions are simple instructions like “Please mark a 2 in this question.” An even better system sets several traps so that more sophisticated software can’t simply learn a pattern. For example, in addition to the basic request to enter a 2, another trap question could ask for the age of the respondent’s home. If 1,000 appears as an answer, that is an immediate “strike” in the system.
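A minimal sketch of how several traps might feed a strike count, assuming hypothetical answer keys (mark_a_2, home_age_years) and an illustrative plausibility range:

```python
def trap_strikes(answers: dict) -> int:
    """Count failed traps; each failure is one 'strike' against the respondent."""
    strikes = 0
    # Trap 1: an explicit instruction ("Please mark a 2 in this question").
    if answers.get("mark_a_2") != 2:
        strikes += 1
    # Trap 2: a plausibility bound -- a 1,000-year-old home fails immediately.
    home_age = answers.get("home_age_years", 0)
    if not 0 <= home_age <= 150:
        strikes += 1
    return strikes
```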
Verify the Data
Screeners often capture a specific answer from each qualified respondent. If a study is only for mothers of twins, or for respondents who have spent more than $10,000 on travel in the past year, a verifying question on that detail can be planted within the survey.
Having multiple traps and verifications creates a better understanding of the validity of the answers. In general, bots will follow patterns, while humans taking a survey may simply check out in terms of attention span. Missing one trap or verification doesn’t totally disqualify a respondent, but a scoring system should be in place for evaluation.
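One shape such a scoring system might take; the weights and removal threshold here are illustrative assumptions, not a prescribed rule:

```python
def quality_score(failed_traps: int, failed_verifications: int) -> int:
    """Lower is better; a missed verification weighs more than a missed trap."""
    return failed_traps + 2 * failed_verifications

def should_remove(failed_traps: int, failed_verifications: int,
                  threshold: int = 3) -> bool:
    """One slip shouldn't disqualify anyone, but a pattern of slips should."""
    return quality_score(failed_traps, failed_verifications) >= threshold
```

Weighting verification misses more heavily reflects that a wrong verifying answer contradicts the screener itself, while a missed trap may just be a lapse in attention.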
Listen Up
Verbatims are also a way to add qualitative answers to what might be a largely quantitative survey. Even legitimate respondents can get survey fatigue and press 10 on every answer, or accidentally type 1,000 when they meant to report their house is 10 years old. Adding at least one qualifying open-ended answer helps double-check the system: these responses give you a good indication of whether your “strikes” line up with the respondents flagged as possible cheaters.
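A minimal first-pass filter in that spirit; the length cutoff and repetition pattern are illustrative heuristics meant to route answers to human review, not to reject anyone automatically:

```python
import re

def verbatim_needs_review(text: str) -> bool:
    """Flag open-ended answers that look low-effort enough to warrant a look."""
    cleaned = text.strip()
    if len(cleaned) < 10:                 # "good", "n/a", one-word filler
        return True
    if re.search(r"(.)\1{4,}", cleaned):  # "aaaaaaa"-style keyboard holding
        return True
    return False
```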
Test the Sample
While verbatim standards can help identify cheaters, not every survey has the budget for the added labor of sorting through those answers. Instead, bad sample can be screened through a series of likely vs. unlikely responses. Consider a survey that quickly asks four simple questions, each with an answer that is nearly universal or nearly impossible.
These take very little effort from the respondent, but the likelihood of all the items being true, or all being false, is incredibly low, providing more data for decisions about bad sample. Simple answer probabilities let a system score respondents and decide whether or not they can proceed into the survey.
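A minimal sketch of that probability test; the four yes/no items are hypothetical stand-ins chosen so that two are true for nearly everyone and two for nearly no one:

```python
# Hypothetical items; real ones would come from the study design.
LIKELY_TRUE = ("owns_a_mobile_phone", "has_watched_tv_this_year")
LIKELY_FALSE = ("has_climbed_everest", "owns_a_private_jet")

def improbability_strikes(answers: dict) -> int:
    """Count answers that contradict strong base rates."""
    strikes = sum(1 for q in LIKELY_TRUE if answers.get(q) is False)
    strikes += sum(1 for q in LIKELY_FALSE if answers.get(q) is True)
    return strikes

# A respondent answering "yes" to everything trips both unlikely items.
print(improbability_strikes({q: True for q in LIKELY_TRUE + LIKELY_FALSE}))  # 2
```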
Retest Consistency
Another easy way to test the validity of a respondent is to match up a simple pair of questions. For example, at the beginning of a survey, an innocuous question such as birth year can be asked. At the end of the survey, another question can ask for the respondent’s age. Consistency across the matched pair provides a security checkpoint that catches a cheater or faker.
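A minimal sketch of the birth-year/age check, with a one-year tolerance as an illustrative allowance for respondents who haven't had a birthday yet this year:

```python
from datetime import date

def ages_consistent(birth_year: int, reported_age: int) -> bool:
    """Check a stated age against a birth year given earlier in the survey."""
    implied_age = date.today().year - birth_year
    # One year of tolerance covers respondents awaiting this year's birthday.
    return abs(implied_age - reported_age) <= 1
```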
There are many ways to build a system and create the kind of strikes or scoring needed to accurately remove invalid respondents, regardless of their intentions. It’s important to keep a proactive approach to cleaning data in balance. Valid respondents are human and can get tripped up on a survey. Rejecting too many people for failing to answer the way we might consider “perfect” introduces its own bias; not answering one question properly doesn’t necessarily invalidate the whole interview. Uncovering the difference between simple mistakes and errant respondents is the job of an expert fieldwork team.
At Gazelle Global, our mission is to provide operational global research services for optimal outcomes. We don’t “run the race” of your research per se, but we’re here to take on any piece of the “relay” you need accomplished. Getting quality data quickly and accurately is our passion. We know your next business decision may rely on the data we help collect. Our goal is to partner with you by creating effective systems that deliver the best data possible. Our multi-faceted, proactive approach makes us a great choice for quality data collection.
As a last bonus thought, we offer clients the option of a slow start to field. When you have a massive global study to complete, it can be powerful to test a small sample first and get an opportunity to iterate on the screener, survey, and filtering of responses. Starting with a small fraction of the project’s intended magnitude can help our teams work together to make nuanced changes before expensive mistakes are made. This keeps quality data collection at the forefront of every conversation, giving you total confidence in your data.
Because Gazelle Global is an outsourced operations team for market research leaders, we bring our expertise to every project and create dynamic solutions for the ever-changing challenges posed by cheaters and repeaters.
We would love to be a part of providing quality insights from quality data.