Quality Experiment: What’s The Best Market Survey Tool?
So you are thinking about running a market survey. You may have already picked a great survey tool, crafted your questions, and can’t wait to hit that “Publish” button to finally start getting insights. Maybe you want to see what people think of that fitness app idea. Or the keto diet you’ve been working on. Either way, I bet it’s something really important, and misleading data that will affect your decisions is the last thing you want to get.
If so, we need to talk about how flawed the survey industry is. The root of the problem is a survey methodology that offers something in return for responses: vouchers, discounts, free yoga classes, cash, you name it. People get so caught up in those perks that they no longer care about the question itself.
We realized this was a big problem and created TruePublic, where instead of a reward system we tap into human curiosity. Now, a year later, with over 150k users, we wanted to measure how our data accuracy compares with our competitors'.
So we came up with a question and ran it on 4 different market survey platforms — Pollfish, MTurk, Prolific and our own TruePublic. The question was “How many times, if at all, do you visit Gatto Bagnato Coffee?”. But here is the trick — Gatto Bagnato Coffee doesn’t exist. So by the number of people claiming to have visited this place, we could get an idea of the data accuracy on each of these platforms.
Now, if you don’t have time to read the entire analysis, here are the results:
But if you do have time, I am going to tell you a lot about running questions on each of these platforms, including all the nitty-gritty details you definitely need to know before making a choice.
So let’s get started.
If there's one thing Pollfish is great at, it's definitely speed. From the minute we submitted the question for review, it took less than 2 hours to get 400 responses. But it's also pricey. Pollfish charges $1 per complete and raises the price based on the selected targeting. With the age, gender, and location filters applied, our price jumped to $2 per response.
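For those who like to see the math, here's the arithmetic on what our 400-response Pollfish run would cost at each rate. The figures come from the pricing above; the snippet is purely illustrative:

```python
# Cost of a 400-response Pollfish run at the base and targeted rates.
base_price = 1.00      # Pollfish's base price per complete
targeted_price = 2.00  # with age, gender, and location filters applied
responses = 400

print(f"Base cost:     ${base_price * responses:,.0f}")      # $400
print(f"Targeted cost: ${targeted_price * responses:,.0f}")  # $800
```

In other words, targeting filters alone doubled our bill.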
But the results didn’t look good at all. 30.5% of the Pollfish respondents said they go to Gatto Bagnato 1-3 times a week, with 4.5% claiming to visit it almost every day. The error rate for ages 16–34 was even higher: 39.9%. So how can you regularly visit a place that doesn’t even exist? First, let’s see how Pollfish runs its market surveys.
Instead of hitting people in their inboxes, Pollfish partners with over 120,000 apps and recruits respondents through in-app invitations while they’re naturally engaged on their devices. While this explains its speed of delivery, it also makes for an intrusive experience.
Imagine you are in the middle of a game and suddenly a survey pops up on your screen which you have to answer to unlock a premium app feature. Would you take a minute to carefully read the question? Or just randomly pick an option to quickly get back to the game? It’s fair to assume that many respondents did exactly that, which is how a whopping 30.5% of them ended up giving misleading, untrue answers.
Next, let’s look at MTurk. Amazon Mechanical Turk is by far the cheapest option on the market. Here, the targeting parameters are divided into two categories: systematic and premium. Systematic targeting mainly covers location. If you want to select a premium targeting filter (or qualification, as MTurk calls them), you’ll see the prices jump. What’s worse, you can choose just two premium qualifications at a time, which definitely restricts your ability to reach your target audience.
We also identified some limitations of the age targeting on MTurk. First, the age increments are hardcoded. So if you want to select a custom range, say from 28 to 33, you can’t. And what’s more, if you are targeting ages 55 and above, you’ll need to pay an additional $0.50 per complete.
With just the location targeting, our price per respondent was $0.02. Add Amazon’s flat fee of $0.01 per respondent, and our cumulative price per complete came to $0.03.
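To see just how cheap that is in practice, here's the same arithmetic for a 400-response run (matching our Pollfish sample size for comparison), using the rates quoted above:

```python
# Rough cost estimate for an MTurk survey run with location targeting only.
base_rate = 0.02   # price per respondent with systematic (location) targeting
amazon_fee = 0.01  # Amazon's flat fee per respondent
responses = 400    # sample size chosen to match our Pollfish run

cost_per_complete = base_rate + amazon_fee
total_cost = cost_per_complete * responses

print(f"Cost per complete: ${cost_per_complete:.2f}")          # $0.03
print(f"Total for {responses} responses: ${total_cost:.2f}")   # $12.00
```

Twelve dollars for 400 responses. Keep that number in mind when we get to the results.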
Before we take a look at the results, let me point out a couple of facts about MTurk.
MTurk survey panelists are professional survey takers, i.e., people who take surveys for money. They’ve already mastered what to answer, and how, to earn the most while putting in minimum effort and attention. But that’s not all. Some researchers have also reported a large number of bot-like responses in their MTurk data, detected through respondents’ repeated GPS coordinates.
Now back to our results. As you can see in the chart, a whopping 36% of the respondents said they have been to Gatto Bagnato Coffee at least once. This is an extremely high error rate, and it has another explanation beyond careless answer selection.
You see, the reason panelists may give an affirmative answer to this type of question is that such questions are often used to screen the survey audience. If you haven’t visited a particular restaurant and the rest of the questions are about improving its menu, a “no” to that first question disqualifies you from the rest of the survey (and from earning money) straightaway. That’s why many reward-bound market surveys, including MTurk’s, are often doomed to produce false data right from the start.
Next in line is Prolific. Prolific is an online research platform used by renowned universities and research organizations around the world, including The World Bank and Yale University. What makes it stand out is its fine-grained targeting parameters. From personal finance to physical health, you can get really niche and run your survey on just about any audience you can think of. So it’s no surprise Prolific is loved by so many academic researchers.
Prolific pays its participants according to the estimated study completion time, with a minimum pay of £5.00/hour ($6.60), or £0.083 ($0.11) per minute. This means you’ll need to include your best estimate of the completion time, and if the median time exceeds it, Prolific will adjust the price accordingly. As we’ll see from the results, Prolific’s effort to strike a balance between a decent hourly rate and completion time lets participants focus on each individual question and, in most cases, offer unbiased insights.
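That pricing rule is easy to sketch in a few lines. This is just our reading of the mechanics described above, not Prolific's actual billing code, and the inputs are illustrative:

```python
# Sketch of Prolific's minimum-pay rule: reward scales with estimated
# completion time at a floor of about £0.083 per minute (£5.00/hour).
MIN_RATE_PER_MIN = 5.00 / 60  # pounds per minute

def min_reward(estimated_minutes: float) -> float:
    """Lowest reward accepted for a study of this estimated length, in £."""
    return round(estimated_minutes * MIN_RATE_PER_MIN, 2)

def adjusted_reward(estimated_minutes: float, median_minutes: float) -> float:
    # If the median completion time exceeds the estimate, the effective
    # hourly rate drops below the floor, so the reward is adjusted upward.
    return min_reward(max(estimated_minutes, median_minutes))

print(min_reward(2))  # floor price for a study estimated at two minutes
```

The takeaway: your real cost depends on how long participants actually take, not just on your estimate.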
For our experiment, we ended up paying $0.17 per response. And the results were a huge improvement! The error rate on Prolific was 5.94%, which makes it 5-6x more accurate than the previous two platforms.
While Prolific’s data accuracy is definitely its biggest asset, it comes with one huge drawback. Unlike Pollfish and TruePublic where you are offered tools to design your market survey and track your responses within a single dashboard, Prolific (just like MTurk) gives you access to just the participant panel. So the survey design and data analysis are left to you.
The way it works is that each Prolific participant is given an ID that you need to track to match their answer to each distinct question in the survey. This requires a lot of setup effort. Plus, you’ll most likely need other tools like Qualtrics or SurveyMonkey to build your survey, and then some data analysis software to process high volumes of responses. And let me tell you — you need to pay for most of these! Or else, brace yourself for some grueling manual work!
Now, let’s talk about our very own TruePublic because our survey methodology is completely different from what you saw in previous examples. We don’t have a reward system and don’t pay our users to answer questions. Instead, we make it fun. We created a cool app with many different spaces from politics to dating where people answer questions and see how others responded based on their age, gender, ethnicity, and political leaning. So there is practically no reason to give dishonest answers.
Our surveys are also short. Because… let’s be honest. People’s attention spans have gotten so short these days that expecting them to answer 10-15 questions in one sitting is just not going to happen. By the third question, you bet they are checking their Instagram notifications, and by the fifth, they just want to get it over with. Fast.
Because of this cool solution, we also attracted many Gen Z and millennials. This often rebellious, crazy segment is setting trends all over the place but you’ll hardly know what’s on their minds with a boring market survey or a cold call.
But we did it with our fun platform! Now they make up over 80% of our loyal users.
Now let’s talk about pricing. We obviously didn’t pay to run a question on our own platform, but if we had, it would have been just $0.25 per response. And our pricing doesn’t vary based on targeting. So whether you want to reach 50-year-old right-leaning people or 22-year-old college graduates, you’ll still pay the same $0.25 per complete. Because our audience acts on curiosity rather than rewards, we also give every question an expert review (rephrasing wording, merging overlapping questions) so you can get the responses you need faster.
Now let’s take a look at the results. As you can see, the error rate on TruePublic was extremely low: just 0.6%. That makes it roughly 50-60x more accurate than Pollfish and MTurk, and about 10x more accurate than the second-best survey platform, Prolific. And when you think about it, it all ties back to our methodology, which removes almost any incentive to give biased, untrue answers.
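If you want to check the relative-accuracy numbers yourself, here's the arithmetic on the error rates from our experiment:

```python
# Relative accuracy: each platform's error rate on the Gatto Bagnato
# question, divided by TruePublic's 0.6% baseline.
error_rates = {
    "Pollfish": 30.5,
    "MTurk": 36.0,
    "Prolific": 5.94,
    "TruePublic": 0.6,
}

baseline = error_rates["TruePublic"]
for platform, rate in error_rates.items():
    if platform == "TruePublic":
        continue
    print(f"{platform}: {rate}% error, {rate / baseline:.0f}x TruePublic's rate")
```

Running this shows Pollfish and MTurk at roughly 51x and 60x TruePublic's error rate, and Prolific at about 10x.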
In TruePublic, we also created a section called “Reasons” where our users explain the reasoning behind their answer choice. And if you read through this comment thread, you’ll see that most people were clearly confused because they had never heard of Gatto Bagnato Coffee. Because it doesn’t even exist 🙂
So if we put the results from the 4 survey platforms together, here is what we get. For this particular question, Pollfish had the highest average error rate, followed by MTurk, then Prolific, and finally TruePublic with just a 0.6% error rate. Of course, we couldn’t base our conclusions on this one experiment alone. So we ran 4 of these, testing other market survey platforms too, and the pattern stayed largely the same.
Over to you
There is definitely a lot that goes into choosing a market survey platform. Pricing and targeting are important but if the data you are getting is inaccurate, why are you even running a survey in the first place? So think about this before making a choice.