There's a number that most research directors have memorized: the cost per complete. It shows up in every RFP, every procurement conversation, every post-project invoice. What's less often tracked—and substantially harder to calculate—is the cost per bad complete that makes it into your final dataset.
This piece isn't an argument for spending more on panels. It's an attempt to make the cost of cheap panels visible in the places it actually shows up: corrupted segmentation, inflated effect sizes, failed replications, and the quiet credibility hit that comes from presenting findings that don't hold up six months later.
The obvious problem (and why it's not the main one)
The obvious risk is mean skew. Fraudulent respondents—bots, panel farms, or just disengaged participants clicking through for incentives—tend to answer randomly or towards scale extremes. Add enough of them and your mean shifts.
But this is actually the most detectable form of the problem. Mean outliers show up in descriptive statistics. A diligent analyst catches them. The more insidious damage happens at the multivariate level, where 8–15% contamination—within the range a typical unverified panel might produce—doesn't move means enough to trigger concern but absolutely corrupts the variance-covariance structure that segmentation, factor analysis, and regression models depend on.
// A concrete example
We ran a simulation on a real B2B dataset (n=800, tech sector, mid-market decision-makers). Artificially injecting 10% random-response contamination shifted top-2-box purchase intent by 2.4 points: plausible noise that most analysts wouldn't flag. But it collapsed a clean three-segment solution into a two-segment solution with poor discriminant validity. The strategic recommendation that would have come from the three-segment model, a distinct go-to-market motion for each segment, disappears into a generic midpoint recommendation that serves no one well.
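The mechanics are easy to reproduce on synthetic data. The sketch below is illustrative only (it is not the study's code or data): it builds segment-structured Likert responses, swaps 10% of rows for uniform-random responders, and shows that item means barely move while the cluster structure degrades.

```python
# Minimal sketch: inject random-response contamination into synthetic,
# segment-structured survey data and compare mean shift vs. structure damage.
# All numbers and variable names are illustrative, not the original study.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)
n_per_segment, n_items = [300, 300, 200], 12

# Three segments with distinct mean profiles on a 1-7 scale
centers = rng.uniform(2.5, 5.5, size=(3, n_items))
clean = np.vstack([
    np.clip(np.round(rng.normal(c, 0.8, size=(n, n_items))), 1, 7)
    for c, n in zip(centers, n_per_segment)
])

def contaminate(data, share, rng):
    """Replace a share of rows with uniform random 1-7 responses."""
    out = data.copy()
    idx = rng.choice(len(out), size=int(share * len(out)), replace=False)
    out[idx] = rng.integers(1, 8, size=(len(idx), data.shape[1]))
    return out

dirty = contaminate(clean, 0.10, rng)

# Means barely move...
print("mean shift per item:", np.abs(dirty.mean(0) - clean.mean(0)).round(2))

# ...but the cluster structure gets mushier (lower silhouette)
for name, data in [("clean", clean), ("10% contaminated", dirty)]:
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)
    print(name, "silhouette:", round(silhouette_score(data, labels), 3))
```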
Where the cost compounds
The practical damage shows up in four specific places that are worth naming precisely.
1. Conjoint and MaxDiff attribute utilities
Choice-based conjoint is particularly vulnerable. The technique relies on the internal consistency of trade-off patterns across respondents. When a portion of your sample is responding randomly, or systematically gaming the task by selecting the first option in every choice set, that noise inflates the variance of the part-worth estimates. Your confidence intervals widen. Features that actually matter to buyers appear less differentiating than they really are. Pricing models built on contaminated conjoint data have a well-documented history of overestimating willingness to pay in premium segments.
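One cheap diagnostic for the position-gaming pattern is to flag respondents who pick the same on-screen position in nearly every task. The sketch below assumes a long-format choice export with respondent_id and chosen_position columns and a 90% threshold; both the schema and the cutoff are illustrative assumptions, not a standard.

```python
# Minimal sketch: flag respondents who choose the same on-screen position in
# nearly every choice task. Column names are assumptions about a long-format
# CBC export, not a specific tool's schema.
import pandas as pd

def flag_position_gamers(choices: pd.DataFrame, threshold: float = 0.9) -> pd.Series:
    """Return respondent IDs whose most-chosen position accounts for at least
    `threshold` of their tasks (e.g. always clicking the first concept)."""
    share_of_top_position = (
        choices.groupby("respondent_id")["chosen_position"]
        .agg(lambda s: s.value_counts(normalize=True).iloc[0])
    )
    return share_of_top_position[share_of_top_position >= threshold].index.to_series()

# Usage: review, drop, or down-weight flagged respondents before estimation.
# choices = pd.read_csv("cbc_long.csv")   # respondent_id, task, chosen_position
# suspects = flag_position_gamers(choices)
```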
A 2023 analysis published in the Journal of Marketing Research found that conjoint studies using panels with documented quality controls produced price sensitivity estimates that were 22–31% more precise (narrower confidence intervals, same n) than studies using uncontrolled panels. The difference wasn't in the questionnaire design or the analysis. It was in who answered.
2. Segmentation validity
Latent class analysis and k-means segmentation are sensitive to small populations of unusual responders. A cluster of random-response observations doesn't merge cleanly with legitimate segments—it tends to either form its own garbage segment (which gets dropped in cleaning, taking real data with it) or fragment adjacent segments in ways that are difficult to diagnose after the fact.
The downstream cost here is at the strategy layer: segmentation-based targeting that doesn't reflect real market structure leads to messaging that doesn't resonate and sales prioritization that doesn't convert. Research teams rarely get the attribution signal that would let them trace poor campaign performance back to flawed segmentation inputs.
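If you suspect a garbage segment after the fact, one rough screen is to look for a cluster whose centroid hugs the scale midpoint while its members scatter widely, which is the statistical signature of averaged-out random responding. The sketch below is a heuristic under those assumptions, not a validated diagnostic; the thresholds are left to the analyst.

```python
# Minimal sketch: screen a k-means solution for a candidate "garbage" cluster,
# i.e. one whose centroid sits near the scale midpoint and whose members
# scatter widely. Variable names and the midpoint value are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def screen_clusters(data: np.ndarray, k: int, scale_midpoint: float = 4.0):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(data)
    report = []
    for c in range(k):
        members = data[km.labels_ == c]
        dispersion = np.linalg.norm(members - km.cluster_centers_[c], axis=1).mean()
        midpoint_distance = np.abs(km.cluster_centers_[c] - scale_midpoint).mean()
        report.append((c, len(members), round(dispersion, 2), round(midpoint_distance, 2)))
    # High dispersion combined with a low midpoint_distance (centroid hugging
    # the midpoint) is the pattern worth inspecting at the respondent level.
    return report
```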
3. Open-end content quality
Qualitative researchers have always known that open-end data from low-quality panels is largely useless. What's less appreciated is that it's actively misleading. AI-generated or templated responses—increasingly common as panel participants use LLMs to complete surveys—score well on traditional quality metrics (appropriate length, on-topic keywords) while contributing nothing to the thematic analysis. Worse, because they're coherent, text analytics and sentiment scoring weight them as genuine signal.
"The most dangerous kind of bad data is bad data that looks fine."
A theme that appears because twelve respondents gave GPT-4-generated variations of the same response isn't a theme. It's an artifact. But without specific AI-generation detection, it shows up in your thematic codebook, gets cited in the executive summary, and influences product decisions.
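One practical first pass, short of true AI-generation detection, is a near-duplicate screen: compute pairwise similarity across verbatims and review any cluster of mutually similar responses before it enters the codebook. The sketch below uses TF-IDF cosine similarity with an illustrative 0.85 threshold; it flags recycled wording, not authorship.

```python
# Minimal sketch: flag open-end pairs that are suspiciously similar to each
# other, a cheap first pass for templated or LLM-recycled verbatims. This is
# a near-duplicate screen, not an AI-generation detector.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similar_pairs(verbatims: list[str], threshold: float = 0.85):
    tfidf = TfidfVectorizer(ngram_range=(1, 2), min_df=1).fit_transform(verbatims)
    sims = cosine_similarity(tfidf)
    np.fill_diagonal(sims, 0.0)
    i, j = np.where(np.triu(sims) >= threshold)
    return [(a, b, round(sims[a, b], 2)) for a, b in zip(i, j)]

# Usage: review any cluster of mutually similar responses before it becomes
# a "theme" in the thematic codebook.
# suspects = similar_pairs(df["open_end"].tolist())
```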
4. Replication risk for published research
This one is specific to academic and quasi-academic research, but it's worth addressing directly because it's underappreciated. Journals like JAMA, MISQ, and the Journal of Consumer Research increasingly assign methods reviewers who ask about sample sourcing in detail. A paper that used an unverified panel—even if it passed editorial review—is now exposed to replication attempts using better-quality data. When the results don't replicate, the original paper gets annotated, and that annotation follows the researchers.
The cost isn't just methodological embarrassment. In regulated industries (pharma, medical devices, financial services), research that informs regulatory submissions or clinical guidelines faces a higher standard than it did five years ago. Panel sourcing documentation is no longer optional in those contexts.
How to estimate the actual cost in your context
Here's a rough framework. For each study, ask:
- What decision will this study inform? How reversible is that decision?
- What's the cost of the analysis layer (conjoint modeling, segmentation, regression)? Bad data doesn't reduce that cost—it just means you paid for analysis of garbage inputs.
- Will this study be cited externally—in a publication, a client presentation, a regulatory filing? What's the reputational cost of a failed replication or a methodology challenge?
- What's the re-field cost if quality problems are discovered after delivery? Most panels don't offer remediation credits large enough to cover it.
For most enterprise B2B studies, the analysis budget runs 3–5× the panel budget. Say you're spending $8,000 on panel access and $35,000 on analysis and reporting. The analysis cost is fixed regardless of data quality. If contamination means the segmentation doesn't hold, you've lost the $35,000 in addition to the $8,000, plus the cost of whatever business decision gets made on the bad findings.
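The arithmetic is worth making explicit. The sketch below uses the illustrative dollar figures from the paragraph above plus a failure probability and a decision cost you would have to estimate for your own context; both of those inputs are assumptions, not benchmarks.

```python
# Minimal sketch of the expected-loss arithmetic above. The failure
# probability and decision cost are assumptions you supply for your own
# context; the other figures are the illustrative ones from the text.
panel_cost = 8_000
analysis_cost = 35_000
p_findings_fail = 0.25    # assumed chance contamination invalidates the findings
decision_cost = 150_000   # assumed cost of acting on wrong findings

expected_loss = p_findings_fail * (panel_cost + analysis_cost + decision_cost)
print(f"Expected loss from contamination risk: ${expected_loss:,.0f}")
# Compare that figure against the premium a verified panel would charge.
```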
// What "34% rejection rate" actually means
Our average fraud rejection rate is 34%—meaning that out of every 100 people who attempt to enter a study on our platform, approximately 34 are rejected before completing. That's not a quality problem on our end. It's a reflection of how many people in a typical unverified panel pool shouldn't be in the data. If your current provider shows you a 5% rejection rate, they're either running very few checks or they're not showing you the number.
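The field-planning implication is mechanical: with a rejection rate r, delivering n clean completes requires roughly n / (1 - r) study entrants. A minimal sketch, ignoring incidence and mid-survey drop-off:

```python
# Minimal sketch: how a rejection rate translates into required study entrants.
# Incidence rates and mid-survey drop-off are ignored here for simplicity.
import math

def entrants_needed(target_n: int, rejection_rate: float) -> int:
    return math.ceil(target_n / (1.0 - rejection_rate))

print(entrants_needed(400, 0.34))  # ~607 entrants to deliver 400 clean completes
print(entrants_needed(400, 0.05))  # ~422 if only 5% are ever rejected
```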
What to look for in a panel provider
This isn't a sales pitch. The questions below are the right ones to ask any panel provider, including us.
- What is your documented rejection rate by fraud detection checkpoint? Not an aggregate—by checkpoint.
- Do you provide a quality audit report with every delivered dataset, or only on request?
- What is your device-level deduplication approach? Cookie-based deduplication is insufficient—it takes 30 seconds to clear cookies. (A minimal sketch of the difference appears at the end of this section.)
- How do you handle open-end quality? Minimum character length isn't a quality check.
- What is your false positive rate—how often do you reject legitimate respondents? Any provider who claims 0% has no effective quality system.
The last question is the most important. A provider who can't give you a false positive estimate hasn't measured it. A provider who says their false positive rate is zero has optimized to avoid the embarrassment of rejecting legitimate responses—at the cost of accepting fraudulent ones.
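On the deduplication question above: cookie checks compare a value the respondent fully controls, while a device-level approach keys on a bundle of harder-to-reset signals. The sketch below is deliberately simplified; the signal fields are illustrative, and real systems use far richer fingerprints with fuzzy matching rather than an exact hash.

```python
# Minimal sketch of device-level vs. cookie-based deduplication. The signal
# fields are illustrative; production fingerprinting uses many more signals
# and fuzzy matching rather than a single exact hash.
import hashlib

def device_key(signals: dict[str, str]) -> str:
    """Hash a bundle of device signals into a deduplication key."""
    raw = "|".join(f"{k}={signals[k]}" for k in sorted(signals))
    return hashlib.sha256(raw.encode()).hexdigest()

seen: set[str] = set()

def is_duplicate(signals: dict[str, str]) -> bool:
    key = device_key(signals)
    if key in seen:
        return True          # same device re-entering, regardless of cookies
    seen.add(key)
    return False

# Clearing cookies does not change these signals, so the key survives.
print(is_duplicate({"user_agent": "UA-1", "screen": "1920x1080", "tz": "UTC-5"}))
print(is_duplicate({"user_agent": "UA-1", "screen": "1920x1080", "tz": "UTC-5"}))
```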
The honest conclusion
Data quality costs money. Verification processes take time. Rejection rates above 30% mean you need to field more to hit your target n. None of this is free, and it shouldn't be presented as free.
But the alternative—running a $50,000 study on data that has 10–15% contamination baked in—isn't cheaper. It's just that the cost is invisible until the findings are wrong, the strategy misfires, or the journal review comes back with questions about sample sourcing that you can't answer.
The research teams that consistently produce reliable findings aren't the ones with the lowest cost-per-complete. They're the ones who track cost-per-useful-complete—and understand that those are different numbers.
Want to see our rejection rate breakdown?
We'll send you a redacted quality audit from a comparable B2B study—so you know what to expect before you run anything with us.
Request a Sample Audit Report