This page documents the actual logic—not the marketing version. If you're writing a methods section for a journal submission, this is the citation source.
Our validation model runs across three distinct stages: before a respondent enters your survey, during active completion, and after submission but before delivery.
The industry standard is to run 4–6 checks at the response level. That's not nothing, but it misses the category of fraud that's hardest to catch: sophisticated panel participants who know exactly how to game timing checks and attention questions.
Our pre-admission layer—before respondents see your first question—accounts for 11 of the 23 checkpoints. This is where we do the most consequential filtering.
Who gets invited into a study, and on what basis.
Quotas are applied against verified profile segments—not against raw panel counts. A "senior decision-maker" quota only fills from profiles with corroborated seniority signals, not self-reported titles. This adds 8–14 hours to feasibility assessment but eliminates a major source of segment contamination.
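For illustration, here's a minimal sketch of that admission rule. The profile fields and the corroborated-seniority flag are our stand-ins, not the production schema:

```python
# Minimal sketch: fill a quota cell only from externally corroborated
# profiles, never from self-reported titles alone. Schema is illustrative.
from dataclasses import dataclass

@dataclass
class Profile:
    id: str
    self_reported_title: str
    corroborated_seniority: bool  # set by the verification layer, not the respondent

def fill_seniority_quota(profiles: list[Profile], target: int) -> list[Profile]:
    """Admit profiles to a 'senior decision-maker' cell only when the
    seniority signal has been corroborated externally."""
    verified = [p for p in profiles if p.corroborated_seniority]
    return verified[:target]  # stop once the cell is full

candidates = [
    Profile("r1", "VP of Engineering", corroborated_seniority=True),
    Profile("r2", "CEO", corroborated_seniority=False),  # title unverified
]
print([p.id for p in fill_seniority_quota(candidates, target=10)])  # ['r1']
```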
We maintain seven distinct recruitment channels and disclose which channels contributed to your final sample on request. Panels assembled from a single high-traffic source carry a homogeneity risk that blended-source panels don't—we document this in your quality report.
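This page doesn't specify how homogeneity risk is quantified, so treat the concentration index below as one plausible illustration, a Herfindahl-style score over channel shares rather than the metric in the actual quality report:

```python
# Herfindahl-Hirschman index over recruitment channels: approaches 1/n for a
# well-blended n-channel sample, hits 1.0 for a single-source panel.
from collections import Counter

def channel_concentration(sample_channels: list[str]) -> float:
    counts = Counter(sample_channels)
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())

blended = ["organic", "referral", "b2b_list", "organic", "referral"]
single = ["high_traffic_ad"] * 5
print(round(channel_concentration(blended), 2))  # 0.36 -> low homogeneity risk
print(channel_concentration(single))             # 1.0  -> maximum risk
```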
Before you finalize your screener, we run a feasibility pass against current panel counts to estimate realistic incidence rates. Most providers won't share this until after launch—we give you the estimate upfront because it changes study design decisions.
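The arithmetic behind a feasibility pass is simple enough to sketch. The panel counts and rates below are invented; only the structure matters:

```python
# Estimated incidence from current panel counts, then expected completes.
def feasibility_estimate(qualifying_profiles: int, total_panel: int,
                         invites: int, response_rate: float) -> dict:
    incidence = qualifying_profiles / total_panel  # fraction expected to pass the screener
    expected_completes = invites * response_rate * incidence
    return {"incidence_rate": round(incidence, 3),
            "expected_completes": int(expected_completes)}

# e.g. 1,800 verified hospital pharmacists in a 120,000-member panel
print(feasibility_estimate(qualifying_profiles=1_800, total_panel=120_000,
                           invites=5_000, response_rate=0.30))
# {'incidence_rate': 0.015, 'expected_completes': 22}
```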
Panel members cannot participate in more than three studies per rolling 90-day window. For specialist panels (healthcare, C-suite), this cap drops to two. Over-surveyed respondents develop habitual response patterns that are difficult to screen out post-hoc.
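A rolling-window eligibility check along those lines might look like this sketch. The cap values come from this page; the history format and panel-type labels are illustrative:

```python
from datetime import date, timedelta

CAPS = {"general": 3, "specialist": 2}  # studies per rolling 90-day window

def is_eligible(completion_dates: list[date], panel_type: str, today: date) -> bool:
    window_start = today - timedelta(days=90)
    recent = sum(1 for d in completion_dates if d >= window_start)
    return recent < CAPS[panel_type]

history = [date(2024, 1, 5), date(2024, 2, 20)]
print(is_eligible(history, "specialist", today=date(2024, 3, 1)))  # False: cap of 2 reached
print(is_eligible(history, "general", today=date(2024, 3, 1)))     # True: still under cap of 3
```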
The eleven checkpoints that run before a respondent answers your first question.
Self-reported demographic data is the weakest link in any panel. Our verification layer cross-references profile data against four external registries—professional licensing bodies, company registration databases, firmographic APIs, and public LinkedIn signals—before a profile is admitted to any study.
We don't verify every attribute for every respondent—that would be unusably slow. Instead, we verify the attributes that matter most for your specific quota cells. A study targeting hospital pharmacists requires pharmacy board verification. A study targeting CFOs at $50M+ companies requires revenue-band corroboration.
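In sketch form, cell-driven verification is a lookup table from quota cell to required checks. The check names are hypothetical and the lookup function is a placeholder for the actual registry integrations:

```python
REQUIRED_CHECKS = {
    "hospital_pharmacist": ["pharmacy_board_license"],
    "cfo_50m_plus": ["company_registration", "revenue_band"],
}

def registry_lookup(check: str, profile: dict) -> bool:
    """Placeholder for an external registry/API call."""
    return profile.get(check) is not None

def verify_for_cell(profile: dict, cell: str) -> bool:
    # Admit only if every attribute the cell depends on is corroborated.
    return all(registry_lookup(check, profile) for check in REQUIRED_CHECKS[cell])

profile = {"pharmacy_board_license": "RPh-44812"}
print(verify_for_cell(profile, "hospital_pharmacist"))  # True
print(verify_for_cell(profile, "cfo_50m_plus"))         # False: revenue band not corroborated
```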
Twelve additional checks that run during and immediately after completion.
Per-question completion time is compared against a distribution built from verified-quality completions of similar question types. Responses more than 2.5 standard deviations below the median are flagged for review, not automatically rejected, because some legitimate respondents are fast readers.
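As a sketch, the speed flag reduces to a cutoff derived from the reference distribution. The reference times below are invented; the 2.5-standard-deviation rule is the one stated above:

```python
import statistics

def flag_speeders(reference_times: list[float],
                  respondent_times: list[float], k: float = 2.5) -> list[int]:
    """Return indices of questions answered suspiciously fast.
    Flagged responses go to review; nothing is auto-rejected here."""
    cutoff = statistics.median(reference_times) - k * statistics.stdev(reference_times)
    return [i for i, t in enumerate(respondent_times) if t < cutoff]

reference = [14.0, 16.5, 18.0, 15.2, 17.1, 16.0, 19.3, 14.8]  # seconds, verified completions
respondent = [15.0, 2.1, 16.4]
print(flag_speeders(reference, respondent))  # [1] -> second question flagged for review
```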
Straight-lining (selecting the same response scale position repeatedly) and seesaw patterns (alternating between extreme positions) are scored on a 0–100 suspicion index. Responses scoring above 72 enter manual review. We share our threshold calibration methodology on request.
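The exact features behind the index aren't published on this page, so the following is our illustration of how straight-line and seesaw signals could feed a 0–100 score:

```python
def suspicion_index(responses: list[int]) -> float:
    """Illustrative 0-100 score: share of consecutive answer pairs that are
    either identical (straight-lining) or extreme-to-extreme (seesawing)."""
    if len(responses) < 2:
        return 0.0
    pairs = list(zip(responses, responses[1:]))
    same = sum(1 for a, b in pairs if a == b)
    lo, hi = min(responses), max(responses)
    seesaw = sum(1 for a, b in pairs if {a, b} == {lo, hi})
    return 100.0 * max(same, seesaw) / len(pairs)

print(suspicion_index([3, 3, 3, 3, 3, 3]))  # 100.0 -> over 72, manual review
print(suspicion_index([1, 5, 1, 5, 1, 5]))  # 100.0 -> over 72, manual review
print(suspicion_index([2, 4, 3, 5, 2, 3]))  # 20.0  -> passes
```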
Attention checks are designed using known-answer items specific to the study topic, not generic "select the second option" instructions. A C-suite tech study, for example, might ask about a well-known industry fact with a single verifiable answer.
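A known-answer item reduces to an answer-key lookup at scoring time. The item below is invented for illustration; any topic-specific fact with one verifiable answer works the same way:

```python
ATTENTION_ITEMS = {
    "q_att_1": {
        "prompt": "Which company acquired GitHub in 2018?",
        "options": ["Google", "Microsoft", "Amazon", "IBM"],
        "answer": "Microsoft",  # verifiable industry fact
    }
}

def passes_attention_check(item_id: str, response: str) -> bool:
    return response == ATTENTION_ITEMS[item_id]["answer"]

print(passes_attention_check("q_att_1", "Microsoft"))  # True
print(passes_attention_check("q_att_1", "Google"))     # False -> check failed
```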
Open-ended responses are processed through an LLM-based pipeline that scores for: minimum meaningful length, semantic relevance to the question, similarity to other responses in the same batch, and AI-generation likelihood. Responses scoring below our acceptance threshold are rejected before delivery.
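In sketch form, the gate combines four sub-scores against an acceptance threshold. The stub inputs stand in for the actual LLM and embedding calls, and the weights and threshold are illustrative, not the calibrated values:

```python
ACCEPT_THRESHOLD = 0.6  # illustrative; the real threshold isn't published here

def score_open_end(text: str,
                   relevance: float,         # stub: semantic similarity to the question
                   batch_similarity: float,  # stub: max similarity to other responses
                   ai_likelihood: float) -> float:
    length_score = min(len(text.split()) / 10, 1.0)  # saturates around ten words
    # Duplicate-ness and AI-likelihood count against the score.
    return (0.25 * length_score + 0.35 * relevance
            + 0.20 * (1 - batch_similarity) + 0.20 * (1 - ai_likelihood))

good = score_open_end("We switched vendors because onboarding took eleven weeks "
                      "and support tickets went unanswered.",
                      relevance=0.9, batch_similarity=0.1, ai_likelihood=0.1)
bad = score_open_end("good product", relevance=0.3, batch_similarity=0.8, ai_likelihood=0.7)
print(good >= ACCEPT_THRESHOLD, bad >= ACCEPT_THRESHOLD)  # True False -> second rejected
```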
Every delivered dataset includes a quality audit report: rejection rates by checkpoint, quota performance vs. target, and flagged-but-retained cases with scoring rationale. You don't need to ask for this—it ships with your data.
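As a rough picture of what ships, the schema below is our guess at reasonable field names rather than the delivered format:

```python
# Illustrative shape of a quality audit report; values are invented.
audit_report = {
    "rejection_rates_by_checkpoint": {
        "identity_verification": 0.06,
        "per_question_timing": 0.03,
        "open_end_quality": 0.04,
    },
    "quota_performance": {
        "senior_decision_makers": {"target": 200, "delivered": 204},
    },
    "flagged_but_retained": [
        {"respondent": "r_1042",
         "checkpoint": "per_question_timing",
         "rationale": "fast reader; substantive open-ends, attention checks passed"},
    ],
}
print(audit_report["quota_performance"]["senior_decision_makers"])
```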
Talk to a Panel Specialist
Our research operations team can walk through any aspect of our validation framework, including the parts that make studies more expensive to run.