Research Methodology

How we decide what makes it into your dataset

This page documents the actual logic—not the marketing version. If you're writing a methods section for a journal submission, this is the citation source.

Framework Overview

Three stages, twenty-three checkpoints

Our validation model runs across three distinct stages: before a respondent enters your survey, during active completion, and after submission before delivery.

The industry standard is to run 4–6 checks at the response level. That's not nothing, but it misses the category of fraud that's hardest to catch: sophisticated panel participants who know exactly how to game timing checks and attention questions.

Our pre-admission layer—before respondents see your first question—accounts for 11 of the 23 checkpoints. This is where we do the most consequential filtering.

Stage 1

Sampling & Recruitment Framework

Who gets invited into a study, and on what basis.

Probabilistic Quota Management

Quotas are applied against verified profile segments—not against raw panel counts. A "senior decision-maker" quota only fills from profiles with corroborated seniority signals, not self-reported titles. This adds 8–14 hours to feasibility assessment but eliminates a major source of segment contamination.
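As a rough sketch of the filling rule (class and field names here are illustrative, not our internal data model), a quota cell only counts a profile when every required attribute is both present and independently corroborated:

```python
# Illustrative sketch: a quota cell only fills from profiles whose relevant
# attributes have been corroborated, not merely self-reported.
# Names (Profile, QuotaCell, verified_attributes) are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Profile:
    panelist_id: str
    self_reported: dict                         # e.g. {"seniority": "VP"}
    verified_attributes: set = field(default_factory=set)  # e.g. {"seniority"}

@dataclass
class QuotaCell:
    name: str
    target_n: int
    required_attributes: dict                   # e.g. {"seniority": "VP"}
    filled: int = 0

def eligible_for_cell(profile: Profile, cell: QuotaCell) -> bool:
    """A profile counts toward a cell only if every required attribute
    is both present and independently verified."""
    for attr, value in cell.required_attributes.items():
        if profile.self_reported.get(attr) != value:
            return False
        if attr not in profile.verified_attributes:   # self-reported only: excluded
            return False
    return True

def fill_cell(cell: QuotaCell, candidates: list[Profile]) -> list[Profile]:
    """Admit eligible candidates until the cell reaches its target."""
    admitted = []
    for p in candidates:
        if cell.filled >= cell.target_n:
            break
        if eligible_for_cell(p, cell):
            admitted.append(p)
            cell.filled += 1
    return admitted
```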

Recruitment Source Transparency

We maintain seven distinct recruitment channels and disclose which channels contributed to your final sample on request. Panels assembled from a single high-traffic source carry a homogeneity risk that blended-source panels don't—we document this in your quality report.

Incidence Rate Pre-Qualification

Before you finalize your screener, we run a feasibility pass against current panel counts to estimate realistic incidence rates. Most providers won't share this until after launch—we give you the estimate upfront because it changes study design decisions.
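The arithmetic behind the estimate is simple even though the inputs are not; a back-of-envelope version, with purely illustrative numbers and an assumed invitation response rate, looks like this:

```python
# Back-of-envelope incidence estimate; the real feasibility pass runs against
# current panel counts and verified profile segments, not these toy numbers.
import math

def estimate_incidence(qualifying_profiles: int, reachable_profiles: int) -> float:
    """Estimated incidence rate: share of screener entrants expected to qualify."""
    return qualifying_profiles / reachable_profiles

def invitations_needed(target_completes: int, incidence: float,
                       response_rate: float = 0.15) -> int:
    """Invitations required to hit a target, given incidence and an assumed
    invitation-to-screener response rate (0.15 here is purely illustrative)."""
    return math.ceil(target_completes / (incidence * response_rate))

# Example: 1,200 qualifying profiles out of 9,000 reachable -> ~13% incidence
ir = estimate_incidence(1_200, 9_000)
print(f"Estimated incidence: {ir:.1%}")               # ~13.3%
print(f"Invitations for n=200: {invitations_needed(200, ir)}")
```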

Survey Frequency Capping

Panel members cannot participate in more than three studies per rolling 90-day window. For specialist panels (healthcare, C-suite), this cap drops to two. Over-surveyed respondents develop habitual response patterns that are difficult to screen out post-hoc.
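The cap is a plain rolling-window rule. A minimal sketch, assuming a list of past participation timestamps per panelist (names are illustrative):

```python
# Sketch of the rolling-window frequency cap: three studies per 90 days,
# two for specialist panels (healthcare, C-suite). Names are illustrative.
from datetime import datetime, timedelta

CAP_DEFAULT = 3
CAP_SPECIALIST = 2
WINDOW = timedelta(days=90)

def may_invite(participation_dates: list[datetime], is_specialist_panel: bool,
               now: datetime | None = None) -> bool:
    """True if the panelist is under the cap for the rolling 90-day window."""
    now = now or datetime.utcnow()
    recent = [d for d in participation_dates if now - d <= WINDOW]
    cap = CAP_SPECIALIST if is_specialist_panel else CAP_DEFAULT
    return len(recent) < cap
```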

Stage 2

Identity Verification & Pre-Admission Checks

The eleven checkpoints that run before a respondent answers your first question.


Deterministic Identity Matching

Self-reported demographic data is the weakest link in any panel. Our verification layer cross-references profile data against four external registries—professional licensing bodies, company registration databases, firmographic APIs, and public LinkedIn signals—before a profile is admitted to any study.

We don't verify every attribute for every respondent—that would be unusably slow. Instead, we verify the attributes that matter most for your specific quota cells. A study targeting hospital pharmacists requires pharmacy board verification. A study targeting CFOs at $50M+ companies requires revenue-band corroboration.
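A sketch of that study-specific selection, with a hypothetical mapping from quota requirements to the four registry types above; the mapping itself is illustrative, not our production configuration:

```python
# Illustrative mapping from quota-cell requirements to the verification
# sources that must corroborate them before admission. Not a production config.
VERIFICATION_SOURCES = {
    "pharmacy_license":  "professional licensing body",
    "company_exists":    "company registration database",
    "revenue_band":      "firmographic API",
    "job_title_signal":  "public LinkedIn signal",
}

STUDY_REQUIREMENTS = {
    "hospital_pharmacists": ["pharmacy_license"],
    "cfo_50m_plus":         ["company_exists", "revenue_band", "job_title_signal"],
}

def checks_for_study(study_key: str) -> list[str]:
    """Return only the verifications that matter for this study's quota cells,
    rather than verifying every attribute for every respondent."""
    return [VERIFICATION_SOURCES[req] for req in STUDY_REQUIREMENTS[study_key]]

print(checks_for_study("hospital_pharmacists"))
# ['professional licensing body']
```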

Pre-admission checkpoint summary

Geo-IP vs. profile address match: Active
VPN / proxy detection: Active
Device fingerprint deduplication: Active
Known bad-actor ID list match: Active
Profile freshness check (max 180 days stale): Active
Firmographic attribute corroboration: Study-specific
Stage 3

In-Survey Behavioral Scoring

Twelve additional checks that run during and immediately after completion.

01

Response Timing Analysis

Per-question completion time is compared against a distribution built from verified-quality completions of similar question types. Responses more than 2.5 standard deviations below the median are flagged for review rather than automatically rejected, because some legitimate respondents are fast readers.
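A minimal sketch of the flagging rule as stated, where the reference distribution is simply a set of completion times from verified-quality responses to similar questions (function names are illustrative):

```python
# Sketch: flag (not reject) per-question times far below the reference
# distribution built from verified-quality completions of similar questions.
import statistics

FLAG_THRESHOLD_SD = 2.5

def flag_for_review(seconds: float, reference_times: list[float]) -> bool:
    """Flag if the response time is more than 2.5 SD below the reference median."""
    median = statistics.median(reference_times)
    sd = statistics.stdev(reference_times)
    return seconds < median - FLAG_THRESHOLD_SD * sd

# Example: a 4-second answer against a reference centred around ~30 seconds
ref = [22.0, 28.5, 30.0, 31.2, 35.0, 40.1, 26.3, 33.7]
print(flag_for_review(4.0, ref))   # True -> routed to manual review
```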

02

Pattern Detection

Straight-lining (selecting the same response scale position repeatedly) and seesaw patterns (alternating between extreme positions) are scored on a 0–100 suspicion index. Responses above 72 enter manual review. We publish our threshold calibration methodology on request.
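For illustration, two of the signals such an index can combine are the share of identical consecutive answers and the share of flips between scale extremes; the scoring and scaling below are ours for the example, not the calibrated production model:

```python
# Illustration only: two common inputs to a straight-lining / seesaw score.
# The real suspicion index and its calibration are not reproduced here.
REVIEW_THRESHOLD = 72

def suspicion_index(responses: list[int], scale_max: int = 5) -> float:
    """Score 0-100 from (a) share of identical consecutive answers and
    (b) share of consecutive flips between the two scale extremes."""
    if len(responses) < 2:
        return 0.0
    pairs = list(zip(responses, responses[1:]))
    straight = sum(a == b for a, b in pairs) / len(pairs)
    seesaw = sum({a, b} == {1, scale_max} for a, b in pairs) / len(pairs)
    return 100 * max(straight, seesaw)

print(suspicion_index([3, 3, 3, 3, 3, 3]))     # 100.0 -> manual review
print(suspicion_index([1, 5, 1, 5, 1, 5]))     # 100.0 -> manual review
print(suspicion_index([2, 4, 3, 5, 2, 3]))     # 0.0   -> passes
```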

03

Embedded Validation Questions

Attention checks are designed using known-answer items specific to the study topic rather than generic "select the second option" instructions. A C-suite tech study, for example, might ask about a well-known industry fact with a single verifiable correct answer, and the response is scored against that answer.

04

Open-End Quality Scoring

Open-ended responses are processed through an LLM-based pipeline that scores for: minimum meaningful length, semantic relevance to the question, similarity to other responses in the same batch, and AI-generation likelihood. Responses scoring below our acceptance threshold are rejected before delivery.
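The four dimensions map naturally onto a per-response score object. In the sketch below the scorers themselves are omitted, since the LLM pipeline and the acceptance threshold are internal; field names and threshold values are illustrative:

```python
# Sketch of the four scoring dimensions named above. The relevance and
# AI-likelihood scorers are produced upstream by an LLM-based pipeline;
# the thresholds here are placeholders, not the production values.
from dataclasses import dataclass

@dataclass
class OpenEndScore:
    length_ok: bool              # minimum meaningful length
    relevance: float             # semantic relevance to the question, 0-1
    max_batch_similarity: float  # similarity to other responses in the batch, 0-1
    ai_likelihood: float         # estimated probability the text is AI-generated, 0-1

def accept(score: OpenEndScore,
           min_relevance: float = 0.5,
           max_similarity: float = 0.9,
           max_ai: float = 0.8) -> bool:
    """Illustrative acceptance rule; rejected responses never reach delivery."""
    return (score.length_ok
            and score.relevance >= min_relevance
            and score.max_batch_similarity <= max_similarity
            and score.ai_likelihood <= max_ai)
```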

Transparency by default

Every delivered dataset includes a quality audit report: rejection rates by checkpoint, quota performance vs. target, and flagged-but-retained cases with scoring rationale. You don't need to ask for this—it ships with your data.
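For reference, the report's shape is roughly the following; field names and values here are hypothetical, not the exact export schema:

```python
# Illustrative shape of the per-delivery quality audit report described above;
# keys and values are hypothetical examples, not a real export.
audit_report = {
    "rejection_rates_by_checkpoint": {
        "geo_ip_mismatch": 0.021,
        "device_fingerprint_duplicate": 0.008,
        "open_end_quality": 0.034,
    },
    "quota_performance": [
        {"cell": "senior_decision_makers", "target": 150, "delivered": 152},
    ],
    "flagged_but_retained": [
        {"respondent_id": "r_1842", "checkpoint": "response_timing",
         "rationale": "fast reader; all other checks passed"},
    ],
}
```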

Common Questions

What researchers actually ask us

Can we cite your methodology in a journal submission?
Yes. We provide a formal methodology citation document on request, formatted for APA, AMA, and Vancouver styles. This document has passed peer review scrutiny at Nature, JAMA, and several management journals. It describes recruitment sources, verification protocols, and quality checkpoints in language that reviewers accept.

What happens when a respondent is terminated mid-survey?
Mid-survey terminations are recorded with a reason code. Respondents are notified of the termination but not given the specific rejection reason (to prevent gaming). Their profile is flagged for the relevant study type, which affects their invitation eligibility for future similar studies. This creates a behavioral incentive for quality that purely financial incentive structures don't produce.

How do you handle GDPR compliance for EU panel members?
All EU panel members have provided explicit, granular consent under GDPR Article 6(1)(a). Consent records are maintained with timestamp, IP, and specific consent language version. Data processing agreements (DPAs) are available for every client study. We are registered with the relevant supervisory authority in Germany, France, and the Netherlands, and maintain a data protection officer on staff.

What is your false positive rate?
Based on our manual review sample (we manually review 8% of all flagged responses), our false positive rate is approximately 4.2%—meaning about 4 in every 100 rejections are legitimate responses that triggered a filter incorrectly. This is a known limitation. We err on the side of exclusion, which is why we always field slightly above target sample size to account for this. The alternative—a 0% false positive rate—would require loosening filters that currently catch real fraud.

Can we access data while the study is still fielding?
Yes, with a caveat. You can access a live data stream through our dashboard, but we apply a 15-minute processing delay to ensure preliminary quality filters have run before data is visible. Exporting unprocessed raw data during fielding is technically possible but not recommended—and we'll tell you so clearly if you request it.

Questions about our methodology?

Our research operations team can walk through any aspect of our validation framework—including the parts that make studies more expensive to run.