Quality Control

Twenty-three filters. Every one documented. Every rejection logged.

Our quality framework isn't a black box. You can see exactly what was rejected, why, and what the flagged-but-retained cases looked like for every study you run with us.

23 quality checkpoints per response
34% average fraud rejection rate
4.2% false positive rate (manual review)
24h quality audit turnaround
The Full Framework

What we check, when we check it

Organized by when each filter runs in the respondent journey.

1. Pre-entry validation

Runs before the first survey question loads (11 checkpoints)

Geo-IP Triangulation

Device IP must resolve to within 120 km of the profile's registered country. Mismatches above 500 km trigger automatic rejection; mismatches of 120–500 km require a secondary location signal (mobile carrier data or Wi-Fi SSID country code) to proceed.
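
A minimal sketch of that decision logic, not production code: it assumes the registered country has already been reduced to a reference coordinate and that any secondary location signal has been resolved to a boolean; the haversine distance and function names are illustrative.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geo_ip_decision(ip_coords, profile_coords, has_secondary_signal):
    """Apply the 120 km / 500 km thresholds described above."""
    distance = haversine_km(*ip_coords, *profile_coords)
    if distance <= 120:
        return "pass"
    if distance > 500:
        return "reject"
    # 120-500 km band: needs a confirmed carrier or Wi-Fi country match
    return "pass" if has_secondary_signal else "reject"
```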

VPN & Proxy Detection

We maintain a live database of 48,000+ known VPN exit nodes, data center IP ranges, and residential proxy services. Any match against this database triggers rejection. The database is refreshed every 6 hours from three commercial threat-intelligence feeds.
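
At its core this is a set-membership test against known network ranges. A simplified sketch using Python's standard ipaddress module; the two networks below are documentation ranges standing in for the real feeds, and a production lookup over 48,000+ ranges would use a radix tree or similar rather than a linear scan.

```python
import ipaddress

# Stand-in excerpt of the exclusion database: known VPN / data-center / proxy ranges.
KNOWN_PROXY_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_proxy_ip(ip_string: str) -> bool:
    """Reject if the respondent IP falls inside any listed VPN/proxy range."""
    ip = ipaddress.ip_address(ip_string)
    return any(ip in network for network in KNOWN_PROXY_NETWORKS)

# Example: is_proxy_ip("203.0.113.17") -> True
```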

Device Fingerprinting

Hardware-level device identity (based on 94 device attributes, not cookies) is matched against our global device registry. The same physical device cannot contribute more than one response per study, regardless of browser, IP, or profile identity used.
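
A simplified sketch of the dedup step, assuming the device attributes arrive as a dictionary; it uses an exact hash and ignores the fuzzy matching a real fingerprint needs to tolerate attribute drift.

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Stable hash over the (hypothetical) device attribute dictionary."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

seen_devices_for_study: set[str] = set()

def admit_device(attributes: dict) -> bool:
    """Allow one response per physical device per study."""
    fp = device_fingerprint(attributes)
    if fp in seen_devices_for_study:
        return False  # duplicate device, regardless of browser, IP, or profile identity
    seen_devices_for_study.add(fp)
    return True
```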

Bad-Actor ID List

We maintain a cross-panel exclusion list of approximately 340,000 profiles identified as fraudulent through our own detection and through industry data-sharing agreements with three major panel networks. This list is matched before every study invitation.

Profile Freshness Validation

Panel profiles must have been updated (confirmed or activity-refreshed) within the last 180 days to be invitation-eligible. Stale profiles—particularly common in B2B panels where job roles change—are quarantined for re-verification rather than silently removed.
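
In code terms the rule is an age check with a quarantine path rather than deletion; a sketch with hypothetical names.

```python
from datetime import datetime, timedelta, timezone

FRESHNESS_WINDOW = timedelta(days=180)

def freshness_status(last_updated: datetime) -> str:
    """Eligible if updated within 180 days; otherwise quarantined for re-verification.

    Expects a timezone-aware datetime.
    """
    age = datetime.now(timezone.utc) - last_updated
    return "eligible" if age <= FRESHNESS_WINDOW else "quarantine"
```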

Survey Frequency Check

Completion velocity check: if a profile has completed more than three studies in the past 30 days, it is deprioritized in quota filling. Profiles with more than six completions in that window are excluded for the current month regardless of quota pressure.
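
The thresholds translate directly to a small decision function; a sketch (the status strings are illustrative).

```python
def frequency_decision(completions_last_30_days: int) -> str:
    """Apply the 3-study / 6-study thresholds described above."""
    if completions_last_30_days > 6:
        return "exclude_for_month"
    if completions_last_30_days > 3:
        return "deprioritize"
    return "normal_priority"
```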

2. In-survey behavioral scoring

Runs in real time during completion (7 checkpoints)

Per-Question Timing

Completion time for each question type (Likert, multi-select, open-end, ranking) is scored against type-specific benchmarks derived from 4.2M validated completions. Questions answered in under 40% of benchmark median trigger a timing flag.
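
A sketch of the flag, assuming per-type benchmark medians are available; the numbers below are placeholders, not our actual benchmarks.

```python
# Placeholder benchmark medians (seconds) per question type.
BENCHMARK_MEDIAN_SECONDS = {
    "likert": 6.0,
    "multi_select": 9.0,
    "open_end": 25.0,
    "ranking": 14.0,
}

def timing_flag(question_type: str, seconds_taken: float) -> bool:
    """Flag answers completed in under 40% of the type-specific benchmark median."""
    median = BENCHMARK_MEDIAN_SECONDS[question_type]
    return seconds_taken < 0.40 * median
```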

Straight-Line Detection

Scale consistency index is calculated across all Likert-type items. A score above 0.91 (very high consistency across all items) is flagged—high consistency in real data rarely reaches this level even when respondents hold genuinely uniform views.
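
A sketch assuming the index is computed as the share of Likert items matching the respondent's most common answer; that construction is an illustrative proxy, not the exact production formula.

```python
from collections import Counter

def consistency_index(likert_answers: list[int]) -> float:
    """Share of Likert items equal to the respondent's most common answer."""
    most_common_count = Counter(likert_answers).most_common(1)[0][1]
    return most_common_count / len(likert_answers)

def straight_line_flag(likert_answers: list[int], threshold: float = 0.91) -> bool:
    """Flag when the consistency index exceeds the 0.91 threshold."""
    return consistency_index(likert_answers) > threshold
```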

Embedded Attention Items

Two to four attention check items are embedded in each study, calibrated to study length. Unlike generic "select the third option" instructions, ours are topic-relevant and verifiable—failure indicates disengagement rather than misunderstanding.

3. Post-submission quality processing

Runs after completion, before delivery (5 checkpoints)

Open-End AI Similarity Scoring

All open-ended responses are embedded and compared against three similarity measures: cosine similarity to other responses in the same batch (cluster fraud detection), semantic relevance to the question asked, and AI-generation likelihood scoring. Responses that flag on two of the three measures are rejected; responses that flag on one are held for manual review and delivered within 4 hours with a scoring annotation in the data file.
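
A sketch of the two-of-three combination logic, assuming the three scores are computed upstream from the embeddings; the cutoff values are illustrative assumptions, not published thresholds.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def open_end_decision(batch_similarity: float, relevance: float, ai_likelihood: float) -> str:
    """Combine the three measures: two flags reject, one flag goes to manual review."""
    flags = sum([
        batch_similarity > 0.92,  # near-duplicate of another response in the batch (assumed cutoff)
        relevance < 0.30,         # weak semantic relevance to the question (assumed cutoff)
        ai_likelihood > 0.80,     # high AI-generation likelihood (assumed cutoff)
    ])
    if flags >= 2:
        return "reject"
    if flags == 1:
        return "manual_review"
    return "deliver"
```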

Cross-Response Consistency

For studies with logically related question pairs, we run consistency checks—e.g., a respondent who reports "no IT budget authority" but claims to be "the primary decision-maker for software procurement" triggers a flag. These are scored rather than automatically rejected because role definitions vary.
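
A sketch of one such rule pair expressed as a scored flag rather than an automatic rejection; the field names are hypothetical.

```python
def consistency_flags(response: dict) -> list[str]:
    """Collect flags for logically inconsistent answer pairs (example rule from above)."""
    flags = []
    if (response.get("it_budget_authority") == "none"
            and response.get("software_procurement_role") == "primary_decision_maker"):
        flags.append("budget_authority_vs_procurement_role")
    return flags
```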

Benchmarking

How our framework compares

Industry data from the Insights Association's 2024 Panel Quality Report, with our own figures.

Quality Metric | Industry Average | Top-Quartile Providers | QRCSurvey
Pre-admission checks | 4–6 | 8–10 | 11
Fraud rejection rate | 8–12% | 18–22% | 34% avg
Device-level deduplication | Cookie-based only | Device fingerprint | Hardware-level (94 attr.)
Open-end quality scoring | Minimum length only | Basic keyword filter | LLM similarity + relevance
Quality audit delivery | On request (3–5d) | On request (24h) | Automatic with every dataset
False positive rate | Not measured | ~8–12% estimated | 4.2% (manual review verified)

Source: Insights Association Panel Quality Report 2024; QRCSurvey internal data (Jan–Dec 2024, n=4,200 studies)

Want to see a sample quality audit report?

We'll send you a redacted version from a recent comparable study—so you know exactly what you're getting before you run anything.