As a Quality Analyst within Prolific AI Data Services, you will play a critical role in ensuring the consistency, reliability, and integrity of the human data we deliver to customers. As task types and volumes grow, quality can’t rely on spot checks or tribal knowledge—this role helps operationalise quality through measurement, early detection, and clear feedback loops. You’ll work closely with Operations, reviewers, and Quality and Delivery leads to run day-to-day quality measurement, surface issues early, and help turn fixes into repeatable workflows that scale across studies and customers.
Job Responsibilities:
Run day-to-day quality measurement for managed service studies, including sampling, rubric-based review, and structured reporting
Apply rubrics and guidelines consistently, flag ambiguities, and propose clear, actionable clarifications as task types evolve
Maintain and update golden sets, and support calibration efforts by preparing materials, tracking outcomes, and documenting decisions
Track defects and failure modes by categorising issues, assessing severity, and keeping defect taxonomies current
Run automated quality checks using SQL and Python or R, including completeness checks, schema and format validation, anomaly detection, and label distribution monitoring
Support launch gates by preparing readiness artefacts such as rubric versioning, gold status, sampling plans, and early pilot findings
Investigate quality drops by pulling examples, quantifying scope, and supporting root-cause analysis and corrective and preventive actions (CAPAs)
Build dashboards and weekly updates covering defect rates, rework, agreement, throughput versus quality trade-offs, and SLA risk
Partner with Operations, reviewers, and—when needed—Product and Engineering to improve task design, reviewer workflows, and escalation paths
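To make the automated-checks responsibility concrete, here is a minimal sketch of the kind of completeness, schema, and label-distribution checks described above. The field names, allowed labels, and record layout are illustrative assumptions, not Prolific's actual data model.

```python
from collections import Counter

# Hypothetical record schema for illustration only; field names and the
# allowed label set are assumptions, not Prolific's actual data model.
REQUIRED_FIELDS = {"task_id", "annotator_id", "label"}
ALLOWED_LABELS = {"positive", "negative", "neutral"}


def completeness_check(records):
    """Return (index, missing_fields) for records with absent or empty required fields."""
    bad = []
    for i, rec in enumerate(records):
        present = {k for k, v in rec.items() if v not in (None, "")}
        missing = REQUIRED_FIELDS - present
        if missing:
            bad.append((i, sorted(missing)))
    return bad


def schema_check(records):
    """Return indices of records whose label falls outside the allowed set."""
    return [i for i, r in enumerate(records) if r.get("label") not in ALLOWED_LABELS]


def label_distribution(records):
    """Share of each label value, suitable for monitoring drift against a baseline."""
    counts = Counter(r.get("label") for r in records if r.get("label"))
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}
```

In practice checks like these would run against query results (e.g. pulled via SQL) on a schedule, with failures feeding the defect taxonomy and dashboards mentioned elsewhere in this posting.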
Requirements:
1–3 years of experience in analysis, quality, operations, or data-focused roles, or equivalent academic projects or internships
A background in computer science, data science, statistics, engineering, or a related quantitative discipline
Strong SQL skills (required), with the ability to independently query, join, and summarise datasets to answer quality and operational questions
Working proficiency in Python and/or R, with experience writing scripts or notebooks to clean data, compute metrics, and automate checks
Strong analytical rigour, with a high bar for precision, consistency, and clear documentation of decisions
Solid statistical fundamentals, including intuition for sampling, distributions, confidence intervals, and inter-rater agreement concepts
Clear and structured communication skills, with the ability to explain issues using concrete examples and propose practical fixes
A collaborative mindset and willingness to partner cross-functionally to improve quality outcomes and delivery reliability
Nice to have:
Experience with human data workflows such as annotation or data labelling, or ML/LLM evaluation (rubric-based evals, preference data, red teaming)
Familiarity with QA concepts such as defect severity, leakage, root-cause analysis, and CAPA processes
Experience building or maintaining dashboards in BI tools such as Looker, Tableau, or Mode
What we offer:
Competitive salary, benefits, and hybrid working within our impactful, mission-driven culture
Equity
Opportunity to earn a cash variable element, such as a bonus or commission