Mercor sits at the intersection of labor markets and AI research. We partner with leading AI labs and enterprises to provide the human intelligence essential to AI development. Our vast talent network trains frontier AI models the way teachers teach students: by sharing knowledge, experience, and context that can't be captured in code alone. Today, more than 30,000 experts in our network collectively earn over $2 million a day.

Mercor is creating a new category of work where expertise powers AI advancement. Achieving this requires an ambitious, fast-paced, and deeply committed team. You'll work alongside researchers, operators, and AI companies at the forefront of shaping the systems that are redefining society.

Mercor is a profitable Series C company valued at $10 billion. We work in person five days a week in our new San Francisco headquarters.
Job Responsibilities:
Building robust pipelines to ingest, transform, and consolidate data from diverse sources (e.g., MongoDB, Airtable, PostHog, production databases)
Designing dbt models and transformations to standardize and unify many disparate tables into clean, production-ready schemas
Implementing scalable, fault-tolerant data workflows with Fivetran, dbt, SQL, and Python
Partnering with engineers, data scientists, and business stakeholders to ensure data availability, accuracy, and usability
Owning data quality and reliability across the stack, from ingestion through to consumption
Continuously improving pipeline performance, monitoring, and scalability
Requirements:
Proven experience in data engineering
Strong knowledge of SQL, Python, and modern data stack tools (Fivetran, dbt, Snowflake or similar)
Experience building and maintaining large-scale ETL/ELT pipelines across heterogeneous sources (databases, analytics platforms, SaaS tools)
Strong understanding of data modeling, schema design, and transformation best practices
Familiarity with data governance, monitoring, and quality assurance
Comfort working cross-functionally with engineering, product, and operations teams
Nice to have:
Prior experience supporting machine learning workflows or analytics platforms
What we offer:
Generous equity grant vesting over 4 years
A $20K relocation bonus (if moving to the Bay Area)
A $10K housing bonus (if you live within 0.5 miles of our office)