Product Content Engineering is a horizontal function supporting initiatives across Meta's family of apps. We partner closely with product and technical teams to solve problems by providing content-centered solutions, setting standards of quality, and building the frameworks that ensure AI-powered experiences actually work for people. We're looking for a Content Engineer to join our AI Discovery team and help define how Meta evaluates and improves AI content experiences. You'll work at the intersection of content quality, AI evaluation, and the search and recommendation systems that power Meta's products: building the frameworks, rubrics, and pipelines that hold AI outputs to a high standard. You'll assess model behavior, identify where it falls short, and work cross-functionally with engineering, product, research, and data science teams to make it better.
Job Responsibilities:
Define content quality standards and use them to systematically evaluate how AI models are performing across our products and content experiences
Design golden sets, taxonomies, and guidelines that enable consistent, repeatable content quality assessments
Build repeatable workflows for collecting, annotating, and analyzing AI outputs so evaluations can run efficiently as models evolve
Evaluate successive model releases through structured comparison, documenting what improved, what regressed, and what to prioritize next
Design evaluation frameworks that integrate qualitative and quantitative signals to measure dimensions like user trust, content depth, and topical relevance
Develop processes to track content quality and model performance over time and flag regressions
Synthesize evaluation results into structured error patterns and concrete recommendations that engineering and product teams can act on
Work cross-functionally with engineers, data scientists, product managers, and content strategists to align AI behaviors with real-world user expectations
Requirements:
5+ years of experience working collaboratively with product, engineering, design, and user research teams
1+ years working with generative AI products, AI evaluation, prompt engineering, annotation, and/or content labeling and analysis
Experience designing and implementing evaluation frameworks, annotation guidelines, or quality rubrics for AI/ML systems
Demonstrated data analysis skills, with experience exploring data, identifying patterns, and producing actionable insights
Experience building new products or platform/ecosystem products
Critical thinking, experience leading data-driven analyses to inform product or content decisions, and experience communicating to executive leadership
Proven track record of cross-functional collaboration and delivering results in environments with evolving requirements and competing priorities
Nice to have:
Experience with Python, SQL, or other tools for data analysis and evaluation automation
Familiarity with AI evaluation methods such as human eval, model-as-judge, A/B testing, or red-teaming
Experience building dashboards, scripts, or workflows that codify evaluation metrics
Background in content strategy, information quality, or trust and safety
BA or BS in Computer Science, Data Science, Linguistics, or related field