Meta is seeking a Data Scientist to join the Evaluations team within Meta Superintelligence Labs (MSL). Evaluations are at the core of AI progress at MSL, determining which capabilities get built, which features get prioritized, and how fast our models improve. As a Data Scientist on this team, you will be responsible for the scientific rigor behind our frontier AI benchmarks, working in tandem with world-class Research Scientists and Engineers to design, validate, and analyze novel evaluations that shape the future of AI capability measurement.

This role is for a technical Data Science expert who can bridge the gap between abstract model capabilities and rigorous, unbiased measurement. You will lead sampling strategies for a variety of AI tasks, critically examine benchmark quality and validity, and perform deep-dive analyses of current frontier models' failures and limitations. You will have the opportunity to conduct novel research, think creatively about measurement in uncharted territories, and contribute to the global AI community.
Job Responsibilities:
Scientific Design & Validity: Lead the design of evaluation stimuli and benchmarks, ensuring they have minimal bias and high construct validity for frontier LLM capabilities
Experimental Methodology: Design and execute effective sampling strategies and experimental frameworks to measure model performance and errors accurately
Deep-Dive Analysis: Perform rigorous data and model error analyses to provide deep insights into model behavior, quality gaps, and failure modes
Collaborative Research: Partner closely with Research Scientists and Engineers to translate organizational priorities into measurable, scientifically sound benchmarks
External Impact: Drive the publication of novel evaluation research and the open-sourcing of benchmarks to influence the broader AI research community
Strategic Influence: Use data-driven insights to influence research directions and major model development lines within MSL
Requirements:
Bachelor's degree in Computer Science, Computer Engineering, Mathematics, Statistics, a relevant technical field, or equivalent practical experience
A minimum of 6 years of work experience in analytics (minimum of 4 years with a Ph.D.)
Experience with data querying languages (e.g., SQL), scripting languages (e.g., Python), and/or statistical/mathematical software (e.g., R)
Nice to have:
Advanced Quantitative Background: Master’s or Ph.D. in a quantitative or experimentation-heavy field (e.g., Statistics, Psychology, Economics, Quantitative Social Sciences, or a related technical field)
Publication Record: Publications at top-tier peer-reviewed venues (e.g., NeurIPS, ICML, ICLR, ACL, or field-specific journals) related to measurement, evaluation, or experimental design
Evaluation Expertise: Recognized expertise in language model evaluation, psychometrics, or the science of benchmarking
Open Source & Community: A track record of open-source contributions to evaluation tools, datasets, or benchmarks
Domain Knowledge: Familiarity with language model post-training, RLHF, or the nuances of LLM failure modes