LMArena is seeking a Machine Learning Scientist to help advance how we evaluate and understand AI models. You’ll help design and analyze experiments that uncover, through human preference signals, what makes models useful, trustworthy, and capable. Your work will contribute to the scientific foundations of understanding AI at scale.

This role is deeply interdisciplinary. You’ll work closely with engineers, product teams, marketing, and the broader research community to develop new methods for comparing models, analyzing preference data, and disentangling performance factors such as style, reasoning, and robustness. Your work will inform both the public leaderboard and the tools we provide to model developers.
Job Responsibilities:
Design and conduct experiments to evaluate AI model behavior across reasoning, style, robustness, and user preference dimensions
Develop new metrics, methodologies, and evaluation protocols that go beyond traditional benchmarks
Analyze large-scale human voting and interaction data to uncover insights into model performance and user preferences
Collaborate with engineers to implement and scale research findings into production systems
Prototype and test research ideas rapidly, balancing rigor with iteration speed
Author internal reports and external publications that contribute to the broader ML research community
Partner with model providers to shape evaluation questions and support responsible model testing
Contribute to the scientific integrity and transparency of the LMArena leaderboard and tools
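To give a flavor of the pairwise-vote analysis described above, here is a minimal sketch of Elo-style rating from human preference votes. The function names, the K-factor, and the data shape are illustrative assumptions, not LMArena's actual methodology.

```python
import math
from collections import defaultdict

def expected_score(r_a, r_b):
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_ratings(battles, k=4.0, base=1000.0):
    """Compute online Elo ratings from a sequence of pairwise votes.

    battles: iterable of (model_a, model_b, winner) tuples, where
    winner is "a", "b", or "tie". Ratings start at `base` and are
    updated after every vote (illustrative parameters).
    """
    ratings = defaultdict(lambda: base)
    for a, b, winner in battles:
        e_a = expected_score(ratings[a], ratings[b])
        s_a = {"a": 1.0, "b": 0.0, "tie": 0.5}[winner]
        # Winner gains, loser loses, proportional to surprise.
        ratings[a] += k * (s_a - e_a)
        ratings[b] += k * ((1.0 - s_a) - (1.0 - e_a))
    return dict(ratings)
```

A model that wins most head-to-head votes ends up with a higher rating; online Elo is order-dependent, which is one reason production leaderboards often prefer a fitted Bradley-Terry model instead.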
Requirements:
PhD or equivalent research experience in Machine Learning, Natural Language Processing, Statistics, or a related field
Strong understanding of LLMs and modern deep learning architectures (e.g., Transformers, diffusion models, reinforcement learning with human feedback)
Proficiency in Python and ML research libraries such as PyTorch, JAX, or TensorFlow
Demonstrated ability to design and analyze experiments with statistical rigor
Experience publishing research or working on open-source projects in ML, NLP, or AI evaluation
Comfortable working with real-world usage data and designing metrics beyond standard benchmarks
Ability to translate research questions into practical systems and collaborate across engineering and product teams
Passion for open science, reproducibility, and community-driven research
Nice to have:
Hands-on experience training large-scale models, including reward models, preference models, and fine-tuning LLMs with methods like RLHF, DPO, and contrastive learning
Strong foundation in ML and statistics, with a track record of designing novel training objectives, evaluation schemes, or statistical frameworks to improve model reliability and alignment
Fluent in the full experimental stack, from dataset design and large-batch training to rigorous evaluation and ablation, with an eye for what scales to production
Deeply collaborative mindset, working closely with engineers to productionize research insights and iterating with product teams to align modeling goals with user needs
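As an illustration of the preference-tuning methods named above, the per-pair DPO objective can be sketched in a few lines. This follows the standard DPO formulation; the function name, inputs, and beta value are illustrative.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    logp_w / logp_l: policy log-probabilities of the chosen ("winner")
    and rejected ("loser") responses; ref_logp_w / ref_logp_l: the same
    quantities under the frozen reference model (illustrative inputs).
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen response than the reference model does.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log(sigmoid(margin)), written stably as log1p(exp(-margin)).
    return math.log1p(math.exp(-margin))
```

The loss shrinks as the policy assigns relatively more probability to the chosen response than the reference does, which is what drives alignment with the human preference data.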
What we offer:
Comprehensive health and wellness benefits, including medical, dental, vision, and additional support programs
The opportunity to work on cutting-edge AI with a small, mission-driven team
A culture that values transparency, trust, and community impact