The Public Sector ML team at Scale deploys advanced AI systems—including LLMs, agentic models, and multimodal pipelines—into mission-critical government environments. We build evaluation frameworks that ensure these models operate reliably, safely, and effectively under real-world constraints. As an ML Engineer, you will design, implement, and scale automated evaluation pipelines that help customers trust and operationalize advanced AI systems across defense, intelligence, and federal missions.
Job Responsibilities:
Develop and maintain automated evaluation pipelines for ML models across functional, performance, robustness, and safety metrics, including LLM-judge-based evaluations (see the sketch after this list)
Design test datasets and benchmarks to measure generalization, bias, explainability, and failure modes
Build evaluation frameworks for LLM agents, including infrastructure for scenario-based and environment-based testing
Conduct comparative analyses of model architectures, training procedures, and evaluation outcomes
Implement tools for continuous monitoring, regression testing, and quality assurance for ML systems
Design and execute stress tests and red-teaming workflows to uncover vulnerabilities and edge cases
Collaborate with operations teams and subject matter experts to produce high-quality evaluation datasets
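For illustration only, here is a minimal sketch of the kind of LLM-judge-based evaluation loop the first responsibility describes; it is not Scale's actual tooling. All names (EvalCase, JUDGE_TEMPLATE, run_llm_judge, judge_fn) are hypothetical, and the judge call is passed in as a callable and stubbed so the sketch runs offline; in practice judge_fn would wrap a real model API.

"""Illustrative sketch of an LLM-judge evaluation loop (hypothetical names)."""
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str      # input given to the system under test
    response: str    # candidate model's output to be scored
    reference: str   # gold answer the judge scores against

JUDGE_TEMPLATE = (
    "Score the RESPONSE against the REFERENCE for correctness.\n"
    "Reply with a single integer from 1 to 5.\n"
    "PROMPT: {prompt}\nRESPONSE: {response}\nREFERENCE: {reference}"
)

def run_llm_judge(cases: list[EvalCase],
                  judge_fn: Callable[[str], str]) -> dict:
    """Ask the judge model to grade each case; aggregate the mean score
    and count unparseable judgments for human review."""
    scores, failures = [], []
    for case in cases:
        raw = judge_fn(JUDGE_TEMPLATE.format(
            prompt=case.prompt, response=case.response,
            reference=case.reference))
        try:
            scores.append(int(raw.strip()))
        except ValueError:
            failures.append(case)  # judge replied off-format; triage by hand
    mean = sum(scores) / len(scores) if scores else 0.0
    return {"mean_score": mean, "graded": len(scores),
            "needs_review": len(failures)}

if __name__ == "__main__":
    # Stub judge so the sketch runs offline; swap in a real model call.
    demo = [EvalCase("2+2?", "4", "4"),
            EvalCase("Capital of France?", "Lyon", "Paris")]
    print(run_llm_judge(demo, judge_fn=lambda _: "5"))

A real pipeline would add the concerns the other bullets name: versioned test datasets, regression baselines to diff scores across model releases, and scenario harnesses for agentic evaluation.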
Requirements:
Experience in computer vision, deep learning, reinforcement learning, or NLP in production settings
Strong programming skills in Python
Experience with TensorFlow or PyTorch
Background in algorithms, data structures, and object-oriented programming
Experience with LLM pipelines, simulation environments, or automated evaluation systems
Ability to convert research insights into measurable evaluation criteria
This role requires an active security clearance or the ability to obtain one
Nice to have:
Graduate degree in CS, ML, or AI
Cloud experience (AWS, GCP) and model deployment experience
Experience with LLM evaluation, CV robustness, or RL validation
Knowledge of interpretability, adversarial robustness, or AI safety frameworks
Familiarity with ML evaluation frameworks and agentic model design
Experience in regulated, classified, or mission-critical ML domains