General Motors is a global leader in advanced driver assistance. With Super Cruise hands-free technology in more than 500,000 vehicles on the road, and over 700 million hands-free miles driven, GM is proving that automation can be trusted, intuitive, and helpful. GM has the global reach to bring cutting-edge advances to everyday drivers at unprecedented scale. Join us to help deliver the next generation of safe and delightful personal autonomous vehicle experiences.

The Evaluation team builds and evolves the evaluation ecosystem that powers the development and scaling of GM's autonomous driving technology. We develop metrics, automated workflows, and analysis approaches that enable data-driven decisions across AV development and verification. Partnering with the Autonomy, Simulation, Systems, and Safety teams, we act as system-level integrators and arbiters of end-to-end AV quality. We own large-scale test scenario libraries, continuous evaluation pipelines, and critical risk assessment and release gating components, treating road testing, data mining, training, and metrics as first-class use cases in a unified analytics framework. By joining this team, you will help shape GM's core evaluation platforms, turn system-level results into clear feedback, and help accelerate validated AV deployment at scale.
Job Responsibilities:
Define the strategy and architecture for metrics and analyses to evaluate autonomous driving software performance across the autonomy stack
Lead cross-functional efforts with autonomy, systems engineering, simulation, and data teams to embed evaluation into development workflows and release decisions
Invent and drive new statistical and ML methods, and ML introspection techniques, to quantify performance, detect regressions, and reveal patterns of system behavior at scale
Own and refine key AV evaluation metrics and KPIs used for readiness and safety decisions
Synthesize and present results and tradeoffs to stakeholders
Make insights readily available to partner teams through interactive dashboards
Requirements:
7+ years of applied experience with robotics or autonomous systems software, spanning multiple subsystems from perception through planning and vehicle control
3+ years leading evaluation of complex dynamic systems using numerical and ML approaches on large-scale time series data
Proficiency developing Python in production team environments
Strong ability to work in large C++ autonomy codebases
Proven cross-team technical leadership, including defining strategies adopted by multiple teams and influencing system and architecture decisions
PhD, Master's, or Bachelor's degree in Computer Science, Robotics, Mechanical or Aerospace Engineering, Machine Learning, or a related field
Nice to have:
Experience in autonomous driving or high-stakes field robotics
Experience designing, running, and interpreting large-scale simulation and field experiments
Deep familiarity with statistical modeling, experimental design, and hypothesis testing for autonomy evaluation
Strong command of Pandas, NumPy, SciPy, and visualization libraries
Proficiency in C++ and SQL, and experience shaping logging, data schemas, and evaluation pipelines for large-scale autonomy testing
Experience working with ROS or other IPC frameworks, robotics stack logging, and large-scale experiment databases, including designing or scaling evaluation platforms
Prior development with computational geometry, linear algebra, PyTorch, and machine learning
Background in modeling agent interaction and owning or designing release gating criteria and processes for autonomy systems