Generalist trains very large robot foundation models. This requires large fleets of latest-generation GPU hardware and infrastructure (currently Nvidia) to run distributed training jobs and researcher experiments. Our storage and data-loading requirements are extreme, pushing cloud infrastructure to its limits and demanding custom solutions. You will also own inference infrastructure: for our robots, this is a fleet of on-prem GPUs attached to robots, operating under extreme real-time and latency budgets in compute-constrained environments.
Job Responsibilities:
Own our GPU compute fleets
Ensure our GPUs are easy for researchers to use and maximally utilized
Optimize and improve ML data loading, transport, and storage in highly distributed, fully utilized environments
Orchestrate robot inference fleets
Requirements:
Have managed large fleets of GPUs doing large-scale, long-term, highly distributed training runs or inference
Deep experience with Slurm or Kubernetes for ML workload orchestration
Have built high-scale ML data loaders and data preparation systems
Deeply understand every layer of the ML hardware, storage, and networking stacks