As a VLA Research Scientist on the Atlas team, you will architect, train, and deploy the large-scale behavior models that enable high-performance dexterous manipulation on Atlas. Your work will focus on building models that take multimodal input (vision, language, task context, robot state) and produce grounded actions that generalize across manipulation tasks, embodiments, and environments. You will design large-scale behavior cloning and imitation learning pipelines, build hierarchical skill systems, and integrate learned components into a complex, real-world robotic system. You’ll collaborate closely with robotics, controls, and software teams, and rapidly test your work on state-of-the-art humanoid hardware.
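To make the input/output contract of such a model concrete, here is a minimal sketch of a VLA-style policy interface. Everything here is hypothetical (the dimensions, the concatenation-based fusion, and the single linear layer are illustrative stand-ins, not the Atlas architecture): the policy fuses vision features, a language embedding, and robot state, and emits a bounded continuous action.

```python
import numpy as np

rng = np.random.default_rng(0)

def vla_policy(image_feat, lang_emb, robot_state, W, b):
    """Fuse modalities by simple concatenation and map to an action.

    Illustrative only: real VLA models use learned encoders and large
    transformer backbones rather than a single linear layer.
    """
    x = np.concatenate([image_feat, lang_emb, robot_state])
    return np.tanh(W @ x + b)  # tanh bounds the continuous action

# Hypothetical dimensions: 8-dim vision features, 4-dim language
# embedding, 6-dim robot state, 7-dim action (e.g., arm joint targets).
d_in, d_act = 8 + 4 + 6, 7
W = rng.standard_normal((d_act, d_in)) * 0.1
b = np.zeros(d_act)

action = vla_policy(rng.standard_normal(8), rng.standard_normal(4),
                    rng.standard_normal(6), W, b)
print(action.shape)  # (7,)
```

The point is only the interface: multimodal observations in, grounded action out, with the same call signature regardless of task.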
Job Responsibilities:
Architect and train end-to-end VLA and Large Behavior Models for mobile manipulation on Atlas
Build large-scale imitation learning pipelines that learn from human demonstrations, teleoperation, and simulation data
Develop policies capable of few-shot generalization across diverse manipulation tasks
Create hierarchical behavior systems that combine learned skills into long-horizon behaviors
Integrate your models into Atlas’s autonomy stack in collaboration with controls and platform teams
Deploy, debug, and iterate your models directly on physical hardware
Write high-quality, maintainable Python and C++ code that fits into a large production codebase
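The behavior cloning pipelines mentioned above reduce, at their core, to supervised learning on demonstration data: fit a policy so that its actions match the expert's. The sketch below shows that core loop on synthetic data with a linear policy trained by gradient descent on mean squared error; the data, dimensions, and linear policy class are assumptions for illustration, not a description of the actual training stack.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "demonstrations": observations paired with expert actions,
# generated by a hypothetical linear expert (a stand-in for
# teleoperation or human-demonstration data).
obs_dim, act_dim, n = 10, 4, 512
W_expert = rng.standard_normal((act_dim, obs_dim))
obs = rng.standard_normal((n, obs_dim))
actions = obs @ W_expert.T

# Behavior cloning: minimize mean squared error between the policy's
# predicted actions and the demonstrated actions.
W = np.zeros((act_dim, obs_dim))
lr = 0.05
for _ in range(500):
    pred = obs @ W.T
    grad = 2.0 / n * (pred - actions).T @ obs  # MSE gradient w.r.t. W
    W -= lr * grad

mse = float(np.mean((obs @ W.T - actions) ** 2))
print(mse)  # near zero after convergence
```

Scaling this loop up to multimodal observations, expressive policy classes, and fleet-scale data is where the pipeline-engineering work in this role lives.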
Requirements:
MS with 3+ years of experience or PhD in Machine Learning, Robotics, Computer Science, or related fields
Prior experience training and deploying learned policies for complex behaviors on real robots or simulated characters