As an Embodied AI / Simulation Engineer, you will develop learning-based manipulation systems for humanoid and mobile manipulation platforms and ensure they transfer reliably into the real world. You will build the simulation, data, and training infrastructure needed to develop robust visuomotor policies and deploy them onto physical robotic systems operating in production environments. This role sits at the intersection of simulation, machine learning, and robotics execution. You will work closely with perception and controls teams to ensure learned policies operate safely, reliably, and effectively in closed-loop real-world conditions.
Job Responsibilities:
Design and train learning-based manipulation systems for humanoid and mobile manipulation platforms
Develop and maintain high-fidelity digital twins using Isaac Sim, MuJoCo, or similar simulation tools
Implement and evaluate approaches including Action Chunking with Transformers (ACT), diffusion policies, behavior cloning, and Vision-Language-Action (VLA) models
Contribute to the development of a Universal Manipulation Interface (UMI) abstraction layer
Build teleoperation-to-training data pipelines to enable scalable dataset generation
Design sim-to-real transfer strategies including domain randomization and system identification
Evaluate policy robustness, failure modes, and cross-task generalization
Partner with perception and controls teams to ensure stable closed-loop visuomotor policies
Deploy trained models onto real robotic systems and support on-hardware validation and debugging
Establish experimental rigor through structured evaluation, ablations, and performance tracking
Requirements:
Experience training embodied AI policies for real robotic systems
Strong understanding of sim-to-real challenges and real-world failure modes
Familiarity with transformer-based action models such as ACT
Experience with diffusion policies or other generative control approaches
Experience working with multi-modal inputs including vision, proprioception, and language
Proficiency in Python and deep learning frameworks such as PyTorch or JAX
Experience integrating learned policies with real-time control systems
Strong experimental design, evaluation discipline, and attention to reproducibility
Ability to operate across research and deployment constraints without losing rigor
What we offer:
Equity in Formic
Competitive & Uncapped Commission Structure
Comprehensive Healthcare Coverage: Medical, dental, and vision insurance through Blue Cross Blue Shield and Unum, with 99% of employee premiums covered and 75% coverage for dependents
Additional Insurance Benefits: FSA and DCFSA, life insurance, short-term disability, and long-term disability through Unum, all 100% employer-paid
Employee Assistance Program (EAP)
Paid Parental Leave Program: Up to 12 weeks of paid leave
Company-sponsored 401(k)
Home Office Stipend: A one-time allowance for fully remote and hybrid employees