Your role is to push embodied intelligence toward human-level dexterity by working at the level where sensing, actuation, learning, and physical structure form a single closed loop. This role exists because dexterity is not a policy problem; it is a system problem. Intelligence in a humanoid does not live in a network alone: it emerges from how perception is structured, how actions are generated and constrained, and how the body itself participates in learning. Your work targets that loop directly.

You work at the frontier of foundation models and multimodal learning, but you are not bound to existing architectures; you are expected to break with them when performance gaps demand it. You let failures in real systems (latency, instability, brittleness, lack of contact understanding) guide what models, representations, and interfaces need to exist next. Neuroscience and biological motor control are reference points, not inspiration slides. You explicitly embrace the difference between models built for language and models that must operate in tight sensor–actuator loops, and you understand that embodiment imposes constraints (bandwidth, delays, noise, compliance) that fundamentally shape how intelligence must be structured.

This role sits inside an interdisciplinary lab, embedded with hardware, sensing, biomechanics, and prototyping teams, with direct access to 1X’s world-class AI organization. The loop between hypothesis, hardware change, experiment, and learning is intentionally short, and you are expected to use it to unlock capabilities that cannot be reached by model-centric work alone.
Job Responsibilities:
Develop learning systems for embodied intelligence that operate in tight sensor–actuator loops
Drive progress toward human-level dexterity by addressing system-level limitations, not just model performance
Co-design sensing, actuation interfaces, and learning architectures with hardware and robotics teams
Use real-world experiments to expose performance gaps and guide architectural decisions
Break with existing model or control paradigms when they block progress toward physical capability
Translate insights from experiments into changes across models, representations, sensors, and actuation
Requirements:
PhD or equivalent depth of contribution in machine learning, robotics, control, or a closely related field
Clear record of excellence in AI research (e.g., influential publications, widely adopted methods, or deployed systems)
Demonstrated, hands-on contributions to sensing and/or actuation systems, not just downstream learning
Substantial experience working with real robotic hardware in closed-loop settings
Proven ability to reason across abstraction layers, from learning objectives and representations down to physical interaction and dynamics
Evidence of work that advanced system capability, not just algorithmic benchmarks
Nice to have:
Prior work on dexterous manipulation, tactile sensing, or whole-body control
Experience combining learning with custom hardware or novel sensing modalities
Familiarity with biological motor control or neuroscience as engineering reference systems