Responsibilities:
Design, implement, and maintain core autonomy modules that integrate sensing, perception, state estimation, mapping, and planner interfaces into a cohesive real-time system
Develop high-performance computer vision pipelines (classical + AI-based) for detection, segmentation, tracking, and scene understanding, ensuring reliable operation on embedded hardware
Build multimodal perception systems that fuse camera, LiDAR, radar, and IMU data into accurate, navigation-ready environment representations
Deploy, optimise, and maintain autonomy software on embedded platforms (Jetson AGX/Orin), including TensorRT optimisation, cross-compilation, CUDA acceleration, and performance tuning for real-time execution
Own sensor bring-up, configuration, calibration, and synchronization (camera, LiDAR, radar, IMU, GPS), ensuring accurate and stable data for downstream modules
Ensure system-level robustness and safety by maintaining strict latency budgets, deterministic behaviour, numerical stability, and fallback mechanisms for degraded sensing conditions
Conduct field trials, capture datasets, analyse system performance, and drive iterative improvements across sensing, perception, fusion, and planning layers
Debug deep issues across the autonomy stack, including timing mismatches, calibration drift, concurrency conflicts, synchronization faults, and hardware–software integration challenges
Build deployment-ready autonomy systems using ROS/ROS2, Docker, systemd services, and reproducible build pipelines tailored for embedded platforms
Collaborate with mechanical, electronics, and systems teams to align autonomy software capabilities with real-world hardware constraints and vehicle dynamics
Contribute to autonomy architecture evolution, influencing design decisions, modularisation strategy, safety mechanisms, and long-term capability roadmap
Requirements:
Bachelor’s degree in Robotics, Computer Science, Mechatronics, or a related field
Strong proficiency in modern C++ (14/17/20) and Python
Deep understanding of computer vision fundamentals (feature-based vision, geometric methods, multi-view geometry) and AI-based perception using PyTorch
Practical experience deploying and optimising perception models on embedded GPU platforms (Jetson Xavier/Orin or similar)
Hands-on expertise with Triton, TensorRT, mixed-precision inference, Numba-JIT, CUDA kernels, and real-time optimisation techniques
Strong command of ROS/ROS2, TF transforms, message passing, node graph architecture, and middleware integration patterns
Extensive experience with robotics sensor integration including RGB/stereo/depth cameras, LiDAR, radar, IMUs, and GPS—covering calibration (intrinsic/extrinsic), synchronization, timestamps, and data integrity
Knowledge of core autonomy concepts: mapping, costmap generation, scene representation, obstacle detection, and planner interfacing
Solid grounding in Linux systems, multithreading, memory optimisation, real-time constraints, and system-level debugging workflows
Experience with Docker, cross-compilation toolchains, embedded deployment pipelines, and CI/CD systems for robotics software
Familiarity with simulation tools (Gazebo, CARLA, Isaac Sim)
Ability to troubleshoot complex issues across perception, fusion, hardware interfaces, timing, concurrency, and algorithmic edge cases
Strong understanding of coordinate frames, transforms, camera models, rigid-body geometry, and numerical optimisation methods
Experience using logging frameworks, telemetry tools, performance profilers, and methods for long-duration stability testing
Nice to have:
Master’s or PhD in Robotics, Autonomous Systems, AI/ML, Computer Vision, or Control Systems