We are seeking highly motivated interns to research, explore, and evaluate cutting-edge AI-driven approaches to robot localization and map construction, perception, motion planning, scenario simulation, and data engineering. The role involves hands-on experimentation, algorithm development, and the integration of multi-modal sensor data to advance autonomous robotic systems.

The Robotics Software team is developing the next generation of autonomous robotic systems, focusing on autonomous mobile robots (AMRs) and intelligent robotic platforms. We develop full-stack robotics capabilities, from perception and planning to control and system integration, bringing innovative, real-world autonomous solutions to the future of work.

We are looking for a self-motivated intern to prototype an AI-driven sense-plan-act architecture that supports the development, testing, and validation of autonomous robotic systems in manufacturing plants.
Job Responsibilities:
Design and implement high-precision localization methods using cameras, LiDAR, wheel encoders, and inertial sensors.
Develop a scalable, real-time localization module optimized for autonomous robotic systems.
Create engineering specifications and test procedures to ensure system compliance.
Evaluate and benchmark system performance.
Review the state-of-the-art in camera- and LiDAR-based algorithms.
Troubleshoot using strong knowledge of probabilistic estimation, sensor fusion, and real-time system implementation.
Adjust and fine-tune system parameters to improve accuracy and robustness.
Evaluate and test LiDAR-based localization repositories.
Investigate Gaussian splatting localization pipelines and assess feasibility for embedded platforms.
Explore machine-learning techniques for feature point correspondence between image frames.
Implement and benchmark place recognition algorithms using computer vision.
Integrate dynamic object handling into localization workflows.
Develop offboard multi-agent map-building processes.
Design sensor fusion strategies for heterogeneous modalities (e.g., 3D LiDAR, 2D LiDAR, monocular camera, IMU, wheel odometer).
Apply post-processing optimization algorithms (e.g., factor-graph and pose-graph optimization).
Create, curate, and manage datasets for training AI models.
Ensure data quality and diversity for robust algorithm development.
Upgrade the existing simulation environment to support the generation of realistic 3D LiDAR data and photorealistic image rendering for advanced perception testing.
Design and implement adversarial scenarios to identify potential safety vulnerabilities and enhance overall system robustness.
Develop perception solutions leveraging a joint Bird’s Eye View (BEV) representation and DETR-based object detection with multi-modal inputs.
Enhance robustness in perception pipelines for dynamic environments.
Research and implement denoising diffusion-based motion planning algorithms.
Apply reinforcement learning in a simulation engine to improve path-generation policies.
Evaluate performance and scalability of AI-driven planning approaches in real-world scenarios.
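As a flavor of the pose-graph post-processing work listed above, here is a minimal sketch that optimizes a toy 1-D pose graph, two odometry edges plus one slightly inconsistent loop-closure edge, by ordinary least squares. The function name, measurements, and unit-weight assumption are all invented for illustration; real systems optimize SE(2)/SE(3) poses with a solver such as GTSAM, g2o, or Ceres.

```python
import numpy as np

def optimize_pose_graph(edges, n_poses):
    """Least-squares solve of a linear 1-D pose graph.

    edges: list of (i, j, measured_displacement) constraints.
    Pose 0 is anchored at the origin to remove gauge freedom.
    Returns the optimized pose vector (length n_poses).
    """
    # Unknowns are poses 1..n-1; each edge contributes one linear residual.
    A = np.zeros((len(edges), n_poses - 1))
    b = np.zeros(len(edges))
    for row, (i, j, z) in enumerate(edges):
        if i > 0:
            A[row, i - 1] = -1.0
        if j > 0:
            A[row, j - 1] = 1.0
        b[row] = z
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.concatenate(([0.0], x))

# Two odometry edges of 1.0 m each, and a loop-closure edge claiming the
# total displacement is only 1.8 m; least squares spreads the conflict.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.8)]
poses = optimize_pose_graph(edges, 3)
```

The loop closure pulls both poses slightly backward, which is exactly the error-distribution behavior that factor-graph back ends provide at scale.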
Requirements:
Currently enrolled in a Master's degree program in Robotics, Computer Science, Electrical/Mechanical Engineering, or a related technical field, with at least one year of the program completed.
Proficiency in C++ or Python.
Experience with continuous development and deployment practices in robotic software development.
Expertise in one or more of the following technical areas: camera- and LiDAR-based localization algorithms; statistical estimation theory; and the implementation of pose-graph and factor-graph optimization.
Understanding of state-of-the-art solutions in place recognition for addressing loop-closure detection issues.
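The statistical estimation background called for above can be illustrated with a minimal example: a 1-D Kalman filter fusing a wheel-odometry prediction with a LiDAR position fix. All variable names and noise values here are hypothetical, chosen only to show the predict/update structure.

```python
def kalman_step(x, p, u, z, q, r):
    """One predict/update cycle for a 1-D position estimate.

    x, p : prior state estimate and its variance
    u    : wheel-odometry displacement since the last step
    z    : LiDAR position measurement
    q, r : process (odometry) and measurement (LiDAR) noise variances
    """
    # Predict: propagate the state with odometry, inflating uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Update: blend in the LiDAR fix, weighted by the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Start at the origin with variance 1.0; drive 1.0 m by odometry while
# the LiDAR reports 1.2 m.
x, p = kalman_step(0.0, 1.0, u=1.0, z=1.2, q=0.05, r=0.1)
```

The fused estimate lands between the odometry prediction and the LiDAR fix, and the posterior variance drops below either sensor's individual uncertainty, the basic payoff of probabilistic sensor fusion.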