At General Motors, our product teams are redefining mobility. Through a human-centered design process, we create vehicles and experiences that are designed not just to be seen, but to be felt. We’re turning today’s impossible into tomorrow’s standard, from breakthrough hardware and battery systems to intuitive design, intelligent software, and next-generation safety and entertainment features. Every day, our products move millions of people as we aim to make driving safer, smarter, and more connected, shaping the future of transportation on a global scale.
Job Responsibilities:
End-to-End Model Lifecycle: Own the design, training, validation, and deployment of deep learning models for core perception tasks such as:
3D Object Detection and Tracking (vehicles, pedestrians, cyclists)
Real-time mapping of the drivable world (lanes, road boundaries, traffic signs)
Multi-Modal Sensor Fusion (Camera, LiDAR, Radar)
Production Pipeline: Build and scale the ML training infrastructure, including data mining and loading, and multi-stage training and evaluation, to streamline model development
Performance Optimization: Improve model performance through data iterations, parameter tuning, and training strategy and architecture updates to produce reliable models that meet the strict real-time, low-latency requirements of the vehicle's embedded hardware
Model Debugging: Conduct rigorous, data-driven analysis to identify, debug, and resolve performance degradations and failures, specifically targeting long-tail and adversarial scenarios (e.g., adverse weather, sensor noise, occlusions)
Metric Definition: Define and implement robust model-level metrics to aid model development
System Integration: Work closely with the Safety, Systems, and other engineering functions to integrate Perception outputs
Requirements:
BS, MS, or PhD in Computer Science, Machine Learning, Robotics, or a related quantitative field
5+ years of professional experience with a focus on Computer Vision, Deep Learning, and Perception in a production environment
Deep hands-on experience with modern deep learning frameworks (e.g., PyTorch or TensorFlow) for training, experimentation, and debugging complex DNNs
Proven experience working with and fusing data from multiple sensor modalities (Camera, LiDAR, and/or Radar)
Practical experience deploying and optimizing ML models for resource-constrained, real-time embedded systems
Demonstrated ability to drive model improvements through large-scale data analysis, error logging, and data curation
Nice to have:
Expertise with Transformer-based models for 3D detection, tracking, and scene understanding
Technical leadership experience, including mentoring junior engineers and leading major feature development from concept to launch