As a Training Performance Engineer, you’ll drive efficiency improvements across our distributed training stack. You’ll analyze large-scale training runs, identify utilization gaps, and design optimizations that push the boundaries of throughput and uptime. This role blends deep systems understanding with practical performance engineering: analyzing GPU kernel performance and collective communication throughput, investigating I/O bottlenecks, and sharding our models so we can train them at massive scale. You’ll help ensure that our clusters run at peak performance, enabling OpenAI to train larger, more capable models within the same compute budget.
Job Responsibilities:
Profile end-to-end training runs to identify performance bottlenecks across compute, communication, and storage
Optimize GPU utilization and throughput for large-scale distributed model training
Collaborate with runtime and systems engineers to improve kernel efficiency, scheduling, and collective communication performance
Implement model graph transforms to improve end-to-end throughput
Build tooling to monitor and visualize MFU (Model FLOPs Utilization), throughput, and uptime across clusters (see the MFU sketch after this list)
Partner with researchers to ensure new model architectures scale efficiently during pre-training
Contribute to infrastructure decisions that improve reliability and efficiency of large training jobs
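For a sense of what the MFU tooling above measures, here is a minimal sketch using the common ~6N FLOPs-per-token approximation for dense transformer training (forward plus backward). The function, parameter names, and example numbers are illustrative assumptions, not OpenAI internals.

```python
def mfu(tokens_per_sec: float, n_params: float, peak_flops_per_sec: float) -> float:
    """Model FLOPs Utilization: achieved training FLOPs/s over hardware peak.

    Assumes the standard ~6 * N FLOPs-per-token estimate for training a
    dense transformer with N parameters.
    """
    achieved_flops_per_sec = 6.0 * n_params * tokens_per_sec
    return achieved_flops_per_sec / peak_flops_per_sec

# Hypothetical example: a 70B-parameter model training at 4.2e5 tokens/s
# across 1,000 GPUs, each with ~989 TFLOP/s of dense BF16 peak compute.
print(f"MFU: {mfu(4.2e5, 70e9, 1000 * 989e12):.1%}")  # -> MFU: 17.8%
```

In practice a ratio like this would be tracked per step across the whole job, alongside throughput and uptime.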
Requirements:
Love optimizing performance and digging into systems to understand how every layer interacts
Have strong programming skills in Python and C++ (Rust or CUDA a plus)
Have experience running distributed training jobs on multi-GPU systems or HPC clusters
Enjoy debugging complex distributed systems and measuring efficiency rigorously
Have exposure to frameworks like PyTorch, JAX, or TensorFlow and an understanding of how large-scale training loops are built
Are comfortable collaborating across teams and translating raw profiling data into practical engineering improvements (see the profiling sketch below)
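As an illustration of the profiling work described above, here is a minimal sketch using `torch.profiler`, a public PyTorch API, to break down where time goes in a single training step. The toy model and batch are placeholders, and the snippet assumes a CUDA-capable machine.

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Placeholder model and batch; a real run would profile an actual training step.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 1024)
).cuda()
batch = torch.randn(64, 1024, device="cuda")

# Capture both CPU-side op dispatch and GPU kernel activity.
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
             record_shapes=True) as prof:
    loss = model(batch).sum()
    loss.backward()

# Rank ops by GPU time to spot the heaviest kernels.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```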
Nice to have:
Familiarity with NCCL, MPI, or UCX communication libraries (see the collective-throughput sketch after this list)
Experience with large-scale data loading and checkpointing systems
Prior work on training runtime, distributed scheduling, or ML compiler optimization
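For flavor on the communication-library side, here is a hedged sketch of the kind of micro-benchmark used to sanity-check collective throughput, written against `torch.distributed` with the NCCL backend. The script name, buffer size, and iteration counts are made up; the bus-bandwidth formula follows the standard ring all-reduce accounting (2·(n−1)/n bytes moved per byte reduced).

```python
import os
import time
import torch
import torch.distributed as dist

# Hypothetical micro-benchmark; launch with e.g.:
#   torchrun --nproc_per_node=8 allreduce_bench.py
def main() -> None:
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    x = torch.ones(256 * 1024 * 1024, device="cuda")  # 1 GiB of fp32 per rank

    # Warm up, then time a handful of iterations.
    for _ in range(5):
        dist.all_reduce(x)
    torch.cuda.synchronize()
    iters = 20
    start = time.perf_counter()
    for _ in range(iters):
        dist.all_reduce(x)
    torch.cuda.synchronize()
    elapsed = (time.perf_counter() - start) / iters

    # Ring all-reduce moves ~2*(n-1)/n bytes per rank per byte reduced.
    world = dist.get_world_size()
    bytes_moved = x.numel() * x.element_size() * 2 * (world - 1) / world
    if dist.get_rank() == 0:
        print(f"all-reduce: {elapsed * 1e3:.2f} ms/iter, "
              f"bus bandwidth ~{bytes_moved / elapsed / 1e9:.1f} GB/s")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Comparing the measured bus bandwidth against the fabric's theoretical peak is a quick way to tell whether a slow training step is communication-bound.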
What we offer:
Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
401(k) retirement plan with employer match
Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
13+ paid company holidays and multiple paid, coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more where required by applicable state or local law)
Mental health and wellness support
Employer-paid basic life and disability coverage
Annual learning and development stipend to fuel your professional growth
Daily meals in our offices, and meal delivery credits where eligible
Relocation support for eligible employees
Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided