OpenAI’s Inference team powers the deployment of our most advanced models, including our GPT models, 4o Image Generation, and Whisper, across a variety of platforms. Our work ensures these models are available, performant, and scalable in production, and we partner closely with Research to bring the next generation of models into the world. We’re a small, fast-moving team of engineers focused on delivering a world-class developer experience while pushing the boundaries of what AI can do.

We’re expanding into multimodal inference, building the infrastructure needed to serve models that handle image, audio, and other non-text modalities. These workloads are inherently more heterogeneous and experimental, involving diverse model sizes and interactions, more complex input/output formats, and tighter coordination with product and research.

We’re looking for a software engineer to help us serve OpenAI’s multimodal models at scale. You’ll be part of a small team responsible for building reliable, high-performance infrastructure for serving real-time audio, image, and other multimodal workloads in production. This work is inherently cross-functional: you’ll collaborate directly with researchers training these models and with product teams defining new modalities of interaction. You’ll build and optimize the systems that let users generate speech, understand images, and interact with models in ways that go far beyond text.
Job Responsibilities:
Design and implement inference infrastructure for large-scale multimodal models
Optimize systems for high-throughput, low-latency delivery of image and audio inputs and outputs
Enable experimental research workflows to transition into reliable production services
Collaborate closely with researchers, infra teams, and product engineers to deploy state-of-the-art capabilities
Contribute to system-level improvements including GPU utilization, tensor parallelism, and hardware abstraction layers
Requirements:
You have experience building and scaling inference systems for LLMs or multimodal models
You have worked with GPU-based ML workloads and understand the performance dynamics of large models, especially with complex data such as images or audio
You enjoy experimental, fast-evolving work and collaborating closely with research
You are comfortable with systems that span networking, distributed compute, and high-throughput data handling
You are familiar with inference tooling such as vLLM, TensorRT-LLM, or custom model-parallel systems
You own problems end-to-end and are excited to operate in ambiguous, fast-moving spaces
Nice to have:
Experience working with image generation or audio synthesis models in production
Exposure to distributed ML training or system-efficient model design
What we offer:
Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
401(k) retirement plan with employer match
Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
13+ paid company holidays, plus multiple coordinated paid office closures throughout the year for focus and recharge; paid sick or safe time accrues at 1 hour per 30 hours worked, or more where required by applicable state or local law
Mental health and wellness support
Employer-paid basic life and disability coverage
Annual learning and development stipend to fuel your professional growth
Daily meals in our offices, and meal delivery credits as eligible
Relocation support for eligible employees
Additional taxable fringe benefits, such as charitable donation matching and wellness stipends