We're looking for an ML infrastructure engineer to bridge the gap between research and production at Runway. You'll work directly with our research teams to productionize cutting-edge generative models—taking checkpoints from training to staging to production, ensuring reliability at scale, and building the infrastructure that enables fast iteration. You'll be embedded within research teams, providing platform support throughout the entire model development lifecycle. Your work will directly impact how quickly we can ship new models and features to millions of users.
Responsibilities:
Productionize model checkpoints end-to-end: from research completion to internal testing to production deployment to post-release support
Build and optimize inference systems for large-scale generative models running on multi-GPU environments
Design and implement model serving infrastructure specialized for diffusion models and real-time generation workflows
Add monitoring and observability for new model releases—track errors, throughput, GPU utilization, and latency
Embed with research teams to gather training data, run preprocessing scripts, and support the model development process
Explore and integrate with GPU inference providers (Modal, E2E, Baseten, etc.)
Requirements:
4+ years of experience running ML model inference at scale in production environments
Strong experience with PyTorch and multi-GPU inference for large models
Experience with Kubernetes for ML workloads—deploying, scaling, and debugging GPU-based services
Comfortable working across multiple cloud providers and managing GPU driver compatibility
Experience with monitoring and observability for ML systems (errors, throughput, GPU utilization)
Self-starter who can work embedded with research teams and move fast
Strong systems thinking and pragmatic approach to production reliability
Humility and open-mindedness
Nice to have:
Experience building custom inference frameworks or serving systems
Deep understanding of distributed training and inference patterns (FSDP, data parallelism, tensor parallelism)
Ability to debug low-level issues: NCCL networking problems, CUDA errors, memory leaks, performance bottlenecks
Experience with diffusion models or video generation systems
Knowledge of real-time or latency-sensitive ML applications