As Microsoft continues to push the boundaries of AI, we are on the lookout for passionate individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems with true artificial intelligence across agents, applications, services, and infrastructure. It is also inclusive: we aim to make AI accessible to everyone, from consumers to businesses to developers, so that all can realize its benefits.

We're looking for an experienced Site Reliability Engineer (SRE) to join our infrastructure team. In this role, you'll blend software engineering and systems engineering to keep our large-scale distributed AI infrastructure reliable and efficient. You'll work closely with ML researchers, data engineers, and product developers to design and operate the platforms that power training, fine-tuning, and serving generative AI models.
Job Responsibilities:
Reliability & Availability: Ensure uptime, resiliency, and fault tolerance of AI model training and inference systems
Observability: Design and maintain monitoring, alerting, and logging systems to provide real-time visibility into model serving pipelines and infra
Performance Optimization: Analyze system performance and scalability, optimize resource utilization (compute, GPU clusters, storage, networking)
Automation & Tooling: Build automation for deployments, incident response, scaling, and failover in hybrid cloud/on-prem CPU+GPU environments
Incident Management: Lead on-call rotations, troubleshoot production issues, conduct blameless postmortems, and drive continuous improvements
Security & Compliance: Ensure data privacy, compliance, and secure operations across model training and serving environments
Collaboration: Partner with ML engineers and platform teams to improve developer experience and accelerate research-to-production workflows
Requirements:
4+ years of experience in Site Reliability Engineering, DevOps, or Infrastructure Engineering roles
Strong proficiency in Kubernetes, Docker, and container orchestration
Knowledge of CI/CD pipelines for inference and ML model deployment
Hands-on experience with public cloud platforms like Azure/AWS/GCP and infrastructure-as-code
Expertise in monitoring & observability tools (Grafana, Datadog, OpenTelemetry, etc.)
Strong programming/scripting skills in Python, Go, or Bash
Solid knowledge of distributed systems, networking, and storage
Nice to have:
Experience running large-scale GPU clusters for ML/AI workloads
Familiarity with ML training/inference pipelines
Experience with high-performance computing (HPC) and workload schedulers (e.g., Kubernetes operators)
Background in capacity planning & cost optimization for GPU-heavy environments
What we offer:
Competitive compensation, equity options, and comprehensive benefits