Microsoft Azure AI Inference platform is the next-generation cloud business positioned to address the growing AI market. We are on the verge of an AI revolution and have a tremendous opportunity to empower our partners and customers to harness the full power of AI responsibly. We offer a fully managed AI Inference platform to accelerate the research, development, and operations of AI-powered intelligent solutions at scale. This team owns hosting, optimizing, and scaling the inference stack for all Azure AI Foundry models, including the latest from OpenAI, Grok, DeepSeek, and other OSS models.

Do you want to join a team entrusted with serving all internal and external ML workloads, solving real-world inference problems for state-of-the-art large language models (LLMs) and multi-modal Gen AI models from OpenAI and other model providers? We already serve billions of inferences per day for the most cutting-edge AI scenarios across the industry. You will join the AI Core Inferencing team, influencing the overall product, driving new features and platform capabilities from preview to General Availability, and tackling exciting problems at the intersection of AI and Cloud.

We’re looking for a passionate Software Engineer 2 to drive the design, optimization, and scaling of our inference systems. In this role, you’ll lead engineering efforts to ensure our largest models run efficiently in high-throughput, low-latency environments, and you will work on and influence multiple levels of the AI Inference data plane stack.
Job Responsibilities:
Design and implement core inference infrastructure for serving frontier AI models in production
Identify and drive improvements to end-to-end inference performance and efficiency of state-of-the-art LLMs and GenAI models from OpenAI, Anthropic, and xAI hosted on AI Foundry
Design and implement efficient load scheduling and balancing strategies by leveraging key insights and features of the model and workload
Scale the platform to support the growing inferencing demand and maintain high availability
Deliver critical capabilities required to serve the latest Gen AI models, such as GPT-5, Realtime audio, and Sora, and enable fast time to market for them
Drive platform features that cater to the needs of customers such as GitHub, M365, Microsoft AI, and third-party companies
Collaborate with partners, both internal and external
Embody Microsoft's Culture and Values
Requirements:
Bachelor’s degree in Computer Science or a related technical field AND 2+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, or Golang, OR equivalent experience
Ability to meet Microsoft, customer, and/or government security screening requirements for this role
Technical background with a solid foundation in software engineering principles, distributed computing, and system architecture
Experience working on high-scale, reliable online systems
Experience with real-time online services requiring low latency and high throughput
Experience working with Layer 7 (L7) network proxies and gateways
Knowledge of network architecture and concepts, including HTTP and TCP protocols, authentication, and session management
Knowledge of and experience with OSS technologies such as Docker and Kubernetes, and with C++, Golang, or equivalent programming languages
Cross-team collaboration skills and the desire to collaborate in a team of researchers and developers