AI Platform Engineering exists to shorten the path from emerging AI capability to reliable production impact. We build the shared systems, standards, and delivery pathways that let in-house models and AI capability packages move from candidate state into observable, rollback-safe production operation. Our work sits at the junction of model development, runtime systems, evaluation, and delivery. We enable the broader AI Platform division by making it faster and safer to ship new capabilities, improve existing ones, and learn from production behavior.

This is a new team: the systems, interfaces, and standards are still being shaped. The work is highly consequential, highly practical, and closely tied to the company's broader AI strategy. We are not building one-off demos or isolated launches. We are building the machinery by which a growing AI organization can repeatedly deliver real capability into production.

We are hiring AI Systems Engineers to help build that machinery. This role is for engineers who like consequential junctions: between training outputs and deployable artifacts, between runtime systems and safe release, between quality claims and evidence, and between ambitious AI plans and systems that can actually carry them. This is not a research role, and it is not a generic support role. It is an implementation-heavy, building-focused engineering role on a small team responsible for making in-house AI capabilities easier to package, evaluate, deploy, promote, operate, and improve.
Job Responsibilities:
Help design, build, and improve the systems that connect AI capability development to production reality, including:
Improving how model and capability artifacts are packaged, versioned, promoted, and rolled back
Building or improving deployment and release pathways for AI-backed services
Enabling shadow-serving, staged rollout, and candidate-versus-incumbent comparison (illustrated in the sketch after this list)
Strengthening runtime behavior, observability, and debugging for model-backed systems
Building or automating evaluation systems that make release decisions evidence-based
Reducing bespoke coordination and strengthening the shared rails used by multiple AI teams
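To give a concrete flavor of the shadow-serving and evidence-based release work above, here is a minimal, purely illustrative sketch in Python. Every name in it (shadow_compare, ShadowReport, should_promote, regression_budget) is hypothetical and does not describe any existing internal system; the point is only the shape of the work: serve the incumbent, replay the same traffic against the candidate in shadow, and gate promotion on measured evidence rather than claims.

```python
"""Toy sketch of shadow comparison feeding a promotion gate.

All names here are hypothetical and for illustration only.
"""
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ShadowReport:
    agreement_rate: float        # how often candidate matches incumbent
    incumbent_error_rate: float  # incumbent mistakes on labeled traffic
    candidate_error_rate: float  # candidate mistakes on the same traffic


def shadow_compare(incumbent: Callable[[str], str],
                   candidate: Callable[[str], str],
                   requests: List[str],
                   expected: List[str]) -> ShadowReport:
    """Serve each request with the incumbent (whose answer is what callers
    see) while replaying the same request against the candidate in shadow."""
    agree = inc_err = cand_err = 0
    for request, want in zip(requests, expected):
        inc_out = incumbent(request)   # production response
        cand_out = candidate(request)  # shadow response, never user-visible
        agree += inc_out == cand_out
        inc_err += inc_out != want
        cand_err += cand_out != want
    n = len(requests)
    return ShadowReport(agree / n, inc_err / n, cand_err / n)


def should_promote(report: ShadowReport,
                   regression_budget: float = 0.0) -> bool:
    """Evidence-based gate: promote only when the candidate is at least as
    accurate as the incumbent, within an explicit regression budget."""
    return (report.candidate_error_rate
            <= report.incumbent_error_rate + regression_budget)


if __name__ == "__main__":
    report = shadow_compare(
        incumbent=str.upper,
        candidate=lambda q: q.upper() if "x" not in q else q,  # regresses on "x"
        requests=["abc", "xyz", "def"],
        expected=["ABC", "XYZ", "DEF"],
    )
    print(report, "promote:", should_promote(report))
```

In a real platform this comparison would typically run against sampled live traffic, and the gate would be one input among several (latency, cost, and safety checks alongside accuracy) rather than a single threshold.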
Requirements:
Bachelor's degree in Computer Science, Engineering, or equivalent related experience
2 to 6 years of professional software engineering experience, with a proven track record of shipping production infrastructure or other systems with real operational stakes
Experience writing solid, maintainable production code and applying strong software engineering fundamentals to complex debugging problems
Experience operating in ambiguous, cross-functional environments where requirements evolve and the interfaces you build have real consumers
Expertise in building for reproducibility, operability, and rollout safety, with a focus on the quality of the overall change rather than just the local implementation
Nice to have:
Experience with cloud infrastructure, containerized environments, managed ML platforms, or service orchestration systems
Experience with model serving, deployment systems, experiment tracking, artifact/version management, or ML lifecycle tooling
Experience with distributed systems, service platforms, search/relevance systems, internal enablement tooling, or production AI platforms
Experience with testing, benchmarking, experimentation systems, or evaluation frameworks that inform release decisions
Exposure to applied AI, speech, conversational systems, customer-facing workflows, or other production ML domains