This role sits within a dedicated AI Deployment team responsible for ensuring that AI models, agents, and orchestration layers are deployed, secured, observed, and operated reliably at scale. You will work in a high‑trust environment that values autonomy, engineering rigor, and operational excellence while delivering AI capabilities used by financial institutions worldwide.

The Platform Engineer role is explicitly focused on the deployment and operation of AI systems. You will own the infrastructure, automation, reliability, and performance of AI platforms supporting model training, fine‑tuning, inference, prompt execution, agent workflows, and integrations with banking systems. Your mission is to ensure AI services remain available, secure, observable, and scalable in production.
Job Responsibilities:
Design, build, and operate infrastructure that supports the full AI lifecycle, including data ingestion pipelines, training and fine‑tuning environments, inference services, prompt and agent execution, and orchestration layers
Deploy and manage AI workloads using infrastructure‑as‑code, ensuring reproducible and auditable environments for development, testing, and production
Automate provisioning of compute, storage, networking, and accelerators used by AI systems to minimize manual intervention and operational risk
Operate and monitor AI platforms by measuring availability, latency, throughput, model performance, drift, and resource utilization
Integrate AI services with internal and external systems, including banking platforms
Implement controls for secure model deployment, secrets management, access control, data handling, and encrypted communications across AI workflows
Lead incident response and root‑cause analysis for AI service degradation, model failures, or infrastructure outages impacting production
Plan and execute controlled deployments and upgrades of AI platforms, models, and orchestration components, including out‑of‑hours releases when required
Requirements:
2–3 years of experience in Platform Engineering, DevOps, or MLOps with direct responsibility for deploying and operating AI or data‑intensive systems
Strong practical experience with Infrastructure as Code for AI environments (Terraform, CloudFormation, or equivalent)
Experience operating cloud‑based AI platforms and services
AWS experience is required
Proficiency in scripting or programming languages commonly used in AI platforms, such as Python, Shell, or Java
Hands‑on experience building CI/CD pipelines for AI models, inference services, prompts, and agent workflows
Strong understanding of networking and Linux administration as applied to high‑availability AI systems
Experience with configuration management and automation tools used to manage AI infrastructure
Security‑focused mindset with experience applying controls for model security, data protection, cryptography, and secrets management
Ability to understand AI system behavior, business logic, and model constraints when operating production services
Availability for on‑call support to maintain uptime and reliability of production AI systems
Nice to have:
Financial services or regulated‑industry experience