We are seeking a specialized Infrastructure Engineer to bridge the gap between our large data repositories, our cloud platform, and the rapidly evolving world of Large Language Models (LLMs). You will be responsible for building the 'plumbing' that allows our internal teams and external users to leverage AI effectively. This includes deploying Model Context Protocol (MCP) servers, building agentic execution environments, and scaling our internal Retrieval-Augmented Generation (RAG) architecture.
Job Responsibilities:
Guide the architecture that will allow us to leverage AI tools with our large existing data stores and incoming streams of real-time intelligence
Work closely with other infrastructure engineers and software development teams to integrate AI tools into existing systems
Design, deploy, and maintain Model Context Protocol (MCP) servers to allow LLMs to securely interact with our internal databases, APIs, and external tooling
Build and orchestrate sandboxed, scalable environments (e.g., using Docker or specialized runtimes) where users can safely build and execute AI agents
Develop and manage the infrastructure for our internal RAG (Retrieval-Augmented Generation) pipeline, including vector database management (e.g., Pinecone, Weaviate, or pgvector) and automated embedding pipelines
Utilize Kubernetes (K8s) and Infrastructure as Code (Terraform/Pulumi) to deploy LLM-related tools, ensuring high availability and low latency for model inference and data retrieval
Implement strict guardrails for data privacy within LLM workflows, ensuring internal datasets remain secure while being accessible to authorized AI tools
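To give candidates a concrete sense of the retrieval work described above, here is a minimal, self-contained sketch of the RAG flow: embed documents, rank them against a query, and assemble a grounded prompt. The bag-of-words `embed` function and the in-memory document list are deliberate stand-ins for a real embedding model and a vector database (e.g., pgvector); in production, this role would replace both with managed, scalable components.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(count * b.get(term, 0) for term, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank every document by similarity to the query; a vector DB
    # would do this with an approximate-nearest-neighbor index instead.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Inject only the retrieved context, so the LLM answers from our data.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The shape of the pipeline (embed, index, retrieve, prompt) is what matters here; the production version swaps in automated embedding jobs and a hosted vector store without changing this flow.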
Requirements:
5+ years of experience in DevOps, Platform Engineering, or SRE, with at least 1-2 years specifically focused on AI/ML infrastructure
Proven track record of building production-grade RAG pipelines or LLM-integrated applications
Ability to thrive in 'day zero' environments where the tools and protocols (like MCP) are evolving weekly
Deep understanding of the security implications of LLMs (prompt injection, data leakage, and secure tool execution)
Experience working with substantial datasets (over 1 billion objects; dozens to hundreds of TB) and the challenges of applying AI tools to data at that scale
Bachelor's degree or equivalent in computer science or a related field