Enterprise Orchestration: Build and scale complex workflows using n8n (Vertex AI Pipelines experience is an advantage), ensuring seamless integration of LLMs (Gemini 2.5/3.0) with internal systems, databases, and external APIs
Modern AI Deployment: Containerize and deploy models as scalable microservices using FastAPI, Docker, and GKE (Google Kubernetes Engine)
Advanced AI Orchestration: Design and implement sophisticated LLM workflows and multi-agent systems, potentially adopting frameworks like LangGraph, LangChain, AgentSkills, and MCP (Model Context Protocol) to create autonomous, tool-using solutions
End-to-End Agentic RAG Development: Lead the complete RAG lifecycle, managing everything from data cleansing, chunking, and embedding strategies to the final delivery of production-ready enterprise solutions
Model Optimization: Fine-tune Large Language Models (LLMs) and Small Language Models (SLMs like Gemma 3) using PEFT (LoRA/QLoRA) within Vertex AI Studio
Prompt Engineering: Develop and refine high-quality prompts that precisely drive and optimize AI solution logic
Market Intelligence: Maintain a proactive watch on the AI market to identify and adopt emerging technologies that provide a competitive advantage
Production Quality & Ops: Collaborate with MLOps/LLMOps teams to ensure seamless product delivery, implementing robust monitoring to maintain high production quality after go-live
Collaborative Solution Design: Work closely with the High-Code Engineering team and engage directly with business stakeholders to gather requirements and ensure AI solutions are perfectly aligned with user needs
Model Strategy & Evaluation: Perform rigorous model evaluations to decide which LLMs should be adopted or replaced within existing solutions to optimize performance and cost
User Interface Hosting: Build and host Gradio/Streamlit user interfaces that let users interact with AI solutions
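To give a flavor of the RAG preprocessing work described above, here is a minimal sketch of a fixed-size chunking strategy with overlap, one common approach to preparing documents for embedding. The function name and parameters are illustrative, not taken from any specific framework used on the team:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # advance by chunk_size minus overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# 500 characters with step 150 -> windows starting at 0, 150, 300, 450
document = "A" * 500
chunks = chunk_text(document, chunk_size=200, overlap=50)
print(len(chunks))  # → 4
```

In production the chunk size and overlap are typically tuned against retrieval quality, and token-aware or semantic splitters are often preferred over raw character windows.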
Requirements:
Vertex AI Suite: Hands-on experience with Vertex AI Agent Builder, Model Garden, and Vertex AI Search & Conversation
Gemini Ecosystem: Proficiency in utilizing the Gemini API (Pro, Flash, and Ultra) and Gemini Code Assist for accelerated development
Data Cleansing & Analytics: Hands-on experience in data cleansing with Python for grounding AI solutions in structured data
Agentic Frameworks: Expertise in LangGraph and LangChain for building stateful, multi-turn agentic applications
Model Engineering: Strong proficiency in Python (PyTorch, Scikit-learn, TensorFlow) for training, fine-tuning, and evaluating neural networks
Cloud: Proven experience with and a deep understanding of the Azure and GCP ecosystems
CI/CD/MLOps: Mastery of Vertex AI Pipelines, Azure DevOps, Argo Workflows, and container orchestration via Kubernetes (GKE)
Orchestration Tools: Expert-level knowledge of n8n, specifically using Python Code Nodes and custom API connectors
Real-time Data: Experience with change data capture (CDC) to ensure vector stores reflect real-time database changes
Experience building large-scale production AI workloads on Azure or Google Cloud
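As an illustration of the Python data-cleansing skills listed above, here is a minimal, standard-library-only sketch that normalizes keys and whitespace and drops incomplete or duplicate records before they are used to ground an AI solution. The field names are hypothetical:

```python
import re

def clean_records(records: list[dict]) -> list[dict]:
    """Lowercase keys, normalize whitespace, drop rows missing or repeating an id."""
    cleaned = []
    seen_ids = set()
    for row in records:
        # Normalize key casing and surrounding whitespace
        row = {k.strip().lower(): v for k, v in row.items()}
        rid = row.get("id")
        if rid is None or rid in seen_ids:
            continue  # skip incomplete or duplicate rows
        seen_ids.add(rid)
        if isinstance(row.get("name"), str):
            # Collapse runs of whitespace (spaces, tabs) to a single space
            row["name"] = re.sub(r"\s+", " ", row["name"]).strip()
        cleaned.append(row)
    return cleaned

raw = [
    {"ID": 1, "Name": "  Alice   Smith "},
    {"ID": 1, "Name": "Alice Smith"},   # duplicate id
    {"Name": "No id here"},             # missing id
    {"ID": 2, "Name": "Bob\tJones"},
]
print(clean_records(raw))  # → [{'id': 1, 'name': 'Alice Smith'}, {'id': 2, 'name': 'Bob Jones'}]
```

At scale the same logic would usually run inside a pipeline step (e.g. a pandas transform or a Vertex AI Pipelines component) rather than a standalone function.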
Nice to have:
Related AI engineering certifications