We are hiring a founding group of engineers to kickstart this mission. As an AI Engineer on this team, you won’t be maintaining legacy systems; you’ll be building from the ground up. You will own the full spectrum of AI: crafting predictive ML models to find insights in our data warehouse and architecting GenAI systems that allow any employee to interact with enterprise knowledge through intelligent agents.
Job Responsibilities:
Build the AI Platform: Design, deploy, and scale GenAI infrastructure on GCP Vertex AI (preferred) or AWS Bedrock
Build & Operate MCP Servers: Develop and maintain Model Context Protocol (MCP) servers that expose enterprise systems (databases, APIs, internal tools) as structured, AI-consumable capabilities
Design the Tool Layer: Build the tool registry and invocation framework that allows agents to interact with internal systems safely and reliably, including schema design, access controls, and error handling
Engineer Agent-to-Agent Infrastructure: Architect the communication and coordination layer that allows specialized agents to delegate tasks, share context, and compose into larger autonomous workflows
Own the Retrieval Layer: Architect and operate RAG systems, including vector stores, embedding pipelines, chunking strategies, and retrieval evaluation frameworks
Establish AI Reliability: Instrument AI systems with logging, tracing, latency monitoring, and evaluation hooks so agents can be trusted, debugged, and improved in production
Requirements:
AI Infrastructure Experience: Professional experience building and deploying production-grade GenAI systems on GCP Vertex AI or AWS Bedrock, including LLM APIs, agent frameworks, and RAG pipelines
Python Mastery: Deep proficiency in Python and common AI/ML libraries (e.g., LangChain, LlamaIndex, OpenAI SDK, Google Cloud AI SDK)
Agentic Systems: Hands-on experience building multi-step, tool-calling AI agents that operate reliably in production, including tool schema design, structured outputs, and failure handling
MCP or Tool Protocol Experience: Familiarity with Model Context Protocol (MCP) or equivalent patterns for exposing enterprise resources to AI systems as callable, permissioned tools
Vector & Retrieval Systems: Experience with vector databases (Pinecone, pgvector, or similar) and embedding-based retrieval at scale
Technical Foundation: B.S. in Computer Science, Engineering, or a related technical field
The Pioneer Mindset: A self-starter who is excited to be part of a "first-of-its-kind" team and thrives in environments where you are building the playbook
Nice to have:
Multi-Agent Architectures: Experience designing orchestrator/subagent patterns, agent handoff protocols, or systems where multiple AI models coordinate to complete complex tasks (e.g., using frameworks like LangGraph, CrewAI, or AutoGen)
AI Observability: Experience with LLM evaluation frameworks, prompt regression testing, or agent trace monitoring (e.g., custom eval pipelines)
Data Warehouse Integration: Familiarity with SQL and pulling data from Snowflake or BigQuery to ground AI agents in live business context
Appian Exposure: Experience with the Appian platform or interest in integrating AI agents into low-code automation workflows
Predictive ML: Familiarity with ML fundamentals is a plus, but not a focus of this role
What we offer:
Health coverage
Employee Assistance Program (EAP) with free mental health support