Responsibilities:
Design and build agentic systems: single agents, multi-agent orchestration, and sub-agent patterns for complex workflows (planner/executor, supervisor/worker, hierarchical task decomposition)
Develop MCP (Model Context Protocol) servers and tools that expose internal systems, datasets, and actions to LLM-powered applications in a safe, governed, and reusable way
Implement retrieval systems (RAG, hybrid search, graph-based retrieval) including chunking, embedding, re-ranking, and context-assembly strategies
Build and maintain MLOps automation: CI/CD for models and agents, environment management, artifact handling, and versioning of prompts, models, data, and code
Implement observability for AI systems: tracing, token/latency/cost metrics, quality and drift monitoring, alerting, and incident response
Build and maintain data pipelines for ingestion, transformation, and export across multiple sources and destinations
Expose well-modelled, governed datasets and APIs that agents, tools, and downstream consumers can rely on
Ensure secure data handling and compliance with relevant data protection standards and internal policies
Contribute to documentation, standards, and continuous improvement of the data platform and engineering processes
Requirements:
Bachelor's degree in Computer Science, Engineering, Mathematics, or a related technical field (or equivalent practical experience)
5+ years of Data or ML Engineering experience, with at least 3 years shipping AI or ML systems to production
Hands-on experience building agentic applications with frameworks such as LangGraph, LlamaIndex, CrewAI, or the Anthropic/OpenAI Agents SDKs, including tool use, memory, and multi-step reasoning patterns
Practical experience with MCP or comparable tool/function-calling protocols; comfortable designing tool schemas and sub-agent boundaries
Experience with RAG architectures, vector stores (e.g., pgvector, Pinecone, Weaviate), and embedding models
Familiarity with at least one major cloud provider (GCP, AWS, Azure) and deploying data solutions in the cloud
Strong DevOps fundamentals: CI/CD (GitHub Actions, Cloud Build, or similar), IaC (Terraform), containerisation (Docker), and orchestration (Kubernetes or serverless equivalents)
Comfortable building and maintaining data pipelines with orchestrators (Airflow/Composer, Dagster) and distributed engines (Spark, BigQuery)
Strong troubleshooting mindset: ability to debug issues across data, infra, pipelines, and deployments
Collaborative mindset and clear communication across engineering, analytics, and business stakeholders
Nice to have:
Strong GCP experience and ecosystem knowledge: Vertex AI (Agent Engine, Model Garden, Pipelines, Endpoints), Cloud Run, BigQuery, Composer, Dataproc, Dataplex, Cloud Storage
Experience with data governance concepts: access control, retention, data classification, auditability, and compliance standards
Model monitoring experience: drift detection, data quality issues, performance degradation, bias checks, and alerting strategies
What we offer:
Vibrant, international team operating in a high-tech environment
Annual salary reviews, promotions, and performance bonuses
myPOS Academy for upskilling and training
Unlimited access to courses on LinkedIn Learning
Annual individual training and development budget
Refer-a-friend bonus, because we know that working with friends is fun
Team-building, social activities, and networking at a multinational level
Excellent compensation package
25 days of annual paid leave (+1 day per year, up to 30)
Full Luxury-package health insurance, including dental care and optical glasses