At myPOS, we're all about helping businesses grow and get paid. We make payments simple, smart, and accessible for everyone, but we're more than just a payment solutions provider - myPOS is a partner in growth. From free multicurrency accounts to powerful e-commerce tools, we're here to support business owners of all sizes and everyone who dreams of starting their own business. As we expand our team, we're looking for an AI Engineer to help us make a real difference in the fintech industry. Ready to join us and shape the future of payments? Let's make it happen!

About the role:

As an AI Engineer, you'll design, build, and operate production-grade AI/ML systems - from agentic applications and MCP-powered tools to the underlying MLOps infrastructure and data foundations that make them reliable at scale. You'll sit at the intersection of applied AI, DevOps/MLOps, and data engineering, taking systems from prototype to production and keeping them healthy once they're there. Your work will power intelligent products and internal automation across the company and help shape how the organisation safely adopts AI at scale.
Job Responsibilities:
Design and build agentic systems: single agents, multi-agent orchestration, and sub-agent patterns for complex workflows (planner/executor, supervisor/worker, hierarchical task decomposition)
Develop MCP (Model Context Protocol) servers and tools that expose internal systems, datasets, and actions to LLM-powered applications in a safe, governed, and reusable way
Implement retrieval systems (RAG, hybrid search, graph-based retrieval) including chunking, embedding, re-ranking, and context-assembly strategies
Build and maintain MLOps automation: CI/CD for models and agents, environment management, artifact handling, and versioning of prompts, models, data, and code
Implement observability for AI systems: tracing, token/latency/cost metrics, quality and drift monitoring, alerting, and incident response
Build and maintain data pipelines for ingestion, transformation, and export across multiple sources and destinations
Expose well-modelled, governed datasets and APIs that agents, tools, and downstream consumers can rely on
Ensure secure data handling and compliance with relevant data protection standards and internal policies
Contribute to documentation, standards, and continuous improvement of the data platform and engineering processes
Requirements:
Bachelor's degree in Computer Science, Engineering, Mathematics, or a related technical field (or equivalent practical experience)
5+ years of Data or ML Engineering experience, with at least 3 years shipping AI or ML systems to production
Hands-on experience building agentic applications with frameworks such as LangGraph, LlamaIndex, CrewAI, or the Anthropic/OpenAI Agents SDKs — including tool use, memory, and multi-step reasoning patterns
Practical experience with MCP or comparable tool/function-calling protocols; comfortable designing tool schemas and sub-agent boundaries
Experience with RAG architectures, vector stores (e.g. pgvector, Pinecone, Weaviate), and embedding models
Familiarity with at least one major cloud provider (GCP, AWS, Azure) and deploying data solutions in the cloud
Strong DevOps fundamentals: CI/CD (GitHub Actions, Cloud Build, or similar), IaC (Terraform), containerisation (Docker), and orchestration (Kubernetes or serverless equivalents)
Comfortable building and maintaining data pipelines with orchestrators (Airflow/Composer, Dagster) and distributed engines (Spark, BigQuery)
Strong troubleshooting mindset: ability to debug issues across data, infra, pipelines, and deployments
Collaborative mindset and clear communication across engineering, analytics, and business stakeholders
Nice to have:
Strong GCP experience and ecosystem knowledge: Vertex AI (Agent Engine, Model Garden, Pipelines, Endpoints), Cloud Run, BigQuery, Composer, Dataproc, Dataplex, Cloud Storage
Experience with data governance concepts: access control, retention, data classification, auditability, and compliance standards
Model monitoring experience: drift detection, data quality issues, performance degradation, bias checks, and alerting strategies
What we offer:
Excellent compensation package
25 days annual paid leave (+1 day per year up to 30)
Full 'Luxury' package health insurance including dental care and optical glasses
Meal vouchers of 102.26 EUR per month
Fully covered Multisport card
Fully covered public transport pass for Sofia
Free coffee, snacks and drinks at the office
Annual salary reviews, promotions and performance bonuses
myPOS Academy for upskilling and training
Unlimited access to courses on LinkedIn Learning
Annual individual training and development budget
Refer a friend bonus
Team-building, social activities and networking at a multi-national level