Join us to turn AI ideas into real product features used by our customers worldwide. Work with a curious team building smarter tools for creative automation. At CHILI publish, we build a cloud platform that helps brands and agencies create, automate, and scale digital content with ease. AI and machine learning are becoming central to how we help our customers work smarter - from intelligent content automation to smart templating, recommendations, and beyond.
Job Responsibilities:
Architect and maintain Vector DB solutions and RAG systems that power intelligent search, content suggestions, and automation features in our platform
Design and build scalable ML pipelines that go from experimentation to production without losing rigor or reliability
Bridge the gap between models and product - you won't just train models, you'll wire them into real user-facing features and backend services, end to end
Own MLOps practices across the team: versioning, monitoring, deployment pipelines, model drift detection, and continuous evaluation
Collaborate closely with product and engineering to translate fuzzy business problems into well-defined ML problems - and ship solutions that actually move the needle
Stay current with the fast-moving AI landscape and bring well-considered ideas on where we should invest next
Requirements:
At least 2 years of production experience building and operating Vector Databases (e.g. PGVector, Pinecone, Weaviate, Qdrant) and RAG architectures at scale
Hands-on experience with MLOps: model deployment, versioning, monitoring, CI/CD for ML, and infrastructure tooling (e.g. MLflow, Weights & Biases, SageMaker, or similar)
Strong full-stack development ability - you can build the API layer, hook it up to a frontend, and know enough about UX to make it intuitive for clients. You're comfortable building APIs in Node (TypeScript) and training, evaluating, and running scripts and models in Python
A solid grasp of LLM ecosystems - prompt engineering, fine-tuning trade-offs, embedding models, and how to build reliable, observable AI features in production, considering cost and performance
The ability to communicate clearly about functional and technical aspects to both engineers and non-engineers
Nice to have:
Background in data science or applied ML research - familiarity with model evaluation, experimentation design, and statistical thinking
Experience with cloud-native ML infrastructure (Azure, AWS, or GCP ML and DevOps tooling)
Exposure to content generation, document understanding, or creative (mar)tech - the domain we operate in