We are seeking a Machine Learning Engineer to help build and scale our machine learning infrastructure and workflows. At Duetto, you’ll take on the unique challenge of supporting the development, training, deployment, and monitoring of thousands of machine learning models, one for each hotel customer. You’ll work closely with data scientists, DevOps, and platform engineers to deliver robust, reusable tooling for the entire ML lifecycle—including training pipelines, inference APIs, feature workflows, and monitoring hooks—within our AWS-native environment. Your work will help us ensure that ML models are delivered quickly, reliably, and cost-effectively into production. This is an opportunity to build ML systems at scale, contribute to the design of modern ML infrastructure on top of AWS and Kubernetes, and shape the future of machine learning at Duetto.
Job Responsibilities:
Develop, maintain, and scale machine learning pipelines for training, validation, and batch or real-time inference across thousands of hotel-specific models
Build reusable components to support model training, evaluation, deployment, and monitoring within a Kubernetes- and AWS-based environment
Partner with data scientists to translate notebooks and prototypes into production-grade, versioned training workflows
Implement and maintain feature engineering workflows, integrating with custom feature pipelines and supporting services
Collaborate with platform and DevOps teams to manage infrastructure-as-code (Terraform), automate deployment (CI/CD), and ensure reliability and security
Integrate model monitoring for performance metrics, drift detection, and alerting (using tools like Prometheus, CloudWatch, or Grafana)
Improve retraining, rollback, and model versioning strategies across different deployment contexts
Support experimentation infrastructure and A/B testing integrations for ML-based products
Requirements:
3+ years of experience in ML engineering or a similar role building and deploying machine learning models in production
Strong experience with AWS ML services (SageMaker, Lambda, EMR, ECR) for training, serving, and orchestrating model workflows
Hands-on experience with Kubernetes (e.g., EKS) for container orchestration and job execution at scale
Strong proficiency in Python, with exposure to ML/DL libraries such as TensorFlow, PyTorch, and scikit-learn
Experience working with feature stores, data pipelines, and model versioning tools (e.g., SageMaker Feature Store, Feast, MLflow)
Familiarity with infrastructure-as-code and deployment tools such as Terraform, GitHub Actions, or similar CI/CD systems
Experience with logging and monitoring stacks such as Prometheus, Grafana, CloudWatch, or similar
Experience working in cross-functional teams with data scientists and DevOps engineers to bring models from research to production
Strong communication skills and ability to operate effectively in a fast-paced, ambiguous environment with shifting priorities