We are seeking a Cloud ML Engineer (AWS) within the Data & Analytics GSL who will apply ML/AI expertise, alongside strong programming and software engineering skills, to make machine learning models and analyses easier to use and access. The role supports local markets and group functions in obtaining business value from machine learning, and focuses on designing ML systems, productionising prototypes, enabling robust data flows, and evolving Big Data capabilities through reusable assets and patterns.
Job Responsibilities:
Design and develop machine learning systems and implementation patterns
Automate predictive modelling software, including model training
Productionise data science prototypes and develop machine learning applications aligned to data science requirements
Facilitate the flow of data between ML/AI models and the organisation's data systems
Enhance data pipelines to ensure data is clean, accurate, and optimised for machine learning models
Partner with the architecture team to evolve Big Data platform capabilities (reusable assets/patterns) and components to meet business requirements and objectives
Research, investigate, and evaluate new technologies and methods to improve delivery and sustainability of machine learning applications and services
Contribute to defining best practice for agile development of applications running on the Big Data platform
Requirements:
Experience managing the development lifecycle for agile software development projects (Kanban or Scrum exposure)
Robust data modelling and data architecture skills
Knowledge of Big Data frameworks such as Hadoop, Spark, Hive, Yarn, and Airflow
Experience with distributed ML frameworks (for example H2O and/or TensorFlow) and various ML libraries
Proven experience creating and deploying end-to-end ML pipelines in production, including MLOps
Programming experience in Java and Python
Experience with containerisation (Docker/Kubernetes) or cloud alternatives is advantageous
Mandatory working experience with AWS services, including SageMaker Pipelines, SageMaker Studio, CloudFormation, CloudTrail, SNS, EventBridge, CodePipeline, CodeBuild, and CodeCommit
Strong written and verbal communication, with excellent interpersonal and collaboration skills
A 3-year IT or IS degree or diploma (or related field) is essential
An advanced degree in Computer Science, Maths, or Statistics (or a related discipline) is an advantage
Relevant cloud certification at professional or associate level
5+ years of relevant experience as an AI/ML Engineer
5+ years of BI or related software development experience
Nice to have:
Experience with other distributed technologies, NoSQL databases, and streaming technologies
What we offer:
Opportunities to deliver business value by enabling scalable ML capabilities for local markets and group functions
The chance to work on production-grade ML systems, end-to-end pipelines, and MLOps practices on AWS
Collaboration with architecture teams to shape reusable Big Data platform assets and engineering patterns
Scope to explore and evaluate new technologies and methods to improve sustainability and delivery of ML applications and services
Strengthening best practices for productionising data science prototypes into reliable ML applications
Deepening hands-on expertise in AWS-native MLOps and pipeline automation (including SageMaker and AWS developer tools)
Enhancing Big Data platform design skills through reusable patterns, components, and data pipeline optimisation
Improving approaches to cost and resource efficiency across compute and network usage, in line with platform objectives