As a Consulting Data Engineer, you’ll design, build, and deploy scalable data pipelines and machine learning solutions that deliver real business value. You’ll use strong SQL and modern data stacks to create reliable, cost-effective pipelines and support ML workloads. You’ll tackle meaningful technical challenges with autonomy and communicate technical outcomes clearly to both technical and non-technical stakeholders.

We’re looking for engineers who do more than write code: people who think creatively, communicate effectively, and engage confidently with stakeholders. You’ll listen to client challenges, dig into the core problem, help shape solutions, and explain them clearly. If you want to build something from the ground up with a team that’s already proven it can deliver meaningful outcomes, we’d like to hear from you.
Job Responsibilities:
Build and maintain scalable data pipelines for ingesting, transforming, and delivering data
Manage and optimise databases, warehouses, and cloud storage solutions
Implement data quality frameworks and testing processes to ensure reliable systems
Design and deliver cloud-based solutions (AWS, Azure, or GCP)
Take technical ownership of project components and lead small development teams
Engage directly with clients, translating business requirements into technical solutions
Champion best practices including version control, CI/CD, and infrastructure as code
Requirements:
Hands-on data engineering experience in production environments
Strong proficiency in Python and SQL
Experience with at least one additional language (e.g. Java, TypeScript/JavaScript)
Experience with modern frameworks such as Apache Spark, Airflow, dbt, Kafka, or Flink
Background in building ML pipelines, MLOps practices, or feature stores is highly valued
Proven expertise in relational databases, data modelling, and query optimisation
Demonstrated ability to solve complex technical problems independently
Excellent communication skills with ability to engage clients and stakeholders
Degree in Computer Science, Engineering, Data Science, Mathematics, or a related field
Nice to have:
Experience with at least one of Databricks, Snowflake, BigQuery, or MS Fabric; more than one is a bonus
Experience with tools such as Spark, dbt, or Dataform is a plus but not required
What we offer:
Work with SAP’s latest cloud technologies such as S/4HANA, BTP, and Joule, plus Databricks, ML/AI tools, and cloud platforms
A flexible and supportive work environment including work from home
Competitive remuneration and benefits including novated lease, birthday leave, salary packaging, wellbeing programme, additional purchased leave, and company-provided laptop
Comprehensive training budget and paid certifications (Databricks, SAP, cloud platforms, Snowflake, BigQuery)
Structured career advancement pathways with mentoring from senior engineers
Exposure to diverse industries and client environments