We are looking for a skilled Data Engineer with strong experience in Databricks and the Azure ecosystem to help build and optimize modern data pipelines for our client, one of the UK's leading energy providers. The role focuses on developing scalable, high-performance data processing solutions, implementing robust ETL workflows, and enabling advanced analytics across large-scale operational and customer datasets. A minimum of 3 years in data engineering and a BSc/MSc in Computer Science or a related field are required. Your expertise will support the client's efforts to modernize its data landscape, enhance real-time insights, and drive key initiatives in energy distribution, sustainability, and grid innovation within a highly regulated environment.
Job Responsibility:
Client Engagement & Delivery
Data Pipeline Development (Batch and Streaming)
Fabric and Azure Architectures
Data Modelling & Optimisation
Collaboration & Best Practices
Quality, Governance & Security
Engagement with client stakeholders up to the Head of Data Engineering, Chief Data Architect, and Analytics leadership
Delivery of high-performing, scalable, and secure data pipelines aligned to client requirements
High client satisfaction and successful adoption of Fabric and Azure based solutions
Continuous improvement of data engineering practices
Contribution to the growth of the practice through reusable assets, accelerators, and technical leadership
Requirements:
Minimum 3–8 years in data engineering, data warehousing, or data architecture roles
At least 3 years working with Fabric
BSc/MSc in Computer Science, Data Engineering, or related field
Proven experience in data engineering and pipeline development on Fabric, Azure and cloud-native platforms
Familiarity with Fabric Workflows and other orchestration tools
Proficiency in ETL/ELT tools such as DBT, Matillion, Talend, or equivalent
Strong SQL and Python (or equivalent language) skills for data manipulation and automation
Proficiency in the Azure cloud ecosystem and infrastructure-as-code (e.g., Terraform)
Knowledge of data modelling methodologies (star schemas, Data Vault, Kimball, Inmon)
Familiarity with medallion architectures, data lakehouse principles and distributed data processing
Experience with version control tools (GitHub, Bitbucket) and CI/CD pipelines
Understanding of data governance, security, and compliance frameworks
Strong consulting values with ability to collaborate effectively in client-facing environments
Hands-on expertise across the data lifecycle: ingestion, transformation, modelling, governance, and consumption
Strong problem-solving, analytical, and communication skills
Experience leading or mentoring teams of engineers to deliver high-quality scalable data solutions
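To illustrate the medallion (bronze/silver/gold) layering named in the requirements above, here is a minimal, self-contained sketch. The record and field names (customer_id, kwh) are illustrative assumptions, not details from this posting; a real pipeline would run on Fabric or Databricks rather than plain Python.

```python
# Minimal sketch of the medallion (bronze/silver/gold) layering pattern.
# Field names (customer_id, kwh) are illustrative, not from the posting.

# Bronze: raw ingested records, stored as-is (including bad rows).
bronze = [
    {"customer_id": "C1", "kwh": 12.5},
    {"customer_id": "C2", "kwh": 8.0},
    {"customer_id": "C2", "kwh": -3.0},   # invalid (negative) reading
    {"customer_id": None, "kwh": 4.2},    # missing key
]

# Silver: cleaned and validated -- drop rows with missing keys or bad values.
silver = [r for r in bronze if r["customer_id"] is not None and r["kwh"] >= 0]

# Gold: business-level aggregate, ready for analytics and reporting.
gold = {}
for r in silver:
    gold[r["customer_id"]] = gold.get(r["customer_id"], 0.0) + r["kwh"]

print(gold)  # {'C1': 12.5, 'C2': 8.0}
```

The key design point is that each layer is derived from the previous one, so raw data is never lost and downstream tables can always be rebuilt.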
Nice to have:
Exposure to AI/ML workloads
Experience with additional cloud ecosystems (AWS, GCP)
Fabric and Azure certifications
What we offer:
Smooth onboarding and a supportive mentor
Pick your working style: choose from Remote, Hybrid or Office work opportunities
Flexible working hours across projects to suit your needs
Sponsored certifications, training courses, and top e-learning platforms
Private Health Insurance
Individual coaching sessions or joining our accredited Coaching School