The Data Engineer is responsible for designing, building, and operating reliable data pipelines on the Azure data platform. This role works closely with Tech and Business Intelligence teams to deliver high-quality, trusted datasets through automated data ingestion, cleansing, and validation processes. The position is a fixed-term contract.
Job Responsibilities:
Design, build, and maintain a Medallion Architecture (Bronze/Silver/Gold) on Azure Databricks to deliver high-quality, business-ready datasets
Develop and operate ETL/ELT workflows with scheduling, monitoring, and rerun strategies
Implement robust data cleansing and transformation logic using SQL and Python/PySpark
Automate data validation and data quality checks embedded within pipelines
Troubleshoot and resolve production data pipeline issues under time pressure
Collaborate closely with Tech and BI teams to translate business requirements into data solutions
Ensure curated datasets are optimized for analytics, reporting, and automation use cases
Support integrations with Power Platform where data pipelines feed business workflows
Document pipeline designs, validation logic, and operational procedures
Continuously improve data reliability, performance, and operational efficiency
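For candidates unfamiliar with the "validation embedded within pipelines" responsibility above, the idea can be sketched in plain Python (function, field, and rule names below are illustrative only; in practice this would typically be expressed in PySpark or Delta Lake constraints):

```python
# Hypothetical sketch of pipeline-embedded data-quality checks.
# All names here are illustrative, not taken from the job posting.

def validate_rows(rows, required_fields, numeric_bounds):
    """Split rows into clean and rejected, recording which rules failed."""
    clean, rejected = [], []
    for row in rows:
        errors = [f for f in required_fields if row.get(f) in (None, "")]
        for field, (low, high) in numeric_bounds.items():
            value = row.get(field)
            if value is not None and not (low <= value <= high):
                errors.append(field)
        (rejected if errors else clean).append((row, errors))
    return [r for r, _ in clean], rejected

# Example: a Bronze-to-Silver cleansing step quarantines bad rows
# instead of letting them reach curated (Gold) datasets.
bronze = [
    {"order_id": "A1", "amount": 120.0},
    {"order_id": "", "amount": 50.0},    # missing key -> rejected
    {"order_id": "A3", "amount": -5.0},  # out of bounds -> rejected
]
silver, quarantine = validate_rows(bronze, ["order_id"], {"amount": (0, 10_000)})
```

The same pattern scales to scheduled pipelines: rejected rows are written to a quarantine table with their failed rules, so reruns and monitoring stay auditable.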
Requirements:
Bachelor’s degree in Computer Science, Engineering, Data, or a related field (or equivalent experience)
5–7 years of hands-on experience in Data Engineering or related roles
Strong SQL skills including joins, aggregations, and reconciliation logic
Strong experience with Python and PySpark for data processing
Hands-on experience with Databricks, Spark, and Delta Lake in production environments
Experience with Azure data services such as Azure SQL databases and Azure Blob Storage / Data Lake Storage
Proven ability to operate and support production pipelines end to end
Proven experience implementing Medallion Architecture and managing data quality within a Lakehouse environment
In-depth understanding of Databricks Unity Catalog, including catalog/schema management and migration from Hive Metastore
Nice to have:
Advanced degree or relevant professional certifications in data or cloud technologies
Ability to work autonomously with strong ownership and accountability
Comfortable working under pressure in fast-paced environments
Proficiency in English is a strong plus, especially for collaboration with global or regional stakeholders
Clear communication skills with both technical and non-technical audiences