We are seeking a visionary Azure Data Engineer to join an ambitious, cloud-first team driving a large-scale Data & AI transformation. This is not just a maintenance role; it is an invitation to help migrate legacy systems onto a cutting-edge Azure Databricks platform, shaping how data-driven decisions are made at every level of the organization.
Job Responsibilities:
Build and maintain high-performance ETL pipelines using Medallion Architecture, ensuring data moves seamlessly from source to insight
Take the lead in migrating legacy workflows (SQL Server, ADF, SSRS/SSAS) into a modern Databricks environment
Enhance platform capabilities by optimizing ingestion, transformation, and publishing processes
Champion DataOps practices by identifying opportunities for process improvement and increasing overall platform efficiency
Act as a bridge between analysts and architects, ensuring all data products are optimized for observability, quality, and performance
Requirements:
Proven professional experience with Azure Databricks, Azure Data Factory, and Azure SQL
Deep understanding of software engineering principles, including Git, CI/CD, and automated testing
Strong knowledge of Data Vault or Kimball modeling techniques for designing robust, business-aligned data models
Proficiency in SQL, Python, and YAML is essential
A 'you build it, you own it' attitude, characterized by a proactive and entrepreneurial approach to problem-solving
Nice to have:
Experience with Infrastructure as Code (Terraform) and tools like dbt, SQLMesh, or Kafka
Relevant certifications such as DP-203 or Databricks Data Engineer Associate
A solid grasp of Data Governance, security, and platform engineering principles
The ability to self-organize and align personal objectives with the overarching business strategy