Job Responsibilities:
Design, develop, and optimize data pipelines using Azure Data Services (Azure Data Factory, Azure Data Lake Storage, Azure Synapse)
Build and maintain scalable ETL/ELT workflows using Databricks (Spark, PySpark, Delta Lake)
Implement and manage data orchestration and dependency management using Dagster or similar tools
Partner with analytics, data science, and product teams to ensure reliable, high-quality data availability
Optimize data models and storage strategies for performance, scalability, and cost efficiency
Ensure data quality, observability, and reliability through monitoring, logging, and automated validation
Support CI/CD pipelines and infrastructure-as-code practices for data platforms
Enforce data security, governance, and compliance best practices within Azure
Requirements:
3+ years of experience in data engineering or analytics engineering
Strong hands-on experience with Databricks, including Spark-based data processing
Experience building data pipelines in Microsoft Azure
Proficiency with SQL and Python
Experience with modern data orchestration tools (e.g., Dagster, Airflow, Prefect)
Familiarity with data warehousing concepts, dimensional modeling, and ELT patterns
Experience working in Agile or DevOps-oriented environments
What we offer:
Medical, vision, dental, life, and disability insurance