Strong understanding of data warehousing, data lakes, and best practices for data quality, security, and performance
Design, develop, deploy, and maintain scalable data pipelines and ETL/ELT processes using Azure Data Factory (ADF), Azure Databricks, and other Azure data services (e.g., Azure Data Lake Storage, Azure Synapse Analytics)
Implement efficient, reusable, and testable code for data transformation, cleansing, and analysis using Python and PySpark within Databricks notebooks and jobs
Use ADF pipelines to orchestrate data movement and trigger Databricks notebooks or jobs, ensuring seamless integration between various data sources and destinations
Manage data storage, ensure data integrity, optimize data processing, and troubleshoot performance issues related to Databricks and ADF solutions
Collaborate closely with data scientists, data analysts, and business stakeholders to gather requirements and translate business needs into technical specifications and data solutions
Implement monitoring, error handling, and security best practices across all data workflows and ensure compliance with data governance standards
Create and maintain comprehensive technical documentation for data architectures, pipelines, and processes
Proven experience in data engineering, data warehousing, or a similar field
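The posting asks for efficient, reusable, and testable transformation code in Python/PySpark. A minimal plain-Python sketch of that style is below; the record fields (`id`, `email`) are hypothetical, and in Databricks this logic would typically be applied over a Spark DataFrame rather than a Python list:

```python
def cleanse_records(records):
    """Drop records missing an id, trim string whitespace, and lowercase emails.

    A small, pure function like this is easy to unit-test outside the
    cluster before wiring it into a Databricks notebook or job.
    """
    cleaned = []
    for rec in records:
        if not rec.get("id"):
            continue  # discard rows without a primary key
        out = {k: v.strip() if isinstance(v, str) else v for k, v in rec.items()}
        if isinstance(out.get("email"), str):
            out["email"] = out["email"].lower()  # normalize for deduplication
        cleaned.append(out)
    return cleaned
```

Keeping cleansing rules in plain functions like this (rather than inline notebook cells) is one common way to meet the "reusable and testable" bar: the same function can be covered by ordinary unit tests and then invoked from a PySpark UDF or a driver-side loop.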