Build, optimize, and maintain ETL/ELT pipelines using Azure Databricks (PySpark)
Implement data quality checks, monitoring, and observability processes
Work with structured and unstructured data from multiple sources
Design data models for analytics (Star Schema, Lakehouse architecture)
Develop Power BI datasets, semantic models, and dataflows
Work within the Azure ecosystem: Storage, SQL, Key Vault
Provide occasional support for Power Platform development (Power Apps, Power Automate) for minor requests, such as adjusting existing flows, fixing simple errors, or implementing small business-requested automations
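The data-quality responsibilities above can be sketched as a minimal null-rate check. This is a plain-Python illustration only; in a Databricks pipeline the same logic would typically be expressed in PySpark, and the function names and threshold here are illustrative assumptions, not part of the role description.

```python
# Minimal sketch of a data-quality check of the kind described above.
# Plain Python for brevity; all names and thresholds are illustrative.

def null_fraction(rows, column):
    """Return the fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for row in rows if row.get(column) is None)
    return missing / len(rows)

def check_quality(rows, column, max_null_fraction=0.05):
    """Flag a batch whose null rate for `column` exceeds the threshold."""
    frac = null_fraction(rows, column)
    return {
        "column": column,
        "null_fraction": frac,
        "passed": frac <= max_null_fraction,
    }

if __name__ == "__main__":
    batch = [{"order_id": 1}, {"order_id": None}, {"order_id": 3}]
    print(check_quality(batch, "order_id"))
```

In a real pipeline such checks would run per batch, with results written to a monitoring table or surfaced through alerting as part of the observability process.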
Requirements:
Strong expertise in PySpark and Azure Databricks
Advanced SQL knowledge
Strong data modeling skills in Power BI
At least 3 years of experience in a data engineering or similar role
Hands-on experience with Azure data tools (Databricks, ADF, Storage) and Power BI
Bachelor’s degree in Computer Science, Engineering, Mathematics, or related fields, or equivalent practical experience
Nice to have:
Experience with Azure Data Factory
Familiarity with Power Platform components (Power Apps, Power Automate)
Experience with CI/CD pipelines
Familiarity with SharePoint Framework (React)
General Python knowledge and best practices outside the PySpark ecosystem
Understanding of governance tools such as Unity Catalog