This role involves designing, building, and optimizing data ingestion, transformation, and delivery pipelines that support enterprise analytics, reporting, and operational data needs.
Job Responsibilities:
Designing, building, and optimizing data ingestion, transformation, and delivery pipelines that support enterprise analytics, reporting, and operational data needs
Requirements:
3+ years of professional data engineering experience
Strong hands-on expertise with Azure Databricks (Spark/PySpark), Azure Data Factory (pipelines, data flows, orchestration), Azure Data Lake Storage, and SQL and Python/PySpark scripting
Experience building scalable, reliable ETL/ELT solutions in cloud environments
Familiarity with CI/CD, version control, and DevOps workflows for data solutions
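The ETL/ELT work described above follows the extract-transform-load pattern. As a rough illustration only (a plain-Python sketch with hypothetical field names; an actual Databricks job would express these steps as PySpark DataFrame operations reading from and writing to Azure Data Lake Storage):

```python
# Minimal ETL sketch: extract raw records, transform (clean and type-cast),
# load into a sink. Field names ("order_id", "amount") are hypothetical.

def extract():
    # Stand-in for reading raw source data (e.g., files landed in a data lake).
    return [
        {"order_id": "1", "amount": "19.99"},
        {"order_id": "2", "amount": ""},        # bad row: missing amount
        {"order_id": "3", "amount": "5.00"},
    ]

def transform(rows):
    # Drop rows with missing amounts and cast strings to typed values.
    return [
        {"order_id": int(r["order_id"]), "amount": float(r["amount"])}
        for r in rows
        if r["amount"]
    ]

def load(rows, sink):
    # Stand-in for writing to a destination table (e.g., a Delta table).
    sink.extend(rows)
    return len(rows)

sink = []
loaded = load(transform(extract()), sink)
print(loaded)  # → 2
```

The same three stages map onto the listed tools: Data Factory orchestrates the extract, Databricks/PySpark performs the transform, and Data Lake Storage holds both raw input and curated output.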