We are looking for a Data Engineer to join a team focused on building reliable, scalable data solutions. In this role, you will create and enhance cloud-based data pipelines, organize data for analytics, and help ensure that business teams have access to trusted information. This position also partners closely with technical and non-technical stakeholders to turn reporting and data needs into practical engineering outcomes.
Job Responsibilities:
Create and support scalable data ingestion and transformation workflows using Azure Data Factory, Databricks, and PySpark
Connect and consolidate data from enterprise platforms, operational databases, telematics feeds, APIs, and other internal or external sources
Structure and manage data within Azure Data Lake and lakehouse environments to support performance, accessibility, and long-term maintainability
Design curated datasets, data models, and schemas that improve usability for analytics, business intelligence, and downstream reporting
Apply governance and lineage practices through Unity Catalog while promoting strong data quality, consistency, and security standards
Work with business stakeholders and cross-functional teams to gather requirements, define technical specifications, and deliver data solutions aligned with operational needs
Improve pipeline stability and efficiency by troubleshooting failures, resolving performance issues, and refining storage and query strategies
Support Power BI reporting by preparing datasets, assisting with model improvements, and helping maintain reporting standards and governance practices
Use GitHub-based development practices for version control, peer review, CI/CD, and disciplined deployment processes
Mentor less-experienced engineers and contribute to a collaborative environment focused on continuous improvement and dependable delivery
Requirements:
Hands-on experience with Azure Data Factory, Azure Databricks, and Azure Data Lake in a data engineering environment
Strong programming ability in Python and PySpark for large-scale data processing and transformation
Proficiency in SQL, including writing and optimizing queries for analytics and data integration workloads
Experience building and maintaining ETL or ELT pipelines that combine data from multiple structured and semi-structured sources
Familiarity with data modeling concepts, curated dataset design, and preparation of data for BI or analytics consumption
Understanding of data governance, lineage, and security practices within modern cloud data platforms
Experience using GitHub or similar tools for source control, code review, and deployment automation
Ability to communicate effectively with business partners and translate functional needs into scalable technical solutions
What we offer:
Medical, vision, dental, life, and disability insurance