The Data Engineer role focuses on designing, building, and optimizing scalable data solutions that support diverse business needs. This position requires the ability to work independently while collaborating effectively in a fast-paced, agile environment. The individual in this role partners with cross-functional teams to gather data requirements, recommend enhancements to existing data pipelines and architectures, and ensure the reliability, performance, and efficiency of data processes.
Job Responsibilities:
Support the team’s adoption and continued evolution of the Databricks platform, leveraging features such as Delta Live Tables, workflows, and related tooling
Design, develop, and maintain data pipelines that extract data from relational sources, load it into a data lake, transform it as needed, and publish it to a Databricks-based lakehouse environment
Optimize data pipelines and processing workflows to improve performance, scalability, and overall efficiency
Implement data quality checks and validation logic to ensure data accuracy, consistency, and completeness
Create and maintain documentation including data mappings, data definitions, architectural diagrams, and data flow diagrams
Develop proof-of-concepts to evaluate and validate new technologies, tools, or data processes
Deploy, manage, and support code across non-production and production environments
Investigate, troubleshoot, and resolve data-related issues, including identifying root causes and implementing fixes
Identify performance bottlenecks and recommend optimization strategies, including database tuning and query performance improvements
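The data quality responsibility above can be sketched in plain Python. This is a minimal illustration only: the column names (`order_id`, `amount`) and the rules are hypothetical, and a production version would typically run as expectations inside a Databricks/Delta Live Tables pipeline rather than as a standalone function.

```python
# Minimal data-quality check sketch (plain-Python stand-in for a
# Spark/Databricks job). Column names and rules are hypothetical.

def validate_rows(rows, required_cols=("order_id", "amount")):
    """Partition rows into (valid, rejected) using simple completeness
    and accuracy checks: required fields present, amount non-negative."""
    valid, rejected = [], []
    for row in rows:
        missing = [c for c in required_cols if row.get(c) is None]
        if missing:
            rejected.append({**row, "_errors": [f"missing:{c}" for c in missing]})
        elif row["amount"] < 0:
            rejected.append({**row, "_errors": ["negative_amount"]})
        else:
            valid.append(row)
    return valid, rejected

sample = [
    {"order_id": 1, "amount": 19.99},
    {"order_id": 2, "amount": -5.00},    # fails the range check
    {"order_id": None, "amount": 3.50},  # fails the completeness check
]
valid, rejected = validate_rows(sample)
```

Routing failed rows to a quarantine table (rather than dropping them) is the usual pattern, so root-cause analysis of data issues stays possible.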
Requirements:
Bachelor’s degree in Computer Science, Data Science, Software Engineering, Information Systems, or a related quantitative discipline
4+ years of experience in data-focused roles such as Data Engineer, ETL Engineer, Data Architect, or similar
Active Databricks Data Engineer or Analyst certification
4+ years of hands-on experience developing in Python
3+ years of experience working with Databricks, including production implementations involving data structures, data storage, change data capture, pipeline optimization, and best practices
3+ years of experience with Kimball dimensional modeling, including star schemas, fact tables, Type 1 and Type 2 dimensions, aggregates, and a strong understanding of ELT/ETL methodologies
3+ years of experience writing complex SQL and PL/SQL
2+ years of experience using Airflow for workflow orchestration
3+ years of experience working with relational databases (Oracle or similar platforms)
2+ years of experience with NoSQL databases such as MongoDB, Cosmos DB, DocumentDB, or comparable technologies
2+ years of cloud platform experience (Azure or equivalent)
Experience implementing CI/CD practices using Git-based workflows and DevOps tools
Familiarity with modern data storage formats including Parquet, Arrow, and Avro
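The Kimball Type 2 dimension requirement above can be illustrated with a minimal sketch in plain Python, assuming a hypothetical customer dimension that tracks history on a single attribute (`city`). Real pipelines would express this as a SQL `MERGE` (or Delta Lake `MERGE INTO`); the logic shown is the same: expire the current row and insert a new current one.

```python
from datetime import date

# Sketch of Kimball Type 2 slowly changing dimension handling.
# Table and column names are hypothetical illustrations.

def apply_type2_update(dimension, incoming, effective=date(2024, 1, 1)):
    """For each incoming record: insert a new current row if the key is
    new; if a tracked attribute changed, expire the current row and add
    a new current row (Type 2). Unchanged keys are left alone."""
    current_by_key = {r["customer_id"]: r for r in dimension if r["is_current"]}
    result = list(dimension)
    for rec in incoming:
        current = current_by_key.get(rec["customer_id"])
        if current is None:
            result.append({**rec, "valid_from": effective,
                           "valid_to": None, "is_current": True})
        elif current["city"] != rec["city"]:  # tracked attribute changed
            current["valid_to"] = effective   # expire the old version
            current["is_current"] = False
            result.append({**rec, "valid_from": effective,
                           "valid_to": None, "is_current": True})
    return result

dim = [{"customer_id": 1, "city": "Oslo",
        "valid_from": date(2020, 1, 1), "valid_to": None, "is_current": True}]
updated = apply_type2_update(dim, [{"customer_id": 1, "city": "Bergen"}])
```

After the update, the dimension holds two versions of customer 1: the expired Oslo row and a new current Bergen row, preserving full history for fact-table joins.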