The Senior Data Engineer will play a crucial role in migrating to a modern data architecture using Azure Databricks and Spark. This position requires strong skills in Python and SQL, along with experience in data modeling and CI/CD practices. The ideal candidate will have a bachelor's degree in Computer Science and at least 5 years of relevant experience. This role offers flexible working arrangements and opportunities for professional development.
Job Responsibilities:
Design, develop, and optimize data pipelines using Azure Databricks, Spark, and related Azure services
Lead and support migration activities from the existing data warehouse/ETL stack into Databricks and modern data lakehouse architectures
Collaborate with architects, data engineers, and analysts to define migration approaches, integration patterns, and technical standards
Build and maintain data ingestion, transformation, and orchestration workflows aligned with best practices for Databricks and Delta Lake
Improve performance, scalability, and reliability of Spark workloads through tuning, optimization, and efficient resource management
Implement data quality, monitoring, and observability components for migrated pipelines
Contribute to platform governance, reusable components, and engineering standards to enable consistent delivery across teams
Document migration procedures, architectural decisions, data models, and operating guidelines
Participate in Agile ceremonies to ensure predictable and transparent delivery during the migration program
Requirements:
Strong hands-on experience with Azure Databricks and Spark (batch and/or streaming)
Demonstrated experience migrating legacy pipelines or data warehouses into Databricks or similar cloud architectures
Proficiency in Python and SQL for building and optimizing data transformations
Knowledge of Delta Lake, Lakehouse principles, and scalable data modeling approaches
Familiarity with CI/CD pipelines and DevOps practices (e.g., Azure DevOps) for data engineering workflows
Understanding of performance tuning, cluster configuration, and efficient resource usage in Spark environments
Ability to translate business data requirements into robust engineering solutions
Experience working in Agile delivery environments
Excellent command of spoken and written English
Bachelor's degree in Computer Science
At least 5 years of relevant experience
Nice to have:
Experience supporting ML or GenAI workloads in Databricks
Exposure to DataOps or MLOps concepts
What we offer:
Smooth onboarding and a supportive mentor
Choice of remote, hybrid, or office-based work
Projects with flexible working hours to suit your needs
Sponsored certifications, training courses, and top e-learning platforms
Private Health Insurance
Individual coaching sessions or access to an accredited Coaching School