We are seeking an experienced Senior Data Engineer with strong expertise in SAP data extraction, transformation, and migration into Databricks. The ideal candidate has hands-on experience building scalable data pipelines, optimizing data workflows, and delivering end-to-end migrations from SAP (ECC, S/4HANA, BW) to Databricks on modern cloud platforms.
Job Responsibilities:
Lead and execute end-to-end SAP to Databricks migration projects
Extract data from SAP systems (ECC, S/4HANA, SAP BW, SAP HANA) using tools such as SAP ODP, SAP SLT, BAPIs, RFCs, CDS Views, SAP Data Services, or ABAP programs (see the ingestion sketch after this list)
Design, develop, and maintain scalable ETL/ELT pipelines in Databricks using PySpark/Spark SQL
Build and optimize data models using Delta Lake architecture
Perform data cleansing, validation, and transformation to ensure quality and consistency
Work with cross-functional teams (SAP functional consultants, architects, BI/analytics stakeholders) to gather data requirements and shape the solution design
Monitor, troubleshoot, and improve data pipeline performance
Ensure adherence to best practices for data security, governance, and compliance
Create technical documentation and support knowledge transfer
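For candidates who want a concrete picture of the work, here is a minimal sketch of the ingestion pattern described above: reading an SAP HANA table into Databricks over JDBC, applying light cleansing, and landing it as a Delta table. The host, credentials, schema, and table names (SAPSR3.MARA, bronze.mara) are hypothetical placeholders, and the SAP HANA JDBC driver is assumed to be installed on the cluster.

```python
# Minimal sketch of an SAP-to-Delta ingestion step in Databricks (PySpark).
# All connection details, schemas, and table names below are illustrative
# placeholders, not values taken from this posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sap_to_delta_sketch").getOrCreate()

# Read a source table from SAP HANA over JDBC (com.sap.db.jdbc.Driver is
# the SAP HANA JDBC driver class; host, port, and credentials are placeholders).
raw = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sap://sap-hana-host:30015")
    .option("driver", "com.sap.db.jdbc.Driver")
    .option("dbtable", "SAPSR3.MARA")  # example: material master table
    .option("user", "EXTRACT_USER")
    .option("password", "***")
    .load()
)

# Light cleansing/validation: trim the key column, drop rows missing the
# primary key, and stamp the load time for auditability.
cleaned = (
    raw.withColumn("MATNR", F.trim(F.col("MATNR")))
    .filter(F.col("MATNR").isNotNull())
    .withColumn("_loaded_at", F.current_timestamp())
)

# Land the result as a Delta table (a "bronze" layer in a lakehouse layout;
# the bronze schema is assumed to exist).
cleaned.write.format("delta").mode("overwrite").saveAsTable("bronze.mara")
```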
Requirements:
5–8 years of experience as a Data Engineer or in a similar role
Hands-on experience in SAP data extraction using OData, ODP, SLT, SAP Data Services, CDS Views, or ABAP
Strong expertise in Databricks, Delta Lake, Notebooks, Databricks Workflows, and Spark clusters (see the upsert sketch after this list)
Proficiency in PySpark, Spark SQL, SQL, and ETL/ELT frameworks
Experience with Databricks on Azure, AWS, or GCP (any one cloud is sufficient)
Strong understanding of data modeling, data warehousing, and lakehouse concepts
Experience in SAP to cloud migration or large-scale data migration projects
Knowledge of version control and CI/CD tools (Git, Azure DevOps, GitHub Actions, etc.)
Familiarity with data governance tools (Unity Catalog is a plus)
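As a concrete illustration of the Delta Lake expertise listed above, the following is a minimal upsert sketch: applying a batch of replicated change records to a Delta table with MERGE, the pattern commonly used when landing SLT/CDC-style feeds in a lakehouse. The table and key names (bronze.mara_changes, silver.mara, MATNR) are hypothetical placeholders.

```python
# Minimal sketch of an incremental upsert into a Delta table using the
# Delta Lake Python API. Table and column names are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incoming batch of changed rows (e.g., records replicated from SAP
# since the last run; this table is assumed to exist).
changes = spark.read.table("bronze.mara_changes")

target = DeltaTable.forName(spark, "silver.mara")

# Upsert keyed on the material number: update rows that already exist
# in the target, insert the ones that do not.
(
    target.alias("t")
    .merge(changes.alias("s"), "t.MATNR = s.MATNR")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```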
Nice to have:
Experience with Azure Data Factory / AWS Glue / GCP Data Fusion
Knowledge of SAP BW extractors and replication techniques
Exposure to Databricks SQL, dashboards, or analytics
Experience with performance tuning of Spark jobs and pipelines (see the tuning sketch after this list)
Certification in Databricks, Azure/AWS, or SAP data integration
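For orientation, here is a minimal sketch of the kind of Spark and Delta tuning this covers. The configuration values are illustrative only, not recommendations for any particular workload, and the table and column names are placeholders.

```python
# Minimal sketch of common tuning knobs for a heavy migration job on
# Databricks. All values and table names below are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Enable adaptive query execution so Spark re-optimizes shuffles at runtime.
spark.conf.set("spark.sql.adaptive.enabled", "true")

# Size shuffle partitions to the data volume instead of the default 200.
spark.conf.set("spark.sql.shuffle.partitions", "400")

# Compact small files in a Delta table to speed up downstream reads;
# ZORDER BY clusters the data on a frequently filtered column.
spark.sql("OPTIMIZE silver.mara ZORDER BY (MATNR)")
```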