We are seeking a highly skilled and motivated Data Engineer to join our growing data and analytics team. The ideal candidate will have strong experience designing and developing scalable data pipelines, integrating complex systems, and optimizing data workflows. Proficiency in Databricks and SAP Datasphere is preferred, as these platforms are central to our data ecosystem; however, a demonstrated ability to adapt quickly to new technologies is also highly valued. This role will play a critical part in ensuring the accessibility, reliability, and performance of our enterprise data infrastructure, enabling impactful business intelligence and data science initiatives.
Job Responsibilities:
Design, build, and maintain robust, scalable, and high-performance data pipelines using Databricks and SAP Datasphere
Collaborate with data architects, analysts, data scientists, and business stakeholders to gather requirements and deliver data solutions aligned with business goals
Integrate diverse data sources (e.g., SAP, APIs, flat files, cloud storage) into the enterprise data platforms
Ensure high standards of data quality and implement data governance practices
Stay current with emerging trends and technologies in cloud computing, big data, and data engineering
Provide ongoing support for the platform, troubleshoot any issues that arise, and ensure high availability and reliability of data infrastructure
Create documentation for the platform infrastructure and processes, and train other team members and users to work with the platform effectively
Requirements:
Bachelor's or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field
5-7+ years of experience in a data engineering or related role
Strong knowledge of data engineering principles, data warehousing concepts, and modern data architecture
Proficiency in SQL and at least one programming language (e.g., Python, Scala)
Experience with cloud platforms (e.g., Azure, AWS, or GCP), particularly in data services
Familiarity with data processing and orchestration tools (e.g., PySpark, Airflow, Azure Data Factory) and with CI/CD pipelines
Nice to have:
Hands-on experience with Databricks (including Spark/PySpark, Delta Lake, MLflow, Unity Catalog, etc.)
Practical experience working with SAP Datasphere (or SAP Data Warehouse Cloud) in data modeling and data integration scenarios
SAP BW or SAP HANA experience is a plus
Experience with BI tools like Power BI or Tableau
Understanding of data governance frameworks and data security best practices
Exposure to data lakehouse architecture and real-time streaming data pipelines
Certifications in Databricks, SAP, or cloud platforms are advantageous