This role focuses on modernizing our data landscape. You will be responsible for transforming legacy codebases into clean, efficient, and well-tested systems. Working within a hybrid setup, you will leverage AWS and Spark to handle high-concurrency and high-performance data workloads.
Job Responsibilities:
Design and build scalable data pipelines and assets
Implement Unit Testing/TDD to ensure code reliability
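As a flavor of the TDD practice mentioned above, a common pattern is to keep pipeline transformations as pure Python functions and unit-test them in isolation before wiring them into Spark. The function and field names below are illustrative, not part of this role's actual codebase:

```python
import unittest

def normalize_record(record: dict) -> dict:
    """Trim whitespace and lowercase all string fields of a raw record."""
    return {k: v.strip().lower() if isinstance(v, str) else v
            for k, v in record.items()}

class NormalizeRecordTest(unittest.TestCase):
    def test_strings_are_trimmed_and_lowercased(self):
        self.assertEqual(normalize_record({"name": "  Alice "}),
                         {"name": "alice"})

    def test_non_string_values_pass_through_unchanged(self):
        self.assertEqual(normalize_record({"age": 42}), {"age": 42})

if __name__ == "__main__":
    unittest.main()
```

Because the logic lives in a plain function, the same code can later be applied per-row in a PySpark job (for example via a UDF) while remaining testable without a running cluster.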
Requirements:
3+ years of general data engineering experience is required
4+ years of hands-on Python/PySpark experience is essential