We are seeking a highly skilled Data Engineer with strong expertise in AWS, Python, PySpark, and SQL to join our growing technology team. The ideal candidate will design, build, and optimize scalable data pipelines and infrastructure, enabling seamless data integration, transformation, and analytics across the organization.
Job Responsibilities:
Design, develop, and maintain robust ETL pipelines using PySpark, Python, and SQL
Implement and manage data solutions on AWS (S3, Glue, EMR, Redshift, Lambda, etc.)
Collaborate with data scientists, analysts, and business stakeholders to deliver reliable datasets for reporting and advanced analytics
Optimize data workflows for performance, scalability, and cost efficiency
Ensure data quality, governance, and compliance across all systems
Monitor, troubleshoot, and improve existing data pipelines and infrastructure
Document technical processes and contribute to best practices in data engineering
Requirements:
Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field
Proven experience as a Data Engineer (5+ years preferred)
Strong proficiency in Python and PySpark for data processing
Advanced knowledge of SQL for querying, optimization, and data modeling
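As a small example of the SQL querying and optimization skills listed above, the sketch below uses Python's built-in sqlite3 module (chosen only because it needs no external database; the table, index, and values are hypothetical) to show an aggregation query and how an index changes the query plan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical orders table with an index on the common filter column.
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
CREATE INDEX idx_orders_customer ON orders (customer_id);
""")
conn.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(1, 20.0), (1, 5.0), (2, 12.5)],
)

# Aggregation query: total spend per customer.
totals = conn.execute(
    "SELECT customer_id, SUM(amount) FROM orders "
    "GROUP BY customer_id ORDER BY customer_id"
).fetchall()

# EXPLAIN QUERY PLAN shows whether the filter uses the index
# (a full-table SCAN here would signal a missing or unused index).
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM orders WHERE customer_id = ?",
    (1,),
).fetchall()
print(totals)
print(plan)
```

Reading query plans this way (or via `EXPLAIN` in Redshift and other engines) is the usual first step when optimizing a slow query.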