We are seeking a skilled Big Data ETL Developer to design, develop, and maintain large-scale data pipelines and ETL processes. The successful candidate will use big data technologies to process, transform, and manage large volumes of structured and unstructured data.
Job Responsibilities:
Design and develop ETL pipelines for big data processing
Work with big data technologies such as Hadoop, Spark, and Hive
Extract, transform, and load data from multiple sources into data platforms
Optimize data processing workflows and ETL jobs for performance and scalability
Collaborate with data engineers, analysts, and business teams to support data requirements
Ensure data quality, integrity, and governance
Troubleshoot ETL failures and performance issues
Document data pipelines and technical processes
Requirements:
Strong experience in Big Data and ETL development
Proficiency in SQL, Python, or Scala
Hands-on experience with Hadoop ecosystem tools (Spark, Hive, HDFS)
Experience with data pipeline orchestration tools (e.g., Apache Airflow)
Knowledge of data warehousing and big data architecture
Strong analytical and problem-solving skills
Nice to have:
Experience with cloud data platforms such as AWS, Azure, or GCP
Familiarity with Kafka or real-time data streaming technologies
Experience with CI/CD pipelines and data engineering best practices