We are looking for an experienced Cloud Data Engineer with strong expertise in building scalable data pipelines on AWS. The ideal candidate has hands-on experience with ETL development, big data processing, workflow orchestration, and both structured and unstructured data. Strong communication and collaboration skills are essential.
Job Responsibilities:
Design and develop scalable ETL pipelines using AWS cloud services
Build data ingestion frameworks for structured and unstructured (OCR-based) data
Develop and optimize SQL queries and data transformation logic
Orchestrate workflows using Airflow
Develop big data solutions using Python and PySpark
Build and maintain streaming data pipelines
Ensure data quality, performance, and reliability
Collaborate with cross-functional teams to gather requirements
Document architecture, workflows, and processes
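The core of the responsibilities above is the ETL pattern: extract raw records, apply transformation and validation logic, and load the result into a queryable store. A minimal sketch of that pattern, using Python's built-in sqlite3 as a stand-in for a real warehouse (the table name, record shape, and cleaning rules are illustrative assumptions, not part of this posting):

```python
import sqlite3

# Illustrative raw records; a real pipeline would ingest these from S3, Kinesis, etc.
raw_records = [
    {"id": 1, "amount": " 100.50 ", "country": "us"},
    {"id": 2, "amount": "75", "country": "FR"},
    {"id": 3, "amount": None, "country": "de"},  # invalid row: missing amount
]

def transform(record):
    """Clean one record; return None to drop rows that fail validation."""
    if record["amount"] is None:
        return None
    return (record["id"], float(record["amount"].strip()), record["country"].upper())

def run_etl(records, conn):
    """Load cleaned records into the target table; return the row count loaded."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL, country TEXT)")
    cleaned = [row for row in (transform(r) for r in records) if row is not None]
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", cleaned)
    return len(cleaned)

conn = sqlite3.connect(":memory:")
loaded = run_etl(raw_records, conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

In the role described here, this kind of logic would typically run as a PySpark job orchestrated by an Airflow DAG, with AWS services such as S3 and Redshift in place of the in-memory database.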
Requirements:
Strong experience with AWS (ETL, data services)
Expertise with structured and unstructured (OCR) data
Knowledge of streaming data tools
Proficiency in SQL
Strong Python programming skills
Hands-on experience with PySpark
Experience with Airflow for workflow orchestration
Excellent communication skills
Nice to have:
Experience with AWS Glue, Lambda, S3, Redshift, EMR, Kinesis