We are seeking a highly skilled AWS Data Engineer to enhance our data and analytics environment. The successful candidate will play a pivotal role in designing, developing, and managing data solutions that leverage cloud technologies and big data frameworks. This role will involve creating efficient data pipelines, optimizing data storage solutions, and implementing robust data processing workflows to ensure high-quality data availability for analytics and business intelligence.
Job Responsibilities:
Build and maintain large-scale ETL pipelines using AWS Glue, Lambda, and Step Functions
Design and manage data lakes on Amazon S3, implementing robust schema management and lifecycle policies
Work with Apache Iceberg and Parquet formats to support efficient and scalable data storage
Develop distributed data processing workflows using PySpark
Implement secure, governed data environments using AWS Lake Formation
Build and maintain integrations using AWS API Gateway and data exchange APIs
Automate infrastructure provisioning using Terraform or CDK for Terraform (CDKTF)
Develop CI/CD pipelines and containerized solutions in line with modern DevOps practices
Implement logging, observability, and monitoring solutions to maintain reliable data workflows
Perform root cause analysis and optimize data processing for improved performance and quality
Collaborate with business intelligence teams and analysts to support reporting and analytics needs
Work in cross-functional, Agile teams and actively participate in sprint ceremonies, backlog refinement, and planning
Provide data-driven insights and recommendations that support business decision-making
Requirements:
Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience)
Minimum 3–5 years of experience in a Data Engineering role
Strong knowledge of AWS services: Glue, Lambda, S3, Athena, Lake Formation, Step Functions, DynamoDB
Proficiency in Python and PySpark for data processing, optimization, and automation
Hands-on experience with Terraform or CDKTF for Infrastructure as Code
Solid understanding of ETL development, data lakes, schema evolution, and distributed processing
Experience with CI/CD pipelines, automation, and containerization
Familiarity with API Gateway and modern integration patterns
Strong analytical and problem-solving skills
Experience working in Agile Scrum environments
Good understanding of data governance, security, and access control principles
Excellent command of both spoken and written English
Nice to have:
Experience working with Apache Iceberg and Parquet formats
Experience with visualization/BI tools such as Power BI or AWS QuickSight
Experience designing data products, implementing tag-based access control, or applying federated governance using AWS Lake Formation
Familiarity with Amazon SageMaker for AI/ML workflows
Hands-on experience with AWS QuickSight for building analytics dashboards
Exposure to data mesh architectures
Experience with container orchestration (e.g., Kubernetes, ECS, EKS)
Knowledge of modern data architecture patterns (e.g., CDC, event-driven pipelines, near-real-time ingestion)
What we offer:
Smooth onboarding with a supportive mentor
Choose from remote, hybrid, or office work options
Projects with varied working hours, so you can find one that suits your needs
Sponsored certifications, training courses, and access to top e-learning platforms
Private Health Insurance
Individual coaching sessions or access to an accredited coaching school