We are seeking an experienced Senior Data Engineer with strong expertise in AWS Redshift and AWS Glue to design, develop, and maintain large-scale data solutions. The ideal candidate will have deep hands-on experience across the AWS data ecosystem and will play a critical role in building scalable, reliable, high-performance data pipelines. They will also be able to understand business requirements and translate them into effective data models and SQL solutions.
Job Responsibilities:
Design, develop, and maintain end-to-end ETL/ELT pipelines using AWS Glue, Redshift, and related AWS services
Develop and maintain robust data models by understanding business requirements and analytics needs
Write and optimize complex SQL queries to support reporting, analytics, and downstream data consumers
Optimize data warehouse schemas, data models, and queries for performance, scalability, and cost efficiency
Collaborate with data architects, analysts, and business stakeholders to translate business needs into technical data solutions
Implement data quality, governance, and security best practices across data pipelines
Automate workflows, monitoring, alerting, and performance tuning in AWS environments
Manage data ingestion from multiple structured and unstructured data sources
Troubleshoot and resolve production data issues, ensuring high availability and reliability
Requirements:
Strong hands-on experience with AWS Redshift, including performance tuning, schema design, and data warehouse optimization
Expertise in AWS Glue (Glue Studio, Glue Catalog) and PySpark for ETL development
Solid expertise in data modeling, with the ability to understand business processes and translate them into efficient data models
Strong proficiency in SQL, with proven ability to write complex, high-performance queries
Solid understanding of data warehousing concepts, ETL design patterns, and data modeling techniques (star and snowflake schemas)
Experience with AWS services such as S3, Lambda, Athena, Step Functions, and CloudWatch
Proficiency in Python for data transformation and automation
Excellent problem-solving, analytical, and communication skills
10+ years of experience
Qualification: Any degree
Nice to have:
Experience with CI/CD pipelines, Git version control, and Infrastructure as Code (CloudFormation / Terraform)