This role supports and enhances enterprise business intelligence and analytics environments, focusing on designing, building, and maintaining scalable data pipelines and cloud‑based data platforms using AWS services. The ideal candidate brings deep hands‑on experience with AWS Glue, PySpark, Redshift, and serverless architectures, along with strong SQL and data analysis skills, and will collaborate closely with architecture, security, compliance, and development teams to ensure data solutions are performant, secure, and compliant with regulatory requirements.
Job Responsibilities:
Design, build, and maintain scalable ETL/ELT pipelines using AWS Glue with PySpark for large‑scale data processing
Develop and support serverless integrations using AWS Lambda for event‑driven workflows and system integrations
Design and optimize Amazon Redshift data warehouse solutions, including advanced SQL analytics, stored procedures, and performance tuning
Lead implementation of secure vendor file transfer and ingestion solutions using AWS Transfer Family
Design and implement database migration and replication pipelines using AWS Database Migration Service (DMS)
Build and manage workflow orchestration using Apache Airflow or similar orchestration tools
Analyze data quality, transformation logic, and pipeline performance using SQL and data analysis techniques
Troubleshoot and resolve production data pipeline and integration issues across AWS services
Provide technical guidance to development team members on AWS best practices, cost optimization, and performance optimization
Partner with enterprise architecture, security, and compliance teams to ensure SOX and regulatory compliance
Requirements:
Bachelor’s degree in Computer Science, Information Technology, Data Engineering, or equivalent practical experience
7+ years of experience building enterprise data platforms or data engineering solutions
5+ years of hands‑on experience with AWS cloud services
Strong hands‑on experience with AWS Glue and PySpark for ETL processing
Experience developing serverless applications using AWS Lambda
Deep expertise with Amazon Redshift, including performance tuning and advanced SQL
Experience with workflow orchestration tools such as Apache Airflow
Experience implementing secure vendor integrations using AWS Transfer Family
Experience designing and supporting data migration and replication pipelines using AWS DMS
Advanced SQL skills and experience analyzing complex datasets
Knowledge of AWS security, encryption, IAM policies, and compliance considerations
Strong troubleshooting, problem‑solving, and analytical skills
Nice to have:
Experience working in financial services or other regulated environments
Experience supporting enterprise BI or analytics platforms
Strong communication skills and ability to work cross‑functionally with technical and non‑technical teams
What we offer:
Medical, vision, dental, life, and disability insurance