This contract-to-hire role (one-year contract) offers the opportunity to work in an Agile environment alongside cross-functional teams to design and optimize data architecture, drive innovation in data pipeline development, and support data-driven decision-making across the organization. The Senior Data Engineer plays a pivotal role in designing, building, and maintaining scalable, reliable data infrastructure. This position supports development teams, analysts, and data scientists by creating solutions that enable efficient data access and transformation. The role also includes mentoring junior engineers, contributing to the company's data architecture strategy, and ensuring compliance with best practices in cloud infrastructure and data governance.
Job Responsibilities:
Design, build, and manage scalable data pipelines using AWS services such as Glue, Lambda, EC2, S3, Redshift, and Delta Lake
Develop and deploy infrastructure as code (IaC) using Terraform to automate and manage cloud-based data services
Collaborate with cross-functional teams to gather requirements and translate them into efficient data solutions
Implement and optimize data flow and architecture to support data analytics, reporting, and business intelligence efforts
Build analytics tools and solutions to drive business insights and decision-making
Participate in the continuous integration and continuous deployment (CI/CD) process
Maintain existing systems through regular updates, troubleshooting, and performance optimization
Contribute to Agile development cycles, including sprint planning, reviews, and retrospectives
Provide technical mentorship and participate in code reviews to maintain high-quality development standards
Support and troubleshoot issues in production systems, including off-hours support when required
Ensure compliance with security and privacy standards in all data handling and processing
Requirements:
5+ years of experience developing in Python and building scalable data pipelines
3+ years of hands-on experience with AWS data-focused services and infrastructure as code (IaC) practices
Expertise in PySpark, Terraform (including modules), and AWS services such as Glue, Lambda, RDS, Redshift, DynamoDB, Athena, and S3
Strong understanding of CI/CD practices and version control using GitHub or similar tools
Familiarity with microservices, stream processing, message queuing, and scalable storage solutions
Proven ability to extract, manipulate, and transform large data sets for business insights
Experience applying Agile methodologies (Scrum/Kanban, Test-Driven Development) in a collaborative setting
Strong communication and organizational skills, with the ability to mentor peers and articulate complex solutions
Bachelor’s degree in Computer Science, Information Systems, or a related field preferred
Nice to have:
AWS certifications (e.g., AWS Certified Data Analytics – Specialty)