We are looking for a bright and exceptional Data Engineer to join our Technology team. In this role, you will be part of our Data Engineering team, working with product managers and subject matter experts to design, build, and optimize scalable data pipelines for our Data Platform. The ideal candidate has strong hands-on experience in distributed data processing and the operational skills to drive efficiency and speed.
Job Responsibilities:
Build and support connectors and data pipelines for data ingestion and processing on the in-house data platform to meet business requirements
Demonstrate strong SQL skills to write complex transformations
Demonstrate deep knowledge in Python and PySpark to code ETL blocks for the data pipelines
Troubleshoot and optimize ETL pipelines to minimize execution overheads
Drive improvements in performance, reliability, and scalability of data pipelines
Mentor other team members
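To illustrate the kind of ETL block the responsibilities above describe, here is a minimal sketch in plain Python. The record layout, field names, and in-memory source are all hypothetical; a production pipeline on the data platform would typically use PySpark and a real connector instead:

```python
from typing import Iterable

# Hypothetical raw records, standing in for rows ingested from an API or SFTP drop
RAW_EVENTS = [
    {"user_id": "u1", "amount": "19.99", "currency": "usd"},
    {"user_id": "u2", "amount": "5.00", "currency": "USD"},
    {"user_id": "u1", "amount": "bad-value", "currency": "USD"},  # malformed row
]

def transform(records: Iterable[dict]) -> list[dict]:
    """Normalize currency codes and parse amounts, dropping malformed rows."""
    clean = []
    for rec in records:
        try:
            amount = float(rec["amount"])
        except ValueError:
            continue  # skip rows that fail parsing rather than aborting the load
        clean.append({
            "user_id": rec["user_id"],
            "amount": amount,
            "currency": rec["currency"].upper(),
        })
    return clean

def load(rows: list[dict], sink: list) -> int:
    """Append transformed rows to a sink (a list here; a warehouse table in practice)."""
    sink.extend(rows)
    return len(rows)

if __name__ == "__main__":
    sink: list = []
    loaded = load(transform(RAW_EVENTS), sink)
    print(f"loaded {loaded} of {len(RAW_EVENTS)} rows")
```

The skip-and-continue handling of malformed rows is one common design choice; depending on requirements, a pipeline might instead route bad rows to a quarantine table for inspection.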
Requirements:
Bachelor's degree in Computer Science, Mathematics, or another technical field
2-4 years of relevant experience
Experience building pipelines and ingesting data from APIs, SFTP, databases, etc.
Strong proficiency in Python and PySpark for data engineering tasks
Ability to write complex queries for data transformations and analysis
Basic understanding of data modeling and data warehousing
Solid understanding of CI/CD, version control, and DevOps practices
Excellent problem-solving and troubleshooting skills
Proven track record of working in an agile and collaborative environment
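As a sketch of the kind of complex query the SQL requirement refers to, the following uses Python's stdlib sqlite3 with a window function to keep only the latest record per account, a common deduplication transformation in ingestion pipelines. The table, columns, and data are hypothetical:

```python
import sqlite3

# Hypothetical table: multiple snapshots per account; we want the latest one each.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE balances (account TEXT, balance REAL, updated_at TEXT);
    INSERT INTO balances VALUES
        ('a1', 100.0, '2024-01-01'),
        ('a1', 150.0, '2024-02-01'),
        ('a2',  40.0, '2024-01-15');
""")

LATEST_PER_ACCOUNT = """
    SELECT account, balance
    FROM (
        SELECT account, balance,
               ROW_NUMBER() OVER (
                   PARTITION BY account ORDER BY updated_at DESC
               ) AS rn
        FROM balances
    )
    WHERE rn = 1
    ORDER BY account;
"""

rows = conn.execute(LATEST_PER_ACCOUNT).fetchall()
print(rows)  # expected: [('a1', 150.0), ('a2', 40.0)]
```

The same PARTITION BY / ROW_NUMBER pattern carries over directly to warehouse engines such as Athena.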
Nice to have:
Fintech domain experience
Experience with the AWS stack, especially AWS Glue, Athena, and AWS Lambda
Familiarity with Airflow, ADF, Glue Workflow, etc.
Demonstrated contribution to end-to-end delivery of enterprise-grade software