We are looking for a Senior Software Engineer who is hungry to learn continuously and to take ownership of the tools and products they work on, in accordance with Agile software development best practices. We believe a good engineer can move from one project to another and pick up new skills quickly when motivated by the work. Don't let the language or framework you know today be a barrier to applying for the role you want! We will lean on your technical expertise and pragmatic approach to problem solving as you work in a team that prioritizes Agile delivery and continuous improvement. You will bring a data-driven, evidence-based mindset and be comfortable with the principles of continuous experimentation and validation.
Job Responsibilities:
Design, develop, and maintain scalable, robust data pipelines for the extraction, transformation, and loading (ETL) of large datasets
Optimize data processing jobs and database queries for maximum performance and efficiency
Implement and manage data storage solutions, including data warehouses, data lakes, and other distributed systems
Write clean, maintainable, and efficient code for data ingestion, processing, and analysis
Troubleshoot and debug data pipeline issues, ensuring data quality and integrity
Stay up-to-date with emerging data engineering technologies, tools, and best practices
Mentor and support other team members in data engineering principles and practices
Requirements:
Proficiency in Java, Python, Kotlin, or Scala
Experience with big data technologies such as Spark, Hadoop, and Kafka
Familiarity with workflow orchestration tools like Apache Airflow
Experience with streaming data platforms such as Kafka, Pulsar, or Flink
Experience building data pipelines and data assets deployed on AWS services such as EMR, EMR Serverless, and AWS Glue
Experience with relational and NoSQL databases
Familiarity with version control systems (e.g., Git)
Excellent problem-solving skills and attention to detail
Ability to work independently and as part of a team
Strong communication skills
Nice to have:
Knowledge of modern data pipelines and orchestration tools (e.g., Airflow, Dagster)
Familiarity with Agile development methodologies
Experience with data quality and testing frameworks