Join our team and start a new adventure in an international and dynamic environment, where you can pursue your career ambitions within a fast-growing organization. As a Data Engineer under the VIE program, you will be positioned at the heart of our data ecosystem, contributing to both technical and cross-functional projects. You will design, develop, and maintain scalable data pipelines, ETL/ELT workflows, and data warehousing solutions, converting raw data into reliable, high-quality datasets that support analytics and business intelligence.
Job Responsibilities:
Design and implement efficient ETL/ELT pipelines to process large volumes of data from various sources
Build and maintain data warehousing solutions and data lakes to support analytics and reporting needs
Optimize data pipelines for speed, reliability, and cost-effectiveness
Work closely with cross-functional teams to integrate data systems and ensure high-quality, reliable data
Implement monitoring tools and processes to ensure continuous pipeline operation and proactively address issues
Collaborate with BI and analytics teams to deliver actionable insights and reporting capabilities
Participate in projects implementing cloud-based data platforms (AWS, Azure, or GCP) and Databricks environments
Ensure adherence to best practices in data governance, security, and architecture
Requirements:
Master’s degree in Computer Science, Information Systems, Data Engineering, or a related field
Minimum 2–3 years of experience in data engineering, ETL development, or a similar role
Professional fluency in English and French
Proficiency in SQL and programming languages such as Python or Scala
Experience with cloud-based data platforms (AWS, Azure, or GCP)
Experience with modern data warehousing solutions, data lakes, and Databricks
Experience with big data frameworks (Apache Spark, Hadoop)
Familiarity with containerization and orchestration tools (Docker, Kubernetes)
Excellent problem-solving, analytical, and critical thinking skills, with a focus on data quality and performance optimization
Strong communication and presentation skills, with the ability to explain technical concepts to non-technical stakeholders
Proven ability to work collaboratively in a multicultural and dynamic team environment
Nice to have:
Experience with generative AI tools to support data engineering productivity and automation
What we offer:
An international learning environment
Extensive training and certifications
An R&D laboratory where you can develop your skills on innovative projects
The opportunity to bring new ideas to develop a thriving business
Individual coaching and mentoring, as well as the chance to learn from experts for your professional and personal growth
An attractive, tailor-made, and evolving career path