You will join the OneMIS stream, responsible for management, regulatory and risk reporting, and advanced analytics. Our mission includes enhancing data quality via KPIs and migrating data platforms to modern, cloud-native ecosystems. We operate in an agile environment and are committed to responsible data practices.

We are looking for a Senior Data Engineer to design and deliver scalable data pipelines and high-performance analytical solutions using SQL/BigQuery, Spark/PySpark, and Python on Google Cloud. This role focuses on building reliable, cloud-native data products that enable advanced reporting, analytics, and decision-making across the organization. A minimum of 5 years of experience in data engineering and a university degree in computer science are required. Join us to make a significant impact in data management and analytics.
Job Responsibilities:
Build scalable data pipelines: Design and deliver batch and real-time ETL/ELT pipelines across cloud environments to support analytics and reporting
Develop SQL and BigQuery solutions: Write and optimize advanced SQL transformations and build performant, cost‑efficient BigQuery data models
Develop Python workflows: Implement scalable data processing solutions using Python and PySpark, ensuring maintainable and high‑quality code
Design data models and ensure quality: Build robust data models and apply validation practices to maintain accuracy and reliability
Build cloud‑native data solutions: Use GCP services such as BigQuery, Dataflow, Cloud Composer, Pub/Sub, and GCS to build and operate modern data platforms
Optimize performance and reliability: Troubleshoot complex pipeline issues and continuously improve compute, storage, and processing performance
Collaborate using strong engineering practices: Work with engineering, analytics, and business teams while contributing to CI/CD, code reviews, and testing standards
Requirements:
University degree in computer science or a comparable qualification
At least 5 years of experience as a Data Engineer, building scalable data pipelines and working with cloud-based data ecosystems
Strong expertise in SQL and hands‑on experience building performant datasets in BigQuery (or similar cloud data warehouses)
Proven experience with Python and PySpark for scalable data processing in distributed environments
Solid understanding of data modeling, ELT/ETL patterns, and data quality best practices
Experience with Google Cloud Platform, particularly BigQuery, Dataflow, Cloud Composer, GCS, or equivalent cloud data services
Hands‑on experience building scalable data pipelines (batch and near real‑time) in a cloud‑native environment
Proficiency with version control, CI/CD pipelines, and automated testing frameworks
Ability to troubleshoot and optimize performance across compute, storage, and processing layers
Nice to have:
Experience with Infrastructure as Code (Terraform, Ansible, Chef)
Knowledge of shell scripting
Experience in financial services or regulated environments
What we offer:
Smooth onboarding and a supportive mentor
Pick your working style: Remote, Hybrid, or Office
Flexible working hours that vary by project to suit your needs
Sponsored certifications, training courses, and top e-learning platforms
Private Health Insurance
Individual coaching sessions or joining our accredited Coaching School