We're looking for an engineer to support hands-on implementation and migration work as we evolve our data processing stack. This is a high-impact, execution-focused engagement: you'll be contributing to company-wide migration efforts and platform development.

What You'll Work On
You'll be embedded in a team in Data Infrastructure PA, contributing to hands-on engineering work. This includes large-scale pipeline migrations (validating performance and cost outcomes and helping move workloads to our evolving stack) as well as platform development across our Flink platform, Lakehouse architecture, and beyond as our priorities evolve.

What We're Looking For
You have solid, hands-on experience in backend engineering and are comfortable jumping into an existing platform codebase and making meaningful contributions quickly.
Job Responsibility:
Support hands-on implementation and migration work as we evolve our data processing stack
Contribute to company-wide migration efforts and platform development
Work embedded in a team in Data Infrastructure PA on hands-on engineering, including large-scale pipeline migrations: validating performance and cost outcomes and helping move workloads to our evolving stack
Contribute to platform development across our Flink platform, Lakehouse architecture and beyond
Requirements:
Strong Java development skills, with experience in data platform or data engineering contexts
Practical experience with at least one JVM-based data processing framework; Flink experience is a plus, and Beam, Dataflow, or Spark are also relevant
Comfortable with SQL and cloud data analytics platforms, particularly BigQuery
DevOps is part of your day-to-day: you work with cloud infrastructure and containerised applications, and you are familiar with Kubernetes basics
Experience working with data engineering pipelines in Scala and/or Python
You write quality code and understand what it means to ship reliably in a production environment
You can work autonomously in an ambiguous environment and move quickly without waiting to be directed
Nice to have:
Prior experience with large-scale pipeline migrations
Familiarity with cost optimisation in cloud data processing workloads
What we offer:
Access to benefits package at attractive prices (medical care, Multisport card, life insurance, cafeteria system)