As a Senior Data Migration Engineer, you will be a core execution member of the dedicated migration squad. You will work hands-on across the full migration lifecycle – from extraction and mapping through conversion, validation, and cutover – while also actively building the tooling and accelerators that make future migrations faster and safer (e.g., artifact observability, version-controlled mapping pipelines, automated validation checks, and reusable migration templates). This is not a maintenance role: you will shape the process, own the tooling, and directly influence delivery velocity.
Job Responsibilities:
Work hands-on across the full migration lifecycle – from extraction and mapping through conversion, validation, and cutover
Actively build the tooling and accelerators that make future migrations faster and safer (e.g., artifact observability, version-controlled mapping pipelines, automated validation checks, and reusable migration templates)
Shape the process, own the tooling, and directly influence delivery velocity
Mentor and technically lead parallel migration teams
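To make the automated validation checks concrete, here is a minimal sketch of a post-migration reconciliation check. It is illustrative only: the `reconcile` function, the table and key names, and the use of in-memory SQLite connections as stand-ins for real source and target systems are all assumptions, not part of the role description.

```python
import sqlite3

def reconcile(src_conn, dst_conn, table, key):
    """Compare a migrated table against its source: row counts and key coverage.

    Returns a list of human-readable issue strings; an empty list means the
    checks passed. Table/column names are interpolated, so this sketch assumes
    trusted, internal identifiers only.
    """
    issues = []

    # Check 1: total row counts must match between source and target.
    src_count = src_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    dst_count = dst_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    if src_count != dst_count:
        issues.append(f"row count mismatch: source={src_count} target={dst_count}")

    # Check 2: every business key in the source must exist in the target.
    src_keys = {row[0] for row in src_conn.execute(f"SELECT {key} FROM {table}")}
    dst_keys = {row[0] for row in dst_conn.execute(f"SELECT {key} FROM {table}")}
    missing = src_keys - dst_keys
    if missing:
        issues.append(f"{len(missing)} key(s) missing in target")

    return issues
```

In practice a check like this would run as one step in a version-controlled validation pipeline, with its results logged per migration run so failures can be triaged and re-runs coordinated.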
Requirements:
Proven experience with large-scale data migration projects: ETL or systems integration on enterprise-scale projects, transformation pipelines, cutover planning
Experience working with XML-based data transformation (mapping files, XSLT, or equivalent config-driven ETL)
Proven ability to diagnose complex migration errors under time pressure: root-cause analysis, staging data fixes, re-run coordination
Experience coordinating with business stakeholders during UAT and cutover
Clear written and verbal communication in English
Experience with legacy database analysis: reverse-engineering un(der)documented schemas, understanding data semantics and relationships without complete documentation, and dealing with corrupted or ambiguous data or metadata
Comfortable working with SQL at an advanced level: complex queries, schema analysis, data profiling, and diagnosing data quality issues
Strong proficiency in SQL across at least two of: PostgreSQL, Oracle, SQL Server, DB2
Solid Java skills (Java 21 ecosystem preferred)
Comfortable building and extending internal tooling – data pipelines, automation scripts, validation frameworks – with clean, testable code
Familiarity with Git workflows, CI/CD pipelines, and infrastructure-as-code practices
Good exposure to cloud environments, ideally AWS infrastructure and services
Docker: confident setup, troubleshooting, and local environment management
Rapid domain understanding: ability to quickly absorb unfamiliar, regulated business domains
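The data-profiling work listed above can be sketched with a small example. This is a hypothetical illustration, not part of the role: the `profile_column` helper and the use of an in-memory SQLite database as a stand-in for a legacy schema are assumptions.

```python
import sqlite3

def profile_column(conn, table, column):
    """Basic data-quality profile of one column: null rate, cardinality, duplicates.

    Table/column names are interpolated, so this sketch assumes trusted,
    internal identifiers only.
    """
    total = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    nulls = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL"
    ).fetchone()[0]
    # COUNT(DISTINCT ...) ignores NULLs, so distinct counts non-null values only.
    distinct = conn.execute(
        f"SELECT COUNT(DISTINCT {column}) FROM {table}"
    ).fetchone()[0]
    return {
        "total_rows": total,
        "null_count": nulls,
        "distinct_values": distinct,
        # Non-null rows beyond the first occurrence of each value.
        "duplicate_rows": (total - nulls) - distinct,
    }
```

A profile like this, run over every column of an underdocumented legacy table, quickly surfaces candidate keys (zero nulls, zero duplicates) and suspect fields (high null rates, unexpected duplicates) before mapping rules are written.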