The Data Engineer will play a crucial role in migrating data from an on-premises data lake to an AWS-hosted lakehouse. The position requires 3-5 years of experience in data engineering, with strong skills in Python and SQL. The candidate will engage with stakeholders to ensure data integrity and will be responsible for translating legacy consumption patterns for compatibility with modern tools such as Snowflake and Apache Iceberg. A Bachelor’s or Master’s degree in a relevant field is required.
Job Responsibilities:
The engineer will be part of the datastore-migration Factory team responsible for the end-to-end datastore migration from the on-premises data lake to the AWS-hosted lakehouse
Pipeline Migration: Refactoring and migrating extraction logic and job scheduling from legacy frameworks to the new Lakehouse environment (a minimal scheduling sketch follows this list)
Data Transfer: Executing the physical migration of underlying datasets while ensuring data integrity
Stakeholder Engagement: Acting as a technical liaison to internal clients, facilitating "handoff and sign-off" conversations with data owners to ensure migrated assets meet business requirements
Consumption Pattern Migration: Translating and optimizing legacy SQL and Spark-based consumption patterns (raw and modeled) for compatibility with Snowflake and Iceberg (see the translation sketch after this list)
Usage Analysis: Analyzing usage patterns to deliver the required data products
Data Reconciliation & Quality: Working with reconciliation frameworks to build confidence that migrated data is functionally equivalent to the data already used within production flows (see the reconciliation sketch after this list)
Platform Breadth: Working with our other internal data management platforms, with an aptitude for learning new workflows and language constructs as necessary
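The sketches below are illustrative only, not details from this posting. First, a minimal sketch of what an extraction job might look like once its scheduling is migrated from a legacy framework to a modern orchestrator; Apache Airflow is an assumption, and every name in the DAG (the dag_id, the extract_orders callable, the schedule) is hypothetical.

```python
# A minimal sketch of a migrated extraction job, assuming Apache Airflow
# as the target scheduler. All names here are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    # Placeholder for the legacy extraction logic being refactored;
    # in practice this would read from the source system and land the
    # data in the lakehouse (e.g., as Parquet or Iceberg).
    print("extracting orders ...")


with DAG(
    dag_id="orders_extraction",     # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",              # replaces the legacy cron entry
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_orders", python_callable=extract_orders)
```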
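Next, a sketch of translating a legacy Hive-backed Spark consumption pattern to an Iceberg table read through Spark SQL. The catalog configuration, warehouse path, and table names (lake.sales.orders) are hypothetical, and the Iceberg Spark runtime jar is assumed to be on the classpath.

```python
# A minimal sketch of migrating a Hive-on-HDFS consumption pattern to an
# Iceberg table read via Spark SQL. Catalog, schema, and table names are
# hypothetical; adjust the catalog config to the target environment.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("consumption-pattern-migration")
    # Assumed Iceberg catalog configuration (Hadoop-style catalog).
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-bucket/warehouse")
    .getOrCreate()
)

# Legacy pattern: spark.sql("SELECT ... FROM hive_db.orders")
# Migrated pattern: the same ANSI SQL, now against the Iceberg catalog.
daily_totals = spark.sql(
    """
    SELECT order_date, SUM(amount) AS total_amount
    FROM lake.sales.orders
    GROUP BY order_date
    """
)
daily_totals.show()
```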
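Finally, a minimal reconciliation check of the kind the team's reconciliation frameworks would automate: comparing row counts and a column aggregate between the legacy source and the migrated target. The table and column names are hypothetical stand-ins, and a real framework would run far richer checks.

```python
# An illustrative functional-equivalence check between a legacy Hive table
# and its migrated Iceberg counterpart. Names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("reconciliation-check").getOrCreate()

SOURCE = "hive_db.orders"      # legacy Hive table (hypothetical)
TARGET = "lake.sales.orders"   # migrated Iceberg table (hypothetical)


def profile(table: str) -> dict:
    """Collect cheap fingerprints of a table: row count and a column sum."""
    row = (
        spark.table(table)
        .agg(F.count("*").alias("rows"), F.sum("amount").alias("amount_sum"))
        .first()
    )
    return {"rows": row["rows"], "amount_sum": row["amount_sum"]}


source_stats, target_stats = profile(SOURCE), profile(TARGET)
assert source_stats == target_stats, f"mismatch: {source_stats} vs {target_stats}"
print("tables are functionally equivalent on these checks")
```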
Requirements:
Bachelor’s or Master’s degree in Computer Science, Applied Mathematics, Engineering, or a related quantitative field
Minimum of 3-5 years of professional "hands-on-keyboard" coding experience in a collaborative, team-based environment
Ability to troubleshoot SQL, plus basic scripting experience
Professional proficiency in Python or Java
Deep familiarity with the full Software Development Life Cycle (SDLC), CI/CD best practices, and Kubernetes (K8s) deployment
Sophisticated understanding of Temporal Data Modeling, Schema Management, Performance Optimization, and Architectural Theory
Experience with technologies: Kafka, ANSI SQL, FTP, Apache Spark, JSON, Avro, Parquet, Hadoop (HDFS/Hive), Snowflake, Apache Iceberg, Sybase IQ