Goldman Sachs is seeking a Data Engineer to join its datastore-migration Factory team. The role involves migrating data from an on-prem DataLake to an AWS LakeHouse, ensuring data integrity and optimizing consumption patterns. Candidates should have a Bachelor's or Master's degree in Computer Science or a related field and 3-5 years of experience in a collaborative environment. Proficiency in Python, Kafka, and Snowflake is essential. The position requires strong stakeholder engagement and a rigorous approach to data validation. If you are ready to take on this high-visibility project, apply now!
Job Responsibilities:
Performing end-to-end datastore migration from the on-prem DataLake to the AWS-hosted LakeHouse
Refactoring and migrating extraction logic and job scheduling from legacy frameworks to the new LakeHouse environment
Executing the physical migration of underlying datasets while ensuring data integrity
Acting as a technical liaison to internal clients, facilitating handoff and sign-off conversations with data owners to ensure migrated assets meet business requirements
Translating and optimizing legacy SQL and Spark-based consumption patterns (raw and modeled) for compatibility with Snowflake and Iceberg
Understanding usage patterns to deliver the required data products
Working with reconciliation frameworks to build confidence that migrated data is functionally equivalent to that already used within production flows (a minimal reconciliation sketch follows this list)
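The reconciliation item above is illustrated below with a minimal, self-contained Python sketch that compares row counts and simple per-column aggregates between a legacy table and its migrated counterpart. The table names, the sqlite3 stand-in for the real DataLake/LakeHouse engines, and the fingerprinting strategy are assumptions for illustration only; a production reconciliation framework would be considerably more thorough.

```python
# Minimal reconciliation sketch (illustrative only): compare a legacy table
# and its migrated counterpart by row count and per-column aggregates.
# Table names and the sqlite3 stand-in for real engines are assumptions.
import sqlite3
from typing import Callable


def table_fingerprint(run_query: Callable[[str], list], table: str, columns: list[str]) -> dict:
    """Collect row count plus SUM / COUNT(DISTINCT) per column as a cheap fingerprint."""
    fingerprint = {"row_count": run_query(f"SELECT COUNT(*) FROM {table}")[0][0]}
    for col in columns:
        total, distinct = run_query(
            f"SELECT COALESCE(SUM({col}), 0), COUNT(DISTINCT {col}) FROM {table}"
        )[0]
        fingerprint[col] = (total, distinct)
    return fingerprint


def reconcile(legacy: dict, migrated: dict) -> list[str]:
    """Return human-readable mismatches; an empty list means the fingerprints agree."""
    return [
        f"{key}: legacy={legacy[key]} migrated={migrated.get(key)}"
        for key in legacy
        if legacy[key] != migrated.get(key)
    ]


if __name__ == "__main__":
    # Two in-memory tables stand in for the legacy DataLake and the new LakeHouse.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE trades_legacy (trade_id INTEGER, notional REAL)")
    conn.execute("CREATE TABLE trades_migrated (trade_id INTEGER, notional REAL)")
    rows = [(1, 100.0), (2, 250.5), (3, 75.25)]
    conn.executemany("INSERT INTO trades_legacy VALUES (?, ?)", rows)
    conn.executemany("INSERT INTO trades_migrated VALUES (?, ?)", rows)

    def run(sql: str) -> list:
        return conn.execute(sql).fetchall()

    mismatches = reconcile(
        table_fingerprint(run, "trades_legacy", ["trade_id", "notional"]),
        table_fingerprint(run, "trades_migrated", ["trade_id", "notional"]),
    )
    print("tables match" if not mismatches else "\n".join(mismatches))
```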
Requirements:
Bachelor's or Master's degree in Computer Science, Applied Mathematics, Engineering, or a related quantitative field
3-5 years of professional, hands-on coding experience in a collaborative, team-based environment
SQL troubleshooting ability and basic scripting experience
Professional proficiency in Python or Java
Deep familiarity with the full Software Development Life Cycle (SDLC), CI/CD best practices, and Kubernetes (K8s) deployment
Sophisticated understanding of Temporal Data Modeling (SCD Type 2); an illustrative sketch follows this list
Expertise in Schema Evolution (Apache Iceberg) and enforcement strategies; see the schema-evolution sketch after this list
Advanced knowledge of data partitioning and clustering
Experience balancing Normalization vs. Denormalization and making strategic use of Natural vs. Surrogate Keys
Strong stakeholder engagement skills
Experience working with reconciliation frameworks
Ability to learn new workflows and language constructs
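On the SCD Type 2 point above: the idea is to preserve history by end-dating the current version of a record and inserting a new version whenever a tracked attribute changes. The sketch below is a self-contained Python illustration under assumed field names (customer_id, address, valid_from, valid_to, is_current); in practice this would typically be expressed as an SQL MERGE against the dimension table.

```python
# Illustrative SCD Type 2 upsert over an in-memory "dimension table".
# Field names are hypothetical; a real pipeline would express this as a MERGE.
from dataclasses import dataclass
from datetime import date

HIGH_DATE = date(9999, 12, 31)  # conventional open-ended end date


@dataclass
class DimRow:
    customer_id: int
    address: str
    valid_from: date
    valid_to: date = HIGH_DATE
    is_current: bool = True


def scd2_upsert(dimension: list[DimRow], customer_id: int, new_address: str, as_of: date) -> None:
    """End-date the current version if the tracked attribute changed, then append a new one."""
    current = next((r for r in dimension if r.customer_id == customer_id and r.is_current), None)
    if current is not None and current.address == new_address:
        return  # nothing changed, keep the existing current version
    if current is not None:
        current.valid_to = as_of   # close out the old version
        current.is_current = False
    dimension.append(DimRow(customer_id, new_address, valid_from=as_of))


if __name__ == "__main__":
    dim: list[DimRow] = []
    scd2_upsert(dim, 42, "10 Main St", date(2023, 1, 1))
    scd2_upsert(dim, 42, "55 Broad St", date(2024, 6, 1))  # address change -> new version
    for row in dim:
        print(row)
```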
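For the Apache Iceberg schema-evolution item, the snippet below sketches how a column might be added and another renamed with the pyiceberg client. The catalog name, table identifier, and column names are assumptions, and a configured catalog is presumed to exist; because Iceberg tracks columns by ID, operations like these change table metadata only and do not rewrite existing data files.

```python
# Hypothetical schema-evolution sketch using pyiceberg.
# Catalog, table, and column names are assumptions for illustration.
from pyiceberg.catalog import load_catalog
from pyiceberg.types import StringType

catalog = load_catalog("lakehouse")            # assumes a catalog named "lakehouse" is configured
table = catalog.load_table("markets.trades")   # hypothetical table identifier

with table.update_schema() as update:
    update.add_column("settlement_venue", StringType())  # add a new optional column
    update.rename_column("notional", "notional_usd")     # rename; existing data files are untouched
```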