The Data Engineer role at Goldman Sachs involves migrating data from an on-premises data lake to an AWS-based lakehouse. Candidates should have 3-5 years of data engineering experience, proficiency in Python, Kafka, and Snowflake, and a Bachelor's or Master's degree in a quantitative field. Responsibilities include pipeline migration, data transfer, and stakeholder engagement; strong data modeling and performance optimization skills are essential. This is a high-visibility project that requires collaboration across teams and clear communication with stakeholders.
Job Responsibilities:
Pipeline Migration: Refactoring and migrating extraction logic and job scheduling from legacy frameworks to the new Lakehouse environment
Data Transfer: Executing the physical migration of underlying datasets while ensuring data integrity
Stakeholder Engagement: Acting as a technical liaison to internal clients, facilitating handoff and sign-off conversations with data owners to ensure migrated assets meet business requirements
Consumption Pattern Migration (Code Conversion): Translating and optimizing legacy SQL and Spark-based consumption patterns (raw and modeled) for compatibility with Snowflake and Apache Iceberg
Usage Analysis: Understanding usage patterns in order to deliver the required data products
Data Reconciliation & Quality: Working with reconciliation frameworks to build confidence that migrated data is functionally equivalent to the data already used within production flows (a minimal reconciliation sketch follows this list)
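The reconciliation work described above is the kind of check this role would be expected to automate. The following is a minimal, self-contained Python sketch of one common approach, comparing row counts and order-independent per-column checksums between a legacy extract and its migrated counterpart; the table contents, field names, and checksum scheme are illustrative assumptions, not details from the posting.

import hashlib
from typing import Any

def column_checksum(rows: list[dict[str, Any]], column: str) -> str:
    # Order-independent checksum of one column: hash each value and
    # sum the digests modulo 2**256 so row order does not matter.
    acc = 0
    for row in rows:
        digest = hashlib.sha256(repr(row.get(column)).encode()).hexdigest()
        acc = (acc + int(digest, 16)) % (1 << 256)
    return f"{acc:064x}"

def reconcile(legacy: list[dict[str, Any]],
              migrated: list[dict[str, Any]],
              columns: list[str]) -> dict[str, bool]:
    # Compare row counts and per-column checksums between the two datasets.
    report = {"row_count": len(legacy) == len(migrated)}
    for col in columns:
        report["checksum:" + col] = (
            column_checksum(legacy, col) == column_checksum(migrated, col)
        )
    return report

# Hypothetical extracts standing in for the legacy data lake table and the
# migrated Snowflake/Iceberg table (same rows, different order).
legacy_rows = [{"trade_id": 1, "notional": 100.0}, {"trade_id": 2, "notional": 250.0}]
migrated_rows = [{"trade_id": 2, "notional": 250.0}, {"trade_id": 1, "notional": 100.0}]

print(reconcile(legacy_rows, migrated_rows, ["trade_id", "notional"]))
# {'row_count': True, 'checksum:trade_id': True, 'checksum:notional': True}

In practice such comparisons would usually be pushed down as SQL aggregates to the source and target systems rather than run in application memory; the in-memory version simply keeps the sketch runnable on its own.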
Requirements:
Bachelor’s or Master’s degree in Computer Science, Applied Mathematics, Engineering, or a related quantitative field
3-5 years of professional, hands-on coding experience in a collaborative, team-based environment
Ability to troubleshoot SQL, plus basic scripting experience
Professional proficiency in Python or Java
Deep familiarity with the full Software Development Life Cycle (SDLC) and CI/CD best practices, plus Kubernetes (K8s) deployment experience
Sophisticated understanding of Temporal Data Modeling (e.g., SCD Type 2), Schema Management/Evolution (Apache Iceberg), Performance Optimization (data partitioning and clustering), and Architectural Theory (Normalization vs. Denormalization, Natural vs. Surrogate Keys)
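To make the temporal-modeling expectation above concrete, here is a minimal, self-contained Python sketch of an SCD Type 2 apply step: when a tracked attribute changes for a business key, the current dimension row is closed out and a new current row is opened. The customer/segment field names and the date convention are illustrative assumptions only, not details from the posting.

from datetime import date

def apply_scd2(dimension: list[dict], incoming: dict, as_of: date) -> None:
    # SCD Type 2: expire the current row for this business key if a tracked
    # attribute changed, then append a new current row.
    key = incoming["customer_id"]          # natural / business key
    current = next(
        (r for r in dimension
         if r["customer_id"] == key and r["valid_to"] is None),
        None,
    )
    if current is not None:
        if current["segment"] == incoming["segment"]:
            return                         # no change: keep the current row
        current["valid_to"] = as_of        # close out the old version
    dimension.append({
        "customer_id": key,
        "segment": incoming["segment"],
        "valid_from": as_of,
        "valid_to": None,                  # None marks the current row
    })

dim: list[dict] = []
apply_scd2(dim, {"customer_id": 42, "segment": "retail"}, date(2024, 1, 1))
apply_scd2(dim, {"customer_id": 42, "segment": "private"}, date(2024, 6, 1))
for row in dim:
    print(row)
# Customer 42 now has two versions: "retail" (closed 2024-06-01) and "private" (current).

In a warehouse such as Snowflake, this logic would typically be expressed as a single MERGE against the dimension table, keyed on the natural key plus a current-row indicator; the in-memory version just makes the state transitions explicit.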