Goldman Sachs is seeking a Data Engineer to join its datastore-migration Factory team. This role involves migrating data from the on-prem DataLake to the AWS LakeHouse, ensuring data integrity and optimizing consumption patterns. Candidates should have a Bachelor's or Master's degree in Computer Science or a related field, with 3-5 years of experience in a collaborative environment. Proficiency in Python, Kafka, and Snowflake is essential. The position requires strong stakeholder engagement and a rigorous approach to data validation. If you are ready to take on this high-visibility project, apply now!
Job Responsibilities:
Pipeline Migration:
Logic & Scheduling: Refactoring and migrating extraction logic and job scheduling from legacy frameworks to the new Lakehouse environment
Data Transfer: Executing the physical migration of underlying datasets while ensuring data integrity
Stakeholder Engagement: Acting as a technical liaison to internal clients, facilitating handoff and sign-off conversations with data owners to ensure migrated assets meet business requirements
Consumption Pattern Migration:
Code Conversion: Translating and optimizing legacy SQL and Spark-based consumption patterns for compatibility with Snowflake and Iceberg
Usage Analysis: Understanding usage patterns to deliver the required data products
Data Reconciliation & Quality: Applying a rigorous approach to data validation and working with reconciliation frameworks (a minimal sketch follows this list)
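
To make the reconciliation responsibility concrete, here is a minimal sketch, not the team's actual framework: it compares row counts and per-column sums between a source and a target copy of a table through generic DB-API connections. The table name, column names, and the sqlite3 demo are hypothetical stand-ins for the real legacy and Lakehouse endpoints.

    # Minimal reconciliation sketch (hypothetical names, not the real framework).
    import sqlite3

    def reconcile(src_conn, tgt_conn, table, numeric_cols):
        """Compare row counts and column sums between two copies of a table."""
        report = {}
        for name, conn in (("source", src_conn), ("target", tgt_conn)):
            cur = conn.cursor()
            cur.execute(f"SELECT COUNT(*) FROM {table}")
            count = cur.fetchone()[0]
            sums = []
            for col in numeric_cols:
                cur.execute(f"SELECT SUM({col}) FROM {table}")
                sums.append(cur.fetchone()[0])
            report[name] = (count, tuple(sums))
        return report["source"] == report["target"], report

    if __name__ == "__main__":
        # In-memory sqlite3 databases stand in for the legacy and target stores.
        src, tgt = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
        for conn in (src, tgt):
            conn.execute("CREATE TABLE trades (qty INTEGER, price REAL)")
            conn.executemany("INSERT INTO trades VALUES (?, ?)",
                             [(10, 99.5), (20, 101.0)])
        ok, report = reconcile(src, tgt, "trades", ["qty", "price"])
        print(ok, report)

A production framework would add tolerances for floating-point aggregates and partition-level checks; the structure of the comparison is the same.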
Requirements:
Bachelor's or Master's degree in Computer Science, Applied Mathematics, Engineering, or a related quantitative field
3-5 years of professional hands-on coding experience in a collaborative, team-based environment
Ability to troubleshoot SQL, plus basic scripting experience
Professional proficiency in Python or Java
Deep familiarity with the full Software Development Life Cycle (SDLC) and CI/CD best practices, plus Kubernetes (K8s) deployment experience
Understanding of Temporal Data Modeling, Schema Management, Performance Optimization, and Architectural Theory
Knowledge of Kafka, ANSI SQL, FTP, and Apache Spark
Knowledge of data formats: JSON, Avro, and Parquet (see the sketch after this list)
Knowledge of platforms: Hadoop (HDFS/Hive), Snowflake, Apache Iceberg, and Sybase IQ
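
To make the data-format requirement concrete, here is a minimal sketch assuming the fastavro and pyarrow packages; the schema, file names, and sample records are hypothetical. It writes a few Avro records and rewrites them as Parquet via an Arrow table, the kind of format hop a lakehouse migration involves.

    # Avro-to-Parquet sketch (hypothetical schema and file names).
    import fastavro
    import pyarrow as pa
    import pyarrow.parquet as pq

    schema = {
        "name": "Trade", "type": "record",
        "fields": [{"name": "symbol", "type": "string"},
                   {"name": "qty", "type": "int"}],
    }
    records = [{"symbol": "GS", "qty": 100}, {"symbol": "AAPL", "qty": 50}]

    # Write sample records as Avro, then read them back as dicts.
    with open("trades.avro", "wb") as out:
        fastavro.writer(out, fastavro.parse_schema(schema), records)
    with open("trades.avro", "rb") as src:
        rows = list(fastavro.reader(src))

    # Re-materialize the same rows as a Parquet file via an Arrow table.
    pq.write_table(pa.Table.from_pylist(rows), "trades.parquet")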