We’re seeking a Senior Data Engineer who is equal parts builder, teacher, and problem solver. You will architect and deliver modern, cloud-first data platforms; mentor teammates; and drive standards that improve reliability, performance, and developer experience. You’ll work across batch and streaming patterns and across structured and semi-structured data, and you’ll collaborate closely with analysts, product, and engineering teams to turn requirements into scalable, maintainable solutions.
Job Responsibilities:
Design and evolve our data architecture leveraging Apache Iceberg on S3 and Snowflake, balancing performance, reliability, and cost
Lead continuous improvement of schemas, data models, pipelines, and engineering standards; own design docs and review forums
Plan and coordinate data migrations with zero/low downtime patterns, including backfills, cutovers, and data validation
Standardize data contracts and enforce quality checks throughout pipelines and transformations
Implement and operate Iceberg tables: catalog strategy (AWS Glue), partition transforms, schema evolution, and time-travel/snapshot management
Optimize data layout (partitioning, clustering, file sizing, compaction) to improve read/write performance and control costs across engines
Build and maintain batch and streaming pipelines using Airflow, AWS Glue, Step Functions, Lambda, Kinesis, and Snowflake
Design normalized and dimensional models; apply partitioning and clustering strategies appropriate to Iceberg and target engines
Own SQL and Spark performance tuning, job optimization, and cost governance (e.g., Snowflake warehouse sizing, query profile analysis)
Establish SLAs, lineage, tests, alerts, and runtime metrics; integrate data quality checks into CI/CD and orchestration
Mentor and pair program to elevate craftsmanship, testing, reliability, and operational excellence
Promote security best practices, governance, and compliance-by-default patterns for sensitive data
Provide documentation, code examples, and training that enable partners to self-serve; champion code reviews and design best practices
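As an illustration of the Iceberg table-management work described above, here is a minimal PySpark sketch covering a partition transform, a schema change, snapshot inspection, and file compaction. The catalog (glue_catalog), schema, table, and column names are hypothetical placeholders, and the snippet assumes a Spark session that is already configured with an Iceberg catalog backed by AWS Glue and S3.

```python
# Minimal PySpark sketch: Iceberg table with a partition transform,
# schema evolution, snapshot inspection, and compaction. Assumes the
# Spark session's "glue_catalog" is configured as an Iceberg catalog
# backed by AWS Glue and S3; all table/column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-sketch").getOrCreate()

# Create a table partitioned by a hidden transform (days of event_ts).
spark.sql("""
    CREATE TABLE IF NOT EXISTS glue_catalog.analytics.events (
        event_id STRING,
        event_ts TIMESTAMP,
        payload  STRING
    )
    USING iceberg
    PARTITIONED BY (days(event_ts))
""")

# Schema evolution: add a column without rewriting existing data files.
spark.sql("ALTER TABLE glue_catalog.analytics.events ADD COLUMN source STRING")

# Snapshot/time-travel management: list snapshots via the metadata table.
spark.sql("""
    SELECT snapshot_id, committed_at
    FROM glue_catalog.analytics.events.snapshots
""").show()

# Compaction: rewrite small data files toward a target file size.
spark.sql("""
    CALL glue_catalog.system.rewrite_data_files(
        table => 'analytics.events',
        options => map('target-file-size-bytes', '536870912')
    )
""")
```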
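Likewise, a minimal sketch of the orchestration and quality-gate pattern mentioned above, assuming Airflow's TaskFlow API (Airflow 2.4+); the DAG id, staging path, and row-count check are hypothetical stand-ins for real extract/transform logic.

```python
# Minimal Airflow sketch: a daily batch pipeline with an inline data
# quality gate before results are considered published. The DAG id,
# staging path, and check threshold are hypothetical placeholders.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def events_pipeline():

    @task
    def extract() -> str:
        # In practice: land a batch from Kinesis/S3 into a staging prefix.
        return "s3://example-bucket/staging/events/"  # hypothetical path

    @task
    def transform(staging_path: str) -> int:
        # In practice: run a Spark/Glue job that writes to the Iceberg table
        # and return the number of rows written for the downstream check.
        return 42  # dummy row count for illustration

    @task
    def quality_check(row_count: int) -> None:
        # Fail the run (triggering alerts) if the batch looks empty/truncated.
        if row_count <= 0:
            raise ValueError("Data quality check failed: empty batch")

    quality_check(transform(extract()))

events_pipeline()
```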
Requirements:
4+ years of professional data engineering experience delivering production-grade pipelines and data platforms
Strong problem-solving and analytical skills; a track record of decomposing complex problems and shipping pragmatic solutions