As a Senior Data Engineer, you'll design, build, and operate scalable, reliable data pipelines and data infrastructure. Your work will ensure high-quality data is accessible, trusted, and ready for analytics and data science, powering business insights and decision-making across the company.
Job Responsibilities:
Build and maintain data pipelines for ingestion, transformation, and export across multiple sources and destinations
Develop and evolve scalable data architecture to meet business and performance requirements
Partner with analysts and data scientists to deliver curated, analysis-ready datasets and enable self-service analytics
Implement best practices for data quality, testing, monitoring, lineage, and reliability
Optimize workflows for performance, cost, and scalability (e.g., tuning Spark jobs, query optimization, partitioning strategies)
Ensure secure data handling and compliance with relevant data protection standards and internal policies
Contribute to documentation, standards, and continuous improvement of the data platform and engineering processes
Ensure secure, compliant handling of data and models, including access controls, auditability, and governance practices
Build and maintain MLOps automation: CI/CD for ML, environment management, artifact handling, versioning of data/models/code
Requirements:
Bachelor’s degree in Computer Science, Engineering, or a related technical field (or equivalent practical experience)
6+ years of experience as a Data Engineer, building and maintaining production-grade pipelines and datasets
Strong Python and SQL skills with a solid understanding of data structures, performance, and optimization strategies
Hands-on experience with orchestration tools (e.g., Airflow, Dagster, Databricks Workflows) and distributed processing in a cloud environment
Experience with analytical data modeling (star and snowflake schemas), data warehousing, ETL/ELT patterns, and dimensional concepts
Experience building reliable incremental data ingestion pipelines from databases and APIs
Familiarity with at least one major cloud provider (GCP, AWS, Azure) and deploying data solutions in the cloud
Familiarity with CI/CD for data pipelines, IaC (Terraform), and/or DataOps practices
Strong troubleshooting mindset: ability to debug issues across data, infra, pipelines, and deployments
Collaborative mindset and clear communication across engineering, analytics, and business stakeholders