Our client develops and manufactures technologically advanced products from concept through production, and they are seeking a Data Engineer to help accelerate the modernization of their data platform. You will work across on-prem and cloud environments to operationalize and expand the enterprise Data Lake. If you enjoy turning complex data into trusted insight, you’ll fit right in. As a Data Engineer, you will own ingestion and transformation workflows, collaborate with software engineers and domain experts, and enable analytics, AI, and decision-making across the organization.
Job Responsibilities:
Build and operate batch, streaming, and change data capture (CDC) pipelines from diverse sources (ERP, CRM, Accounting, knowledge repositories, and other enterprise systems) into the data lake
Co-design data interfaces and pipelines with software engineers and technical leads, ensuring alignment with application domain models and product roadmaps
Model curated data within the lake into data warehouse structures (star schemas, wide tables, semantic layers) optimized for BI, ad-hoc analytics, and KPI reporting
Publish certified datasets and policy-aware retrieval assets (tables, document embeddings, vector indexes) to enable analytics, AI, and retrieval-augmented generation (RAG) use cases
Establish robust data observability and quality checks to ensure reliability and consistency
Apply governance, security, and compliance controls across the data lake and warehouse, including role-based access, encryption, auditing, and data retention, in alignment with applicable regulations
Operate the platform reliably by orchestrating jobs, monitoring pipelines, and continuously tuning cost and performance
Work in accordance with our client's values, demonstrating collaboration, continuous improvement, and technical excellence in every aspect of data engineering
Requirements:
Bring 5–8+ years of experience building production-grade data systems
Strong cloud data lake and data warehouse knowledge
Hands-on Python and SQL skills with distributed processing frameworks like Apache Spark
Expertise in designing and implementing ETL/ELT workflows and data modeling techniques
Familiarity with AWS, Databricks, Snowflake, and open table formats such as Iceberg, Delta, or Hudi
Communicate clearly, work well with cross-functional teams, and thrive in a fast-paced environment