We are seeking a Data Engineer responsible for designing, building, and maintaining scalable data pipelines and analytics solutions. This role focuses on enabling reliable data integration, ensuring high data quality, and supporting advanced analytics across the organization.
Job Responsibilities:
Design, develop, and maintain data pipelines and ETL/ELT processes
Integrate data from multiple sources into centralized data platforms
Ensure data quality, integrity, and consistency across systems
Build and optimize data models and data warehousing solutions
Support analytics and reporting by delivering clean, curated datasets
Collaborate with cross-functional teams including data scientists, analysts, and software engineers
Monitor and troubleshoot data workflows and pipeline performance
Requirements:
Strong foundation in software engineering with proficiency in SQL and Python
Experience with ETL/ELT tools and frameworks
Experience with relational databases (PostgreSQL, SQL Server, MySQL, Oracle)
Experience with NoSQL databases (MongoDB, Neo4j)
Hands-on experience building multi-source data pipelines using tools such as Qlik or AWS Glue
Experience working with cloud platforms, preferably AWS
Solid understanding of data warehousing concepts
Solid understanding of data quality and data governance practices
Programming experience in Java or C++
Familiarity with machine learning frameworks (TensorFlow, PyTorch, scikit-learn)
Knowledge of DevOps practices and tools (GitHub Actions, Terraform)
Experience with containerization and deployment (Docker, Kubernetes)
Exposure to API development
Experience with big data technologies (Hadoop, Spark)