Our data engineering team is looking for an experienced professional with expertise in SQL, Python, and strong data modeling skills. In this role, you will be at the heart of our data ecosystem, contributing to cross-engineering initiatives and projects and developing high-quality data pipelines and models that drive decision-making across the organization. You will play a key role in ensuring data quality, building scalable systems, and supporting cross-functional teams with clean, accurate, and actionable data.
Job Responsibilities:
Design, develop, and optimize the data services and solutions required to support various company products (e.g., FeatureStore or synthetic data management).
Work closely with data analysts, data scientists, engineers, and cross-functional teams to understand data requirements and deliver high-quality solutions.
Design and integrate LLM- and agent-based capabilities into data platforms and services, enabling smarter data operations and AI-driven data products.
Design, develop, and optimize scalable data pipelines to ensure data is clean, accurate, and ready for analysis.
Build and maintain robust data models that support clinical, business intelligence, and operational needs.
Implement and enforce data quality standards, monitoring, and best practices across systems and pipelines.
Manage and optimize large-scale data storage and processing systems to ensure reliability and performance.
Requirements:
5+ years of experience as a Data Engineer / Backend Engineer, with a strong emphasis on data processing.
Python Proficiency: Proven ability to build services, solutions, data pipelines, automations, and integration tools using Python.
SQL Expertise: Deep experience in crafting complex queries, optimizing performance, and working with large datasets.
Strong knowledge of data modeling principles and best practices for relational and dimensional data structures.
A passion for maintaining data quality and ensuring trust in business-critical data.
Nice to have:
Experience with cloud platforms (e.g., AWS, GCP, or Azure) for data engineering and storage solutions.
Knowledge of data orchestration tools (e.g., Airflow).
Understanding of CI/CD practices and version control systems.
Experience with Kubernetes.
Experience in ML/AI development.
What we offer:
Competitive compensation packages based on industry benchmarks for function, level, and geographic location.