We are seeking a Data Engineer to lead the design and optimization of the data foundation that underpins our core platform. In this role, you will partner with our engineering and analytics teams to build resilient, high-efficiency data pipelines that power our machine learning models and internal analytics. Your technical contributions will be the backbone of our decision-making process, ensuring the business can rely on accurate, readily available data.
Job Responsibilities:
Architect and sustain scalable ETL workflows, guaranteeing consistency and accuracy across diverse data origins
Refine and optimize data models and database structures specifically tailored for reporting and analytics
Enforce industry best practices regarding data warehousing and storage methodologies
Fine-tune data systems to handle the demands of both real-time streams and batch processing
Oversee and manage the cloud data environment, utilizing platforms such as AWS, Azure, or GCP
Coordinate with software engineers to embed data solutions directly into our product suite
Design robust processes for ingesting both structured and unstructured datasets
Script automated quality checks and deploy monitoring instrumentation to instantly detect data anomalies
Build APIs and services that ensure seamless data interoperability between systems
Continuously monitor pipeline health, troubleshooting bottlenecks to maintain an uninterrupted data flow
Embed data governance and security protocols that meet rigorous industry standards
Collaborate with data scientists and analysts to maximize the usability and accessibility of our data assets
Maintain comprehensive documentation covering schemas, transformations, and pipeline architecture
Keep a pulse on emerging trends in cloud tech, analytics, and data engineering to drive continuous improvement
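To give a flavor of the automated quality checks mentioned above, here is a minimal sketch in Python of a check that could run as a pipeline task; the record fields and thresholds are hypothetical, not part of our actual stack:

```python
# Minimal sketch of an automated data-quality check, as might run as a
# scheduled pipeline task. Field names and thresholds are illustrative.

def check_quality(rows, required_fields, max_null_rate=0.05):
    """Return a list of anomaly descriptions for a batch of records."""
    anomalies = []
    if not rows:
        return ["empty batch: no rows ingested"]
    for field in required_fields:
        # Count missing values and compare against the allowed null rate.
        nulls = sum(1 for r in rows if r.get(field) is None)
        rate = nulls / len(rows)
        if rate > max_null_rate:
            anomalies.append(
                f"{field}: null rate {rate:.0%} exceeds {max_null_rate:.0%}"
            )
    return anomalies

# Example batch with one problematic column
batch = [
    {"order_id": 1, "amount": 9.99},
    {"order_id": 2, "amount": None},
    {"order_id": 3, "amount": None},
]
issues = check_quality(batch, ["order_id", "amount"])
```

In practice, a check like this would emit its findings to monitoring instrumentation (alerts, dashboards) rather than just returning them.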
Requirements:
Bachelor’s or Master’s degree in Engineering, Computer Science, Data Science, or a relevant discipline
A minimum of 3 years of professional experience in Data Engineering or a similar technical role
Expert-level command of SQL and relational database management systems such as PostgreSQL or MySQL
Hands-on proficiency with pipeline tools such as Luigi, dbt, or Apache Airflow
Practical experience with big data technologies such as Hadoop, Spark, or Kafka
Proven skills with cloud data stacks, specifically Google BigQuery, Amazon Redshift, or Azure Data Factory
Strong programming skills in Java, Scala, or Python for data processing tasks
Familiarity with data integration frameworks and API utilization
Understanding of security best practices and compliance frameworks
Exceptional problem-solving capabilities with a rigorous eye for detail
The ability to collaborate effectively and communicate complex ideas clearly
Comfortable navigating a high-velocity environment while juggling competing priorities
A proactive, self-starting attitude with a deep sense of accountability
A genuine enthusiasm for leveraging data to unlock tangible business value