About MediaRadar

MediaRadar, an industry leader in marketing intelligence now including the data and capabilities of Vivvix, powers the mission-critical marketing and sales decisions that drive competitive advantage. Our next-generation marketing intelligence platform enables clients to achieve peak performance with always-on insights spanning the media, creative, and business strategies of 5 million brands across 30+ media channels and $275 billion in media spend. By bringing the advertising past, present, and future into focus, we enable our clients to rapidly act on the competitive moves and emerging advertising trends impacting their business.

About the Role

We are looking for an experienced and strategic Data Engineer to join our data team. In this role, you will be responsible for building and maintaining scalable, high-performance data solutions using Azure Databricks, Apache Spark, AKS, Airflow, Postgres, and modern data lakehouse architectures. You'll play a key role in the full software development lifecycle, from design and implementation through deployment and documentation, while collaborating cross-functionally to support analytics, reporting, and operational data needs. This is an exciting opportunity to work alongside a great team of data engineers, with demanding technologies and an engaging work environment, and to help shape our data engineering best practices.
Job Responsibilities:
Design, develop, and maintain scalable ETL/ELT pipelines on Azure Databricks using Apache Spark (PySpark/Spark SQL); a minimal batch pipeline sketch follows this list
Design and implement both batch and real-time data ingestion and transformation processes (a streaming sketch also follows this list)
Build and manage Delta Lake tables, schemas, and data models to support efficient querying and analytics
Consolidate and process large-scale datasets from various structured and semi-structured sources (e.g., JSON, Parquet, Avro)
Write optimized SQL queries for large datasets using Spark SQL and PostgreSQL (an example query appears after this list)
Develop, schedule, and monitor workflows using Databricks Workflows, Airflow, or similar orchestration tools (see the Airflow sketch after this list)
Design, build, and deploy cloud-native, containerized applications on Azure Kubernetes Service (AKS) and integrate with Azure services
Ensure data quality, governance, and compliance through validation, documentation, and secure practices
Collaborate with data analysts, data architects, and business stakeholders to translate requirements into technical solutions
Contribute to and enforce best practices in data engineering, including version control (Git), CI/CD pipelines, and coding standards
Continuously enhance data systems for improved performance, reliability, and scalability
Mentor junior engineers and help evolve team practices and documentation
Stay up to date on emerging trends, technologies, and best practices in the data engineering space
Work effectively within an agile, cross-functional project team
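For illustration, a minimal sketch of the kind of batch pipeline this role involves. Every path, column, and table name here (raw_events, silver.events, etc.) is a placeholder, not an actual MediaRadar schema, and the Delta format assumes a Databricks runtime or delta-spark being available:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_batch_etl").getOrCreate()

# Ingest semi-structured JSON from a landing zone (path is a placeholder)
raw = spark.read.json("/mnt/landing/raw_events/")

# Normalize types, drop malformed rows, and de-duplicate on the business key
events = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropna(subset=["event_id", "event_ts"])
       .dropDuplicates(["event_id"])
)

# Write a partitioned Delta Lake table; partitioning by event_date lets
# downstream queries prune partitions
(events.write
       .format("delta")
       .mode("overwrite")
       .partitionBy("event_date")
       .saveAsTable("silver.events"))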
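A comparable streaming ingestion sketch, assuming the Databricks-specific Auto Loader (cloudFiles) source; paths and table names are again placeholders:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_streaming").getOrCreate()

# Incrementally pick up new JSON files with Auto Loader (Databricks-specific)
stream = (
    spark.readStream
         .format("cloudFiles")
         .option("cloudFiles.format", "json")
         .option("cloudFiles.schemaLocation", "/mnt/checkpoints/raw_events/schema")
         .load("/mnt/landing/raw_events/")
)

# Continuously append to a Delta table; the checkpoint gives exactly-once
# file processing across restarts
(stream.withColumn("event_ts", F.to_timestamp("event_ts"))
       .writeStream
       .format("delta")
       .option("checkpointLocation", "/mnt/checkpoints/raw_events/state")
       .outputMode("append")
       .toTable("silver.events_stream"))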
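And a sketch of the kind of optimized Spark SQL this work calls for, combining partition pruning with a broadcast join hint; the tables and columns are hypothetical:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Filtering on the partition column prunes partitions, and the broadcast
# hint avoids a shuffle join against the small dimension table
top_brands = spark.sql("""
    SELECT /*+ BROADCAST(b) */
           b.brand_name,
           COUNT(*)     AS ad_count,
           SUM(e.spend) AS total_spend
    FROM   silver.events e
    JOIN   dim.brands b ON e.brand_id = b.brand_id
    WHERE  e.event_date >= date_sub(current_date(), 30)
    GROUP  BY b.brand_name
    ORDER  BY total_spend DESC
    LIMIT  20
""")
top_brands.show()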
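Finally, a minimal orchestration sketch, assuming Airflow 2.4+ with the Databricks provider package installed; the job id and connection id are placeholders:

from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

# Trigger a pre-defined Databricks job nightly, with retries on failure
with DAG(
    dag_id="events_batch_etl",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # nightly at 02:00
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
):
    DatabricksRunNowOperator(
        task_id="run_events_etl",
        databricks_conn_id="databricks_default",
        job_id=12345,  # hypothetical Databricks job id
    )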
Requirements:
A Bachelor’s degree (or equivalent) in computer science, information technology, engineering, or related discipline
Minimum of 5 years of experience working as a Data Engineer
3-5 years of experience with Azure Databricks
Proven experience as a Data Engineer, with a strong focus on Azure Databricks and Apache Spark
Proficiency in Python, PySpark, Spark SQL, and working with large-scale datasets in different data formats
Strong experience designing and building ETL/ELT workflows in both batch and streaming environments
Solid understanding of data lakehouse architectures and Delta Lake
Experience with Azure Kubernetes Service (AKS) is desirable
Proficient in SQL and experience with PostgreSQL or similar relational databases
Experience with workflow orchestration tools (e.g., Databricks Workflows, Airflow, Azure Data Factory)
Familiarity with data governance, quality control, and security best practices
Strong problem-solving skills and attention to detail
Excellent communication and collaboration skills, with a track record of working cross-functionally
Comfortable working in agile development environments and using tools like Git, CI/CD, and issue trackers (e.g., Jira)