We're seeking a talented Senior Big Data Engineer to join our data engineering team. You'll play a critical role in optimizing our existing data pipelines and building new high-performance solutions that power analytics and business intelligence across the department. This is an opportunity to work with large-scale distributed systems and make a measurable impact on data processing efficiency.
Job Responsibilities:
Analyze and optimize existing Hadoop/Spark pipelines to improve processing speed, resource utilization, and reliability
Identify bottlenecks in data workflows and implement solutions that reduce processing time and costs
Tune Spark jobs, Hive queries, and Impala performance through partitioning strategies, caching, and execution plan optimization
Design and build scalable data pipelines using Spark (Scala) to process terabytes of data efficiently
Develop robust ETL/ELT workflows that integrate data from multiple sources into the Hadoop environment and Oracle data warehouses
Implement data quality checks and monitoring to ensure pipeline reliability
Work closely with product teams to understand requirements and deliver data solutions
Participate in code reviews and contribute to engineering best practices
Document pipeline architecture, data flows, and operational procedures
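For context on the tuning responsibilities above (partitioning, caching, execution plan optimization): in practice this work is often driven by a small set of Spark settings. The fragment below is purely illustrative, not part of the role description; every value is a hypothetical starting point that would need to be profiled against real workloads.

```properties
# Illustrative spark-defaults.conf fragment; all values are
# hypothetical starting points, to be validated by profiling.

# Match shuffle parallelism to cluster cores and data volume
spark.sql.shuffle.partitions   400

# Let Spark coalesce partitions and re-optimize plans at runtime (AQE)
spark.sql.adaptive.enabled     true

# Faster serialization for shuffled and cached data
spark.serializer               org.apache.spark.serializer.KryoSerializer
```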
Requirements:
6+ years of hands-on experience with Hadoop ecosystem technologies
Track record of managing and delivering successful projects
Strong proficiency in Apache Spark with Scala development
Solid experience with Hive and Impala for large-scale data querying
Understanding of distributed computing principles and data partitioning strategies
Experience optimizing Spark jobs and SQL queries for performance
Proficiency with version control (Git) and CI/CD practices
Proficiency with streaming frameworks such as Kafka
Bachelor’s/university degree or equivalent experience
Nice to have:
Exposure to cloud platforms and technologies
Exposure to Databricks
Master’s degree
What we offer:
Medical, dental & vision coverage
401(k)
Life, accident, and disability insurance
Wellness programs
Paid time off packages, including planned time off (vacation), unplanned time off (sick leave), and paid holidays
Discretionary and formulaic incentive and retention awards