We are looking for a skilled Data Engineer with strong expertise in Java and Apache Spark, specializing in data ingestion and large-scale data processing. The ideal candidate will design and build scalable, high-performance data pipelines and contribute to modern analytics platforms in a fast-paced Agile environment. This role requires hands-on experience in building ingestion frameworks, optimizing Spark workloads, and working with cloud-based data ecosystems.
Job Responsibilities:
Design, develop, and maintain scalable data ingestion pipelines using Java and Apache Spark
Build and optimize Spark jobs (Spark Core, Spark SQL, DataFrames, Streaming) for large-scale batch and real-time processing
Develop reusable ingestion frameworks for structured and semi-structured data from multiple sources (APIs, databases, files, streaming systems)
Implement high-performance ETL/ELT solutions with a strong focus on data quality, reliability, and scalability
Collaborate with data architects, analysts, and cross-functional teams to design robust data workflows
Optimize Spark performance (partitioning, caching, tuning, memory management) for production environments
Contribute to CI/CD pipelines, code reviews, and best practices in data engineering
Troubleshoot data pipeline failures and implement monitoring and alerting mechanisms
Document technical designs and mentor junior engineers
Requirements:
4–7 years of hands-on experience in Data Engineering and Java development