This long-term contract position offers an exciting opportunity to work in the manufacturing industry, leveraging your expertise in data processing and engineering. You will play a pivotal role in designing, implementing, and optimizing data solutions to support critical business operations.
Job Responsibilities:
Develop and maintain scalable data pipelines using tools such as Apache Spark and Python
Design efficient ETL processes to extract, transform, and load data from various sources
Collaborate with cross-functional teams to understand data requirements and deliver actionable insights
Implement and manage big data solutions using Apache Hadoop and Apache Kafka
Monitor and optimize the performance of data systems to ensure reliability and scalability
Ensure data quality and integrity through rigorous testing and validation processes
Troubleshoot and resolve issues related to data pipelines and infrastructure
Maintain documentation for data workflows and processes to ensure clarity and consistency
Stay updated on emerging technologies and best practices in data engineering to continuously improve systems
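The pipeline and ETL responsibilities above can be sketched in plain Python. This is a minimal illustration only: a production pipeline in this role would typically run on Apache Spark, and the CSV source, field names, and validation rule here are illustrative assumptions, not details from the posting.

```python
import csv
import io
import json

def extract(csv_text):
    """Extract: parse raw CSV text into dictionaries (source format is assumed)."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: normalize fields and drop records that fail a basic quality check."""
    out = []
    for row in rows:
        try:
            qty = int(row["quantity"])
        except (KeyError, ValueError):
            continue  # data-quality step: skip malformed records
        if qty > 0:
            out.append({"part": row["part"].strip().upper(), "quantity": qty})
    return out

def load(records):
    """Load: serialize to JSON lines, standing in for a warehouse or sink write."""
    return "\n".join(json.dumps(r, sort_keys=True) for r in records)

raw = "part,quantity\nax-100,4\nbx-7,notanumber\ncx-2,0\n"
print(load(transform(extract(raw))))  # only the valid record survives
```

The same extract/transform/load shape carries over to Spark, where `extract` becomes a DataFrame read, `transform` a chain of column operations, and `load` a write to distributed storage.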
Requirements:
Proficiency in Apache Spark for large-scale data processing
Strong programming skills in Python for data manipulation and automation
Hands-on experience with Apache Hadoop for distributed storage and processing
Familiarity with Apache Kafka for real-time data streaming
Expertise in designing and executing ETL processes to handle complex data transformations
Solid understanding of data quality principles and validation techniques
Excellent problem-solving abilities and attention to detail
Strong communication skills for effective collaboration with team members
What we offer:
Medical, vision, dental, life, and disability insurance