We are looking for a skilled Data Engineer to join our team in Carmel, Indiana. In this long-term contract role, you will design, build, and optimize data pipelines and systems to support business needs. The ideal candidate will bring expertise in data engineering tools and frameworks, along with a passion for solving complex challenges.
Job Responsibilities:
Develop and maintain robust data pipelines using modern frameworks and tools
Implement ETL processes to ensure accurate and efficient data transformation
Optimize data storage and retrieval systems for performance and scalability
Collaborate with cross-functional teams to understand data requirements and deliver solutions
Utilize Apache Spark and Hadoop for large-scale data processing
Work with Databricks to streamline data workflows and enhance analytics
Apply machine learning techniques using libraries such as scikit-learn, with Pandas for data preparation
Integrate Kafka for real-time data streaming and processing
Analyze and troubleshoot data-related issues to ensure system reliability
Document processes and workflows to support future development and maintenance
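To illustrate the kind of ETL work described above, here is a minimal sketch of an extract-transform-load pipeline using Pandas. The function names, sample records, and in-memory "store" are hypothetical, shown only to convey the pattern; a production pipeline would read from and write to real data sources.

```python
import pandas as pd


def extract(records):
    """Extract: load raw records into a DataFrame."""
    return pd.DataFrame(records)


def transform(df):
    """Transform: drop incomplete rows, normalize types, aggregate."""
    df = df.dropna(subset=["amount"]).copy()
    df["amount"] = df["amount"].astype(float)
    return df.groupby("region", as_index=False)["amount"].sum()


def load(df, store):
    """Load: write the result to a destination (here, a plain dict)."""
    store["sales_by_region"] = df
    return store


# Hypothetical sample input with one incomplete record.
raw = [
    {"region": "north", "amount": "10.0"},
    {"region": "south", "amount": "5.5"},
    {"region": "north", "amount": None},
]

store = load(transform(extract(raw)), {})
```

In practice the same extract/transform/load separation carries over to Spark or Databricks jobs; keeping each stage a pure function makes pipelines easier to test and document.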
Requirements:
At least 2 years of experience in data engineering or a related field
Proficiency in Python for data manipulation and analysis
Hands-on experience with Apache Spark and Hadoop
Knowledge of ETL processes and tools
Familiarity with Databricks for data pipeline optimization
Experience with machine learning libraries such as scikit-learn, and with Pandas for data preparation
Understanding of Kafka for real-time data streaming
Strong problem-solving skills and attention to detail