We are seeking a skilled and experienced Data Engineer to join our team in Pune. The ideal candidate will have a strong foundation in data engineering principles and hands-on expertise with AWS services such as Glue, EMR, and Spark. You will be responsible for designing, building, and maintaining scalable data pipelines and solutions that support business analytics and decision-making.
Job Responsibilities:
Design, develop, and maintain robust ETL/ELT pipelines using AWS Glue and Spark
Work with large-scale distributed data processing systems on AWS EMR
Optimize data workflows for performance, scalability, and reliability
Collaborate with data scientists, analysts, and business stakeholders to understand data requirements
Ensure data quality, integrity, and governance across all pipelines
Implement best practices in data modeling, storage, and transformation
Monitor and troubleshoot data pipeline issues in production environments
Maintain documentation for data architecture, processes, and workflows
Requirements:
5–10 years of experience in data engineering or related roles
Strong hands-on experience with AWS Glue, EMR, and Spark
Proficiency in Python or Scala for data processing
Solid understanding of data engineering concepts including data warehousing, partitioning, schema design, and performance tuning
Experience with cloud-based data lakes, S3, and Redshift is a plus
Familiarity with CI/CD pipelines and version control (e.g., Git)
Excellent problem-solving and communication skills
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
Nice to have:
AWS Certification (e.g., AWS Certified Data Analytics – Specialty)