Hadoop Data Engineer

HSBC

Location:
India, Pune

Category:
IT - Software Development

Contract Type:
Employment contract

Salary:

Not provided

Job Description:

Join HSBC as a Hadoop Data Engineer and fulfil your potential in an exciting role involving the design, development, and implementation of cutting-edge data pipelines and big data solutions.

Job Responsibility:

  • Software design, Scala and Spark development, and automated testing of new and existing components in an Agile, DevOps-driven environment
  • Promoting development standards, code reviews, mentoring, knowledge sharing
  • Production support & troubleshooting
  • Implementing tools and processes to handle performance, scale, availability, accuracy, and monitoring
  • Liaising with business analysts (BAs) to ensure that requirements are correctly interpreted and implemented
  • Participation in regular planning and status meetings
  • Input into the development process through involvement in Sprint reviews and retrospectives
  • Input into system architecture and design
  • Peer code reviews

Requirements:

  • Scala development and design using Scala 2.10+ or Java development and design using Java 1.8+
  • Experience with most of the following technologies (Apache Hadoop, Scala, Apache Spark, Spark streaming, YARN, Kafka, Hive, Python, ETL frameworks, Map Reduce, SQL, RESTful services)
  • Sound working knowledge of the Unix/Linux platform
  • Hands-on experience building data pipelines using Hadoop components such as Hive, Spark, and Spark SQL
  • Experience with industry standard version control tools (Git, GitHub), automated deployment tools (Ansible & Jenkins) and requirement management in JIRA
  • Understanding of big data modelling using relational and non-relational techniques
  • Experience debugging code issues and communicating the findings to the development team/architects
  • Experience with time-series/analytics databases such as Elasticsearch
  • Experience with scheduling tools such as Airflow, Control-M
  • Understanding or experience of Cloud design patterns
  • Exposure to DevOps and Agile project methodologies such as Scrum and Kanban
  • Experience developing HiveQL and UDFs for analysing semi-structured/structured datasets

What we offer:
  • Flexible working
  • Opportunities to grow
  • Inclusive and diverse environment
  • Continuous professional development

Additional Information:

Job Posted:
April 30, 2025

Expiration:
May 05, 2025

Employment Type:
Full-time
Work Type:
Hybrid work