
Big Data Engineering Lead

https://www.citi.com/

Citi


Location:
Chennai, India

Category:
IT - Software Development


Contract Type:
Not provided


Salary:
Not provided

Job Description:

The Senior Big Data Engineering Lead will play a pivotal role in designing, implementing, and optimizing large-scale data processing and analytics solutions. This role requires a visionary leader who can drive innovation, define architecture strategy, and ensure the scalability and efficiency of our big data infrastructure.

Job Responsibilities:

  • Lead the design and development of a robust and scalable big data architecture handling exponential data growth while maintaining high availability and resilience
  • Design complex data transformation processes using Spark and other big data technologies in Java, PySpark, or Scala
  • Design and implement data pipelines that ensure data quality, integrity, and availability
  • Collaborate with cross-functional teams to understand business needs and translate them into technical requirements
  • Evaluate and select technologies that improve data efficiency, scalability, and performance
  • Oversee the deployment and management of big data tools and frameworks such as Hadoop, Spark, Kafka, and others
  • Provide technical guidance and mentorship to the development team and junior architects
  • Continuously assess and integrate emerging technologies and methodologies to enhance data processing capabilities
  • Optimize big data frameworks, such as Hadoop and Spark, for performance improvements and reduced processing time across distributed systems
  • Innovate in designing, developing, and refining data pipeline architectures to improve data flow and strengthen data processing capabilities
  • Implement data governance frameworks to ensure data accuracy, consistency, and privacy across the organization, leveraging metadata management and data lineage tracking
  • Conduct benchmarking and stress testing of big data solutions to validate performance standards and operational capacity
  • Ensure compliance with data security best practices and regulations.

Requirements:

  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
  • At least 10-12 years of overall software development experience, primarily building applications that handle large-scale data volumes across ingestion, persistence, and retrieval
  • Deep understanding of big data technologies, including Hadoop, Spark, Kafka, Flink, NoSQL databases, etc
  • Hands-on developer experience with big data technologies: Hadoop, Apache Spark, Python, and PySpark
  • Strong programming skills in languages such as Java, Scala, or Python
  • Excellent problem-solving skills with a knack for innovative solutions
  • Strong communication and leadership abilities
  • Proven ability to manage multiple projects simultaneously and deliver results.

Nice to have:

  • Experience with data modeling and ETL/ELT processes
  • Experience migrating ETL frameworks from proprietary ETL technologies such as Ab Initio to Apache Spark
  • Familiarity with machine learning and data analytics tools
  • Knowledge of core banking/financial services systems.

Additional Information:

Job Posted:
August 15, 2025

Employment Type:
Full-time
Work Type:
On-site work