Lead Data Analytics Analyst

Citi (https://www.citi.com/)

Location:
Bengaluru, India

Category:
IT - Software Development

Contract Type:
Not provided

Salary:
Not provided

Job Description:

The Lead Data Analytics Analyst is responsible for managing, maintaining, and optimizing on-premises and cloud-based big data infrastructure. The individual should have expertise in administering big data platforms; ensuring the high availability, performance, and security of data systems; and supporting data engineering and analytics teams. This role requires hands-on experience with both on-premises big data technologies (e.g., Hadoop, Spark) and cloud-based platforms (e.g., AWS, Azure, GCP).

Job Responsibility:

  • Lead day-to-day operations and support for Cloudera Hadoop ecosystem components (HDFS, YARN, Hive, Impala, Spark, HBase, etc.)
  • Troubleshoot issues related to data ingestion, job failures, performance degradation, and service unavailability
  • Monitor cluster health using Cloudera Manager and respond to alerts, logs, and metrics (see the health-check sketch after this list)
  • Collaborate with engineering teams to analyze root causes and implement preventive measures
  • Coordinate patching, service restarts, failovers, and rolling restarts for cluster maintenance
  • Assist with user onboarding, access control, and issues accessing cluster services
  • Contribute documentation to the knowledge base
  • Work on data recovery, replication, and backup support tasks
  • Move all legacy workloads to the cloud platform
  • Research and assess open-source technologies and public cloud (AWS/GCP) tech stack components, and recommend and integrate them into the design and implementation
  • Serve as the technical expert and mentor other team members on Big Data and Cloud tech stacks
  • Define needs around maintainability, testability, performance, security, quality, and usability for the data platform
  • Drive implementation of consistent patterns, reusable components, and coding standards for data engineering processes
  • Tune big data applications on Hadoop and non-Hadoop platforms for optimal performance
  • Evaluate new IT developments and evolving business requirements, and recommend appropriate systems alternatives and/or enhancements to current systems by analyzing business processes, systems, and industry standards
  • Apply an in-depth understanding of how data analytics collectively integrates within the sub-function, and coordinate and contribute to the objectives of the entire function
  • Produce detailed analyses of issues where the best course of action is not evident from the available information but must still be recommended or taken
  • Supervise day-to-day staff management issues, including resource management, work allocation, mentoring/coaching, and other duties and functions as assigned
  • Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency
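
For illustration only (not part of the posting), the short Python sketch below shows the kind of routine health check the monitoring and operations duties above involve. It shells out to the standard hdfs dfsadmin -report command; the 80% alert threshold and the output parsing are assumptions about a typical Cloudera/HDFS deployment.

    import re
    import subprocess

    def hdfs_usage_percent():
        """Run `hdfs dfsadmin -report` and return the cluster-wide DFS Used%."""
        report = subprocess.run(
            ["hdfs", "dfsadmin", "-report"],
            capture_output=True, text=True, check=True,
        ).stdout
        # The first "DFS Used%" line in the report is the cluster-wide figure.
        match = re.search(r"DFS Used%:\s*([\d.]+)%", report)
        return float(match.group(1)) if match else None

    if __name__ == "__main__":
        used = hdfs_usage_percent()
        if used is not None and used > 80.0:  # illustrative threshold, not from the posting
            print(f"WARNING: HDFS is {used:.1f}% full - consider cleanup or expansion")
        else:
            print(f"HDFS usage OK ({used}%)")

In practice a check like this would run on a schedule and feed an alerting channel, alongside the alerts Cloudera Manager raises natively.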

Requirements:

  • 10+ years of total IT experience
  • 5+ years of experience in supporting Hadoop (Cloudera)/big data technologies
  • 5+ years of experience in public cloud infrastructure (AWS or GCP)
  • Experience with Kubernetes and cloud-native technologies
  • Experience with all aspects of DevOps (source control, continuous integration, deployments, etc.)
  • Advanced knowledge of the Hadoop ecosystem and Big Data technologies
  • Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr)
  • Knowledge of troubleshooting techniques for Hive, Spark, YARN, Kafka, and HDFS
  • Advanced Linux system administration and scripting skills (Shell, Python)
  • Experience designing and developing data pipelines for data ingestion and transformation using Spark with Java, Scala, or Python (see the PySpark sketch after this list)
  • Expert-level experience building pipelines using Apache Spark
  • Familiarity with core provider services from AWS, Azure or GCP, preferably having supported deployments on one or more of these platforms
  • Hands-on experience with Python/PySpark/Scala is required
  • System-level understanding of data structures, algorithms, and distributed storage and compute
  • Can-do attitude toward solving complex business problems; good interpersonal and teamwork skills
  • Possess team management experience and have led a team of data/platform engineers and analysts
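
As an illustrative sketch of the Spark pipeline work these requirements describe, the minimal PySpark job below reads raw CSV data, applies a simple transformation, and writes partitioned Parquet. The paths, column names, and transformation logic are hypothetical placeholders, not details from the posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Illustrative paths - placeholders, not from the posting.
    RAW_PATH = "hdfs:///data/raw/transactions"
    CURATED_PATH = "hdfs:///data/curated/transactions"

    spark = SparkSession.builder.appName("ingest-transform-sketch").getOrCreate()

    # Ingest: read raw CSV files with a header row.
    raw = spark.read.option("header", "true").csv(RAW_PATH)

    # Transform: cast amounts, drop malformed rows, derive a load date.
    curated = (
        raw.withColumn("amount", F.col("amount").cast("double"))
           .dropna(subset=["amount"])
           .withColumn("load_date", F.current_date())
    )

    # Persist as Parquet, partitioned by load date for efficient downstream reads.
    curated.write.mode("overwrite").partitionBy("load_date").parquet(CURATED_PATH)

    spark.stop()

Partitioning by load date is a common design choice here: it keeps incremental downstream reads cheap because consumers can prune to only the partitions that changed.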

Nice to have:

Experience with Snowflake or Delta Lake

Additional Information:

Job Posted:
June 19, 2025

Employment Type:
Full-time
Work Type:
On-site work