Senior Data Engineer

Citi (https://www.citi.com/)

Location:
Pune, India

Category:
IT - Software Development

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We are seeking a highly skilled and experienced Senior Data Engineer to join our team. The ideal candidate will have strong expertise in Python, PySpark, Apache Iceberg, Flink, and Kafka, with additional exposure to API development and integration. This role involves designing, building, and optimizing scalable data pipelines and systems to support our data-driven initiatives.

Job Responsibilities:

  • Design, develop, and maintain scalable and efficient data pipelines using Python and PySpark
  • Implement real-time and batch data processing workflows leveraging Apache Flink and Kafka
  • Work with Apache Iceberg to manage large-scale datasets, including schema evolution, partitioning, and time-travel queries
  • Optimize data storage and retrieval for performance and scalability
  • Develop and integrate APIs to enable seamless data access and interaction with downstream systems
  • Ensure APIs are secure, scalable, and optimized for performance
  • Build and manage real-time data streaming solutions using Kafka
  • Implement event-driven architectures to support business use cases
  • Ensure data quality, consistency, and security across all pipelines and systems
  • Implement monitoring and alerting mechanisms to track data pipeline performance and reliability
  • Collaborate with cross-functional teams, including data scientists, analysts, and DevOps engineers
  • Mentor junior engineers and contribute to best practices in data engineering
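The streaming and event-driven responsibilities above can be illustrated with a minimal, self-contained sketch. Here an in-memory queue stands in for a Kafka topic (broker configuration, Flink jobs, and Iceberg tables are assumed infrastructure and omitted; the "orders" events and the enrichment transform are hypothetical):

```python
import json
import queue

# In-memory queue standing in for a Kafka topic; a real pipeline
# would use a Kafka producer/consumer against a running broker.
topic = queue.Queue()

def produce(event: dict) -> None:
    """Serialize an event and publish it to the in-memory topic."""
    topic.put(json.dumps(event))

def consume_batch() -> list:
    """Drain the topic, applying a simple enrichment transform
    (convert cents to a dollar amount) to each event."""
    out = []
    while not topic.empty():
        event = json.loads(topic.get())
        event["amount_usd"] = round(event["amount_cents"] / 100, 2)
        out.append(event)
    return out

# Publish two sample events, then consume and enrich them.
produce({"order_id": 1, "amount_cents": 1999})
produce({"order_id": 2, "amount_cents": 505})
processed = consume_batch()
```

In production the same produce/transform/consume shape maps onto Kafka topics and Flink or PySpark operators, with the transform step replaced by the pipeline's actual business logic.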

Requirements:

  • Proficiency in Python and PySpark for data processing and pipeline development
  • Strong understanding of distributed data processing frameworks
  • Hands-on experience with Apache Iceberg for managing large-scale datasets
  • Expertise in Apache Flink for real-time data processing
  • Proficiency in Kafka for data streaming and messaging
  • Experience in developing and integrating RESTful APIs
  • Knowledge of API security, scalability, and performance optimization
  • Strong understanding of ETL/ELT processes and data transformation techniques
  • Familiarity with data lakehouse architectures and best practices
  • Experience with CI/CD pipelines for deploying and managing data pipelines
  • Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes
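The RESTful API requirement above can be sketched with the Python standard library alone; the `/health` endpoint and its JSON payload are illustrative placeholders, and a production service would use a proper framework plus authentication:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class DataHandler(BaseHTTPRequestHandler):
    """Minimal JSON-over-HTTP handler with a single GET endpoint."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        # Silence per-request logging for this sketch.
        pass

# Bind to an ephemeral port and serve from a background thread.
server = HTTPServer(("127.0.0.1", 0), DataHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Exercise the endpoint as a downstream client would.
url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    payload = json.loads(resp.read())

server.shutdown()
```

The same request/response contract applies when the endpoint fronts a data platform: downstream systems integrate against the JSON shape, not the storage layer behind it.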

Nice to have:

  • Experience with cloud platforms (AWS, Azure, or GCP) for data engineering solutions
  • Knowledge of data governance frameworks and compliance requirements
  • Familiarity with monitoring tools like Prometheus, Grafana, or similar

Additional Information:

Job Posted:
August 12, 2025

Employment Type:
Full-time
Work Type:
On-site work