Data Engineer

Citi

Location:
Chennai, India

Category:
IT - Software Development

Contract Type:
Not provided

Salary:
Not provided

Job Description:

Data Engineer position responsible for the design, development, implementation, and maintenance of data flow channels and data processing systems that support the collection, storage, batch and real-time processing, and analysis of information in a scalable, repeatable, and secure manner, in coordination with the Data & Analytics team.

Job Responsibility:

  • Design, develop, implement and maintain data flow channels and data processing systems
  • Support collection, storage, batch and real-time processing, and analysis of information
  • Ensure high-quality software development with complete documentation and traceability
  • Develop and optimize scalable Spark Java-based data pipelines for processing financial data
  • Design and implement distributed computing solutions for risk modeling, pricing and regulatory compliance
  • Ensure efficient data storage and retrieval using Big Data technologies
  • Implement best practices for Spark performance tuning
  • Maintain high code quality through testing, CI/CD pipelines and version control
  • Work on batch processing frameworks for market risk analytics
  • Work with business stakeholders and Business Analysts to understand requirements
  • Work with other data scientists to understand and interpret complex datasets

Requirements:

  • 9-12 years of experience in data ecosystems
  • 5-8 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix scripting, and other Big Data frameworks
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java, with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.)
  • Data integration, migration, and large-scale ETL experience (common ETL platforms such as PySpark, DataStage, Ab Initio, etc.)
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization)
  • Experience building and optimizing 'big data' data pipelines, architectures, and datasets
  • Experience with Confluent Kafka, Redhat JBPM, CI/CD build pipelines and toolchain – Git, BitBucket, Jira
  • Experience with external cloud platforms such as OpenShift, AWS, and GCP
  • Experience with container technologies (Docker, Pivotal Cloud Foundry) and supporting frameworks (Kubernetes, OpenShift, Mesos)
  • Bachelor's/University degree or equivalent experience in computer science, engineering, or similar domain

Nice to have:

  • Experience in banking & finance domain
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
  • Highly effective interpersonal and communication skills with tech/non-tech stakeholders
  • Ability to work in a fast-paced financial environment

Additional Information:

Job Posted:
August 23, 2025

Employment Type:
Full-time
Work Type:
On-site work