Graph Data Engineer

Citi

Location:
Pune, India

Category:
IT - Software Development

Contract Type:
Employment contract

Salary:

Not provided

Job Description:

Citi is seeking a Graph Data Engineer experienced in Neo4j and related technologies to design and implement graph database solutions, optimize queries, and integrate with other systems. The candidate must be well versed in graph theory, Cypher, and big data technologies such as Hadoop, and must be proficient in creating and maintaining graph database architectures. Knowledge of BI integration is desirable, as is proficiency in Spark, Python, and Kafka.

Job Responsibilities:

  • Designing and implementing graph database solutions
  • Creating and maintaining graph schemas, models, and architectures
  • Defining data models, creating nodes, relationships, and properties
  • Implementing Cypher queries
  • Migrating data from relational or other databases into Neo4j
  • Optimizing Cypher queries for performance
  • Indexing and applying query optimization techniques
  • Integrating Neo4j with BI and other systems
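
As a rough illustration of the day-to-day work listed above — defining data models, creating nodes, relationships, and properties, and implementing Cypher queries — the sketch below uses the official Neo4j Python driver. The Employee/Team model, connection details, and statement names are assumptions made for the example, not part of the role:

```python
# Illustrative sketch: defining nodes/relationships and running Cypher
# through the official neo4j Python driver. The Employee/Team schema
# and the connection settings below are assumptions for the example.

# Parameterized Cypher statements; MERGE makes the load idempotent.
CREATE_EMPLOYEE = (
    "MERGE (e:Employee {id: $id}) "
    "SET e.name = $name"
)
LINK_TO_TEAM = (
    "MATCH (e:Employee {id: $emp_id}) "
    "MERGE (t:Team {name: $team}) "
    "MERGE (e)-[:MEMBER_OF]->(t)"
)
TEAMMATES = (
    "MATCH (e:Employee {id: $emp_id})-[:MEMBER_OF]->(:Team)"
    "<-[:MEMBER_OF]-(peer:Employee) "
    "RETURN peer.name AS name"
)

def load_employee(session, emp):
    """Create or update one Employee node and its Team relationship."""
    session.run(CREATE_EMPLOYEE, id=emp["id"], name=emp["name"])
    session.run(LINK_TO_TEAM, emp_id=emp["id"], team=emp["team"])

if __name__ == "__main__":
    # Requires a running Neo4j instance; URI and credentials are placeholders.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))
    with driver.session() as session:
        load_employee(session, {"id": 1, "name": "Ada", "team": "Graph Platform"})
        teammates = [r["name"] for r in session.run(TEAMMATES, emp_id=1)]
    driver.close()
```

Keeping the Cypher parameterized (the `$id`, `$name` placeholders) lets Neo4j cache query plans and avoids injection issues, which matters once these statements run inside a migration loop.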

Requirements:

  • Neo4j expertise: proven experience with Neo4j, including its core concepts, the Cypher query language, and best practices
  • Designing, developing, and deploying graph-based solutions using Neo4j, including creating and maintaining graph schemas, models, and architectures
  • Defining data models; creating nodes, relationships, and properties; and implementing Cypher queries
  • Migrating data from relational or other databases into Neo4j
  • Developing and optimizing Cypher queries for performance, ensuring efficient data retrieval and manipulation
  • Familiarity with graph theory, graph data modeling, and other graph database technologies
  • Integrating Neo4j with BI and other systems
  • Deep hands-on expertise in building Neo4j graph solutions
  • Experience with the Hadoop ecosystem big data stack (HDFS, YARN, MapReduce, Hive, Impala)
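
The migration and query-optimization requirements above are commonly met with batched `UNWIND` loads, which create many nodes per round trip instead of one query per row. A minimal sketch, assuming a hypothetical `Customer` row shape:

```python
# Sketch of migrating relational rows into Neo4j in batches.
# The Customer column names and the batch size are assumptions.

# UNWIND over a list parameter lets one round trip upsert many nodes,
# which is far cheaper than issuing one MERGE per source row.
BATCH_UPSERT = (
    "UNWIND $rows AS row "
    "MERGE (c:Customer {id: row.id}) "
    "SET c.name = row.name, c.city = row.city"
)

def batches(rows, size=1000):
    """Yield fixed-size chunks of a row list for batched UNWIND loads."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

def migrate(session, rows, size=1000):
    """Upsert all rows into Neo4j, one UNWIND statement per batch."""
    for chunk in batches(rows, size):
        session.run(BATCH_UPSERT, rows=chunk)
```

Creating an index up front (e.g. `CREATE INDEX customer_id IF NOT EXISTS FOR (c:Customer) ON (c.id)`) keeps the `MERGE` lookups fast as the batch count grows — the kind of indexing and query-optimization step the responsibilities above refer to.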

Nice to have:

  • Developing Spark applications using Scala or Python (PySpark) for data transformation, aggregation, and analysis of large datasets
  • Developing and maintaining Kafka-based data pipelines, including designing Kafka Streams and setting up Kafka clusters
  • Ensuring efficient data flow
  • Creating and maintaining documentation for system architecture, design, and operational processes
  • Hands-on expertise in PySpark, Scala, and Kafka
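
The Spark work described above amounts to transforming and aggregating records at scale. The plain-Python sketch below mirrors the logic a PySpark `groupBy(...).agg(...)` would express, without requiring a Spark runtime; the sales-record shape is an assumption for illustration:

```python
from collections import defaultdict

# Plain-Python mirror of a Spark-style aggregation: group records by a
# key column and sum a measure. In PySpark the same logic would read
# df.groupBy("region").agg(F.sum("amount")).
def total_by_region(records):
    totals = defaultdict(float)
    for rec in records:
        totals[rec["region"]] += rec["amount"]
    return dict(totals)

sales = [
    {"region": "EMEA", "amount": 120.0},
    {"region": "APAC", "amount": 80.0},
    {"region": "EMEA", "amount": 50.0},
]
# total_by_region(sales) -> {"EMEA": 170.0, "APAC": 80.0}
```

In a real pipeline the same aggregation would run distributed across partitions, with Kafka feeding the records in; the per-record logic is what carries over.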

Additional Information:

Job Posted:
July 08, 2025

Employment Type:
Full-time
Work Type:
Hybrid work