The Fixed Income Data Platform Team is the backbone of Citi's Fixed Income mission, providing the low latency, high concurrency, scalability, and availability needed to power our groundbreaking solutions. Our Data Platform Engineering team is on the cutting edge: we research, adapt, and deploy the latest open-source data platforms to meet Citi's unique needs. We're a collaborative group that thrives on technical challenges and the satisfaction of building highly performant systems. We're seeking a passionate, highly skilled senior developer to join our talented team of engineers in building and maintaining Citi's next-generation data platform, which enables quantitative scientists and traders to quickly iterate on analytical and trading tools and put them into production.
Job Responsibilities:
Analyze system requirements, including identifying program interactions and appropriate interfaces between impacted components and subsystems
Participate in sprint planning, tasking, and estimation of assigned platform work
Participate in component and service design for analytical services
Work on bug resolution and application improvements, such as performance and maintainability
May occasionally work non-standard shifts, including nights and/or weekends, and may have on-call responsibilities
Stay abreast of new trends in open-source tooling and champion tools that could improve the efficiency of the Fixed Income platform community
Work closely with the business to help them use platform capabilities and develop efficient analytical tools
Continuously look to automate manual touchpoints in the technology delivery pipeline
Requirements:
4+ years of demonstrable and relevant experience in software development, with a strong focus on Java
In-depth knowledge and hands-on experience with Apache Flink for real-time stream processing, including Flink SQL, DataStream API, and state management
Solid understanding and practical experience with Redis, including data structures, caching patterns, and pub/sub mechanisms for high-performance applications
Hands-on experience with Large Language Models (LLMs), including fine-tuning, prompt engineering, and integrating LLMs into applications
Extensive hands-on experience with data distribution platforms like Apache Kafka, and various big data storage/querying systems (e.g., Trino, Pinot, Druid, Ignite) for low-latency access
Experience with the design and implementation of cloud-native applications and deployment via Kubernetes / OpenShift, specifically for managing data services
Good understanding of data modeling, partitioning, and sharding of large data sets for optimal performance in large-scale data platforms
Experience working in a Continuous Integration and Continuous Delivery (CI/CD) environment, with a focus on rapid and reliable deployment of services and data access layers
Experience with the full SDLC and with working in an Agile environment, adapting to fast-paced data requirements
Demonstrable understanding and experience of engineering best practices: design patterns, coding standards, code review, and robust unit/integration testing
Strong experience with standard CI tools (Jenkins, TeamCity, SonarQube, Git)
Bachelor’s degree/University degree or equivalent experience
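As a purely illustrative aside (not part of the role description), the "caching patterns" called out in the Redis requirement can be sketched as a minimal cache-aside helper in Java. The `CacheAside` class and its `backingStore` function below are hypothetical names, and a `ConcurrentHashMap` stands in for Redis so the snippet is self-contained; in a real service the get/put calls would go through a Redis client.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/**
 * Minimal cache-aside sketch: read from the cache first, and on a miss
 * load the value from a slower backing store (e.g. a database) and cache it.
 * A ConcurrentHashMap stands in for Redis to keep the example self-contained.
 */
public class CacheAside<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> backingStore; // hypothetical slow lookup

    public CacheAside(Function<K, V> backingStore) {
        this.backingStore = backingStore;
    }

    /** Return the cached value, loading it from the backing store on a miss. */
    public V get(K key) {
        return cache.computeIfAbsent(key, backingStore);
    }

    /** Invalidate on write so the next read reloads fresh data. */
    public void invalidate(K key) {
        cache.remove(key);
    }
}
```

The same read-through/invalidate-on-write flow applies when the map is replaced by Redis GET/SET/DEL calls, with a TTL typically added on the SET.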