The Big Data/Java Application Developer is an intermediate-level position responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to application systems analysis and programming activities.
Job Responsibilities:
Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, and model development, and establish and implement new or revised application systems and programs to meet specific business needs or user areas
Monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation, and provide user and operational support on applications to business users
Utilize in-depth specialty knowledge of applications development to analyze complex problems and issues, evaluate business processes, system processes, and industry standards, and make evaluative judgements
Recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality
Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and assist with the installation of customer exposure systems
Ensure essential procedures are followed and help define operating standards and processes
Serve as advisor or coach to new or lower-level analysts
Operate with a limited level of direct supervision
Exercise independence of judgement and autonomy
Experience managing a data-focused product, ML platform, and/or UI/UX
Build UI components using Angular, HTML, CSS, Java, Spring Boot, Oracle, and NoSQL, OR design, develop, and optimize scalable distributed data processing pipelines using Apache Spark and Scala
Act as SME to senior stakeholders and/or other team members
Experience with large-scale distributed web services and the processes around testing, monitoring, and SLAs to ensure high product quality.
Design and Develop Scalable Data Pipelines: Lead the design, development, and maintenance of high-performance, scalable data pipelines using Apache Spark and Scala to handle large-scale datasets in the financial industry (a batch pipeline sketch follows this list).
ETL Process Implementation: Implement ETL (Extract, Transform, Load) processes for data integration, transforming complex data from multiple sources into structured, actionable insights.
Data Optimization and Performance Tuning: Monitor, troubleshoot, and optimize the performance of data pipelines and applications, ensuring high availability, low-latency, and efficient resource usage.
Data Workflow Orchestration: Use Apache Airflow to orchestrate and automate complex data workflows, ensuring seamless integration and efficient execution of tasks across systems.
Real-Time Data Processing: Integrate real-time data streaming solutions using Apache Kafka for processing and managing large volumes of data in real time (a streaming sketch follows this list).
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets by driving compliance with applicable laws, rules, and regulations; adhering to Policy; applying sound ethical judgment regarding personal behavior, conduct, and business practices; and escalating, managing, and reporting control issues with transparency.
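To make the Spark/Scala pipeline responsibilities above concrete, here is a minimal batch ETL sketch in Scala. It is an illustration only: the input path, the trade schema (symbol, price, quantity, trade_ts), and the output location are hypothetical placeholders, not details taken from this posting.

import org.apache.spark.sql.{SparkSession, functions => F}

object TradeEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("trade-etl")
      .getOrCreate()

    // Extract: read raw trade records (hypothetical path and schema)
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/raw/trades.csv")

    // Transform: drop bad rows, then aggregate per-symbol daily notional
    val daily = raw
      .filter(F.col("price") > 0 && F.col("quantity") > 0)
      .withColumn("notional", F.col("price") * F.col("quantity"))
      .groupBy(F.col("symbol"), F.to_date(F.col("trade_ts")).as("trade_date"))
      .agg(F.sum("notional").as("total_notional"))

    // Load: write partitioned Parquet for downstream consumers;
    // partitioning by date keeps scans cheap for time-bounded queries
    daily.write
      .mode("overwrite")
      .partitionBy("trade_date")
      .parquet("hdfs:///data/curated/daily_notional")

    spark.stop()
  }
}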
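Similarly, a hedged sketch of the Kafka integration described above, using Spark Structured Streaming's built-in Kafka source (this requires the spark-sql-kafka package on the classpath; the broker address and topic name are invented for the example):

import org.apache.spark.sql.SparkSession

object TradeStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("trade-stream")
      .getOrCreate()

    // Subscribe to a Kafka topic (placeholder broker and topic)
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "trades")
      .load()

    // Kafka delivers key/value as binary; cast the value to a string payload
    val payloads = stream.selectExpr("CAST(value AS STRING) AS payload")

    // Console sink for demonstration only; a production pipeline would parse
    // the payload and write to a durable sink, with checkpointing for recovery
    val query = payloads.writeStream
      .format("console")
      .option("checkpointLocation", "/tmp/trade-stream-checkpoint")
      .start()

    query.awaitTermination()
  }
}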
Requirements:
5+ years of relevant experience
Experience in systems analysis and programming of software applications
Experience in managing and implementing successful projects
Working knowledge of consulting/project management techniques/methods
Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements
Hands-on experience with Angular, HTML, CSS, Java, Spring Boot, Oracle, and NoSQL, OR with designing, developing, and optimizing scalable distributed data processing pipelines using Apache Spark and Scala
Proficiency in Functional Programming: High proficiency in Scala-based functional programming for developing robust and efficient data processing pipelines (a brief sketch follows this list).
Proficiency in Big Data Technologies: Strong experience with Apache Spark, Hadoop ecosystem tools such as Hive, HDFS, and YARN.
Programming and Scripting: Advanced knowledge of Scala and a good understanding of Python for data engineering tasks.
Data Modeling and ETL Processes: Solid understanding of data modeling principles and ETL processes in big data environments.
Analytical and Problem-Solving Skills: Strong ability to analyze and solve performance issues in Spark jobs and distributed systems.
Version Control and CI/CD: Familiarity with Git, Jenkins, and other CI/CD tools for automating the deployment of big data applications.
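As a small illustration of the Scala functional-programming style called for above, the sketch below builds a record-cleaning and aggregation step from pure, composable functions. The Trade case class and its validation rules are invented for the example (Scala 2.13+ for groupMapReduce):

// Pure, side-effect-free record processing in plain Scala
final case class Trade(symbol: String, price: BigDecimal, quantity: Long)

object TradeOps {
  // Validation returns Either instead of throwing, so failures compose
  def validate(t: Trade): Either[String, Trade] =
    if (t.price <= 0) Left(s"non-positive price for ${t.symbol}")
    else if (t.quantity <= 0) Left(s"non-positive quantity for ${t.symbol}")
    else Right(t)

  // Higher-order pipeline: keep valid trades, then fold notional per symbol
  def notionalBySymbol(trades: List[Trade]): Map[String, BigDecimal] =
    trades
      .flatMap(t => validate(t).toOption)  // drop invalid records
      .groupMapReduce(_.symbol)(t => t.price * t.quantity)(_ + _)
}

Because every step is a pure function over immutable data, the same logic is straightforward to unit-test and to lift into Spark transformations.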
Nice to have:
Real-Time Data Streaming: Experience with streaming platforms such as Apache Kafka or Spark Streaming.
Financial Services Context: Familiarity with financial data processing and its scalability, security, and compliance requirements.
Leadership in Data Engineering: Proven ability to work collaboratively with teams to develop robust data pipelines and architectures.