The Applications Development Intermediate Programmer Analyst is an intermediate-level position responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.
Job Responsibilities:
Develop and maintain data pipelines: Design, develop, and optimize scalable ETL (Extract, Transform, Load) pipelines using PySpark to process large datasets (a minimal sketch follows this list)
Coding and software engineering: Write clean, efficient, well-documented code primarily in Python (PySpark) and Java, often utilizing frameworks like Spring Boot
Collaboration and communication: Work with cross-functional teams, including senior developers, data engineers, analysts, and business partners, to understand data requirements and ensure seamless integration of solutions
Troubleshooting and optimization: Debug and resolve data processing issues and performance bottlenecks in Spark applications and other big data technologies
Full lifecycle involvement: Participate in the entire software development lifecycle (SDLC), from requirements analysis and design to testing, deployment, and operations, often using Agile/Scrum methodologies
Data integrity and quality: Ensure data quality and integrity throughout the data lifecycle
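For illustration only, the following is a minimal PySpark ETL sketch of the kind of pipeline work described above. The storage paths, the "trades" dataset, and the column names (trade_id, amount, trade_ts) are assumptions made for this example and are not part of the role description.

    # Minimal PySpark ETL sketch: extract raw data, clean it, and load a curated table.
    # Paths, dataset, and columns below are illustrative assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("trades_etl").getOrCreate()

    # Extract: read raw CSV files
    raw = spark.read.option("header", True).csv("s3://example-bucket/raw/trades/")

    # Transform: drop malformed rows, normalize types, derive a business date column
    cleaned = (
        raw.dropna(subset=["trade_id", "amount"])
           .withColumn("amount", F.col("amount").cast("double"))
           .withColumn("trade_date", F.to_date("trade_ts"))
    )

    # Load: write partitioned Parquet output for downstream consumers
    cleaned.write.mode("overwrite").partitionBy("trade_date").parquet("s3://example-bucket/curated/trades/")

    spark.stop()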
Requirements:
4-8 years of relevant experience in the Financial Services industry
Intermediate-level experience in an Applications Development role
Consistently demonstrates clear and concise written and verbal communication
Demonstrated problem-solving and decision-making skills
Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements
Bachelor’s degree/University degree or equivalent experience
Strong understanding of Core Java and Object-Oriented Programming (OOP) concepts
Proficiency in Python, specifically for PySpark development
Hands-on experience or familiarity with Apache Spark (PySpark), Hadoop, and related ecosystem components like Hive and Sqoop
Basic knowledge of SQL and relational databases
Experience writing queries to validate and manipulate data (see the example after this list)
Familiarity with cloud services such as Amazon Web Services (AWS), Azure, or Google Cloud Platform (GCP)
Understanding of version control systems (e.g., Git)
Experience with development and testing tools (e.g., JIRA, Confluence)
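As a rough illustration of the data-validation queries mentioned above, the snippet below runs a uniqueness check with PySpark SQL. The "trades" table and trade_id column are assumptions for the example; it presumes the table is already registered in the Spark session's catalog.

    # Illustrative data-validation check: the "trades" table and trade_id column are assumed.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("data_validation").getOrCreate()

    # Flag any trade_id that appears more than once
    duplicates = spark.sql("""
        SELECT trade_id, COUNT(*) AS occurrences
        FROM trades
        GROUP BY trade_id
        HAVING COUNT(*) > 1
    """)

    # Fail fast if the uniqueness check does not pass
    if duplicates.count() > 0:
        raise ValueError("Duplicate trade_id values found in trades")

    spark.stop()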
Nice to have:
Knowledge of distributed NoSQL databases (e.g., Elasticsearch, Cassandra, MongoDB)
Familiarity with DevOps practices and CI/CD pipelines