Lead moderately complex initiatives and deliverables within technical domain environments
Contribute to large-scale strategic planning
Design, code, test, debug, and document projects and programs associated with the technology domain, including upgrades and deployments
Review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures
Resolve moderately complex issues and lead a team to meet existing client needs or potential new client needs while leveraging a solid understanding of the function, policies, procedures, or compliance requirements
Collaborate and consult with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals
Lead projects, act as an escalation point, and provide guidance and direction to less experienced staff
Requirements:
4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education
4+ years of hands‑on experience building ETL/ELT pipelines on big‑data platforms such as Apache Spark, Hadoop, and Hive
4+ years of data engineering experience using PySpark/Python, Hadoop ecosystem tools, Hive, and/or Scala
Strong experience (4+ years) with RDBMS and SQL‑based data modeling
3+ years of experience with UNIX/Linux environments and Shell scripting
2+ years of experience leading technical initiatives, mentoring engineers, and providing solution‑level guidance
Solid understanding of data engineering best practices, including performance tuning, data quality, testing, and observability
Experience or strong interest in Generative AI / AI‑driven data solutions, including working with LLMs, AI pipelines, or intelligent analytics use cases
Responsibilities:
Design, develop, and maintain large‑scale data engineering solutions for credit risk data using modern big‑data and distributed computing frameworks
Lead data platform modernization initiatives, including performance optimization, scalability, reliability, and security
Build and optimize ETL/ELT pipelines for structured and semi‑structured data using Spark, Hadoop, and cloud‑based technologies
Develop robust data models, analytics layers, and reporting datasets to support credit risk analysis and regulatory reporting
Apply Generative AI and AI/ML techniques (e.g., LLMs, embeddings, intelligent data enrichment, automated insights, anomaly detection) to enhance risk analytics, data quality, and operational efficiency
Collaborate closely with Credit Risk, Analytics, and Business stakeholders to translate business requirements into technical architectures and solutions
Review code, enforce best practices, and provide technical guidance and mentorship to junior team members
Contribute to architectural decisions, technology selection, and long‑term platform roadmaps
Ensure solutions meet enterprise standards for governance, security, auditability, and regulatory compliance
Nice to have:
Experience with batch processing and scheduling tools such as Autosys
Hands‑on experience with Dremio or similar data virtualization/query acceleration platforms
Experience building data solutions on cloud platforms (AWS, Azure, or GCP), including cloud storage, compute, and orchestration services
Exposure to ML/AI platforms, MLOps concepts, or AI governance frameworks in regulated environments
Experience working in financial services, risk, or regulated data domains