We are seeking an experienced Python Data Engineer to design, develop, and deliver scalable data solutions within the banking and financial services (BFSI) domain. The ideal candidate is a hands-on developer with strong expertise in Python-based data workflows, cloud technologies, and distributed data processing frameworks, combined with the analytical skills to work end to end on complex data engineering projects.
Job Responsibilities:
Develop and maintain scalable data workflows and automation using Python
Build and optimize large-scale data processing pipelines using PySpark (see the sketch after this list)
Perform detailed data analysis and validation using Pandas
Work with Delta Lake for handling structured and semi‑structured datasets
Write efficient SQL queries and perform operations on Delta tables
Leverage Azure Cloud compute and storage services for data engineering workloads
Use Azure Data Factory (ADF) or Apache Airflow to orchestrate data pipelines and workflows
Collaborate with cross-functional teams to design end‑to‑end data solutions
Ensure data quality, performance, and reliability across all pipelines
Troubleshoot, optimize, and enhance existing data workflows
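For illustration, here is a minimal sketch of the kind of pipeline these responsibilities describe: PySpark transformations written to a Delta table, SQL run against it, and a quick Pandas validation pass. It assumes a local Spark session with the delta-spark package installed; the paths, table name (daily_totals), and columns (txn_date, amount) are hypothetical stand-ins, not taken from this posting.

# Minimal sketch: PySpark -> Delta Lake -> SQL -> Pandas validation.
# Assumes delta-spark is installed; all names and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F
from delta import configure_spark_with_delta_pip

builder = (
    SparkSession.builder.appName("txn-pipeline")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Ingest raw CSV files and build a simple daily aggregate.
raw = spark.read.csv("/data/raw/transactions", header=True, inferSchema=True)
daily = (
    raw.filter(F.col("amount") > 0)
       .groupBy("txn_date")
       .agg(F.sum("amount").alias("total_amount"))
)

# Write the result as a Delta table and query it with SQL.
daily.write.format("delta").mode("overwrite").save("/data/delta/daily_totals")
spark.sql("CREATE TABLE IF NOT EXISTS daily_totals "
          "USING DELTA LOCATION '/data/delta/daily_totals'")
spark.sql("SELECT * FROM daily_totals ORDER BY txn_date DESC LIMIT 10").show()

# Pull a sample into Pandas for detailed validation.
sample = daily.limit(1000).toPandas()
assert (sample["total_amount"] > 0).all()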
Requirements:
Python (3–5 years): Strong experience in data workflows, automation, and data manipulation
PySpark: Hands-on experience in distributed large-scale data processing
Pandas: Strong capabilities in data analysis and validation
Delta Lake: Experience managing large structured and semi-structured datasets
SQL: Strong querying skills and experience working with Delta tables
Azure Cloud: Compute, storage, and basic cloud data services
Orchestration Tools: Experience with Azure Data Factory (ADF) or Apache Airflow (a minimal DAG sketch follows this list)
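As a minimal illustration of the orchestration requirement, the sketch below defines a two-task Airflow DAG. The DAG id, schedule, and callables are hypothetical, and the schedule argument assumes Airflow 2.4 or later (older versions use schedule_interval).

# Minimal Airflow DAG sketch; ids, schedule, and callables are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files into the landing zone")

def transform():
    print("run the PySpark job that writes the Delta table")

with DAG(
    dag_id="daily_txn_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task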
Nice to have:
Experience with Azure Databricks
Knowledge of data modeling and warehouse concepts (dimensional modeling, star schema)
Understanding of data governance and data quality frameworks
Experience with Git and CI/CD for data pipeline deployment
Exposure to streaming technologies like Kafka or Event Hub (see the sketch at the end of this posting)
Understanding of DevOps practices for data engineering
Experience with data processes in the BFSI domain
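To illustrate the streaming exposure mentioned above, here is a minimal Structured Streaming sketch that reads from Kafka and appends to a Delta table. The broker address, topic, and paths are hypothetical, and running it assumes the spark-sql-kafka connector and delta-spark packages are on the Spark classpath.

# Minimal streaming sketch: Kafka -> Delta via Structured Streaming.
# Assumes spark-sql-kafka and delta-spark are available; names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("txn-stream").getOrCreate()

# Read raw events from a Kafka topic and decode the message payload.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "transactions")
    .load()
    .selectExpr("CAST(value AS STRING) AS payload")
)

# Append the stream to a Delta table with checkpointing for recovery.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/data/checkpoints/txn-stream")
    .outputMode("append")
    .start("/data/delta/raw_events")
)
query.awaitTermination()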