We are seeking a Data Engineer Tech Lead to join our capital markets data engineering teams, focusing on designing, building, and maintaining scalable data infrastructure on the Databricks platform. This role requires deep technical expertise in modern data stack technologies and the ability to work with complex financial data systems.
Job Responsibilities:
Design and implement robust, scalable data pipelines using Databricks, Apache Spark, Delta Lake, and BigQuery (a minimal pipeline sketch appears after this list)
Use SQL and Python to develop, scale, and optimize advanced data pipelines
Build and optimize ETL/ELT processes for capital markets data
Develop real-time and batch processing solutions to support trading and risk management operations
Implement data quality monitoring, validation, and alerting systems
Configure and optimize Databricks workspaces, clusters, and job scheduling
Work in a multi-cloud environment including Azure, GCP, and AWS
Implement security best practices including access controls, encryption, and audit logging
Build integrations with market data vendors, trading systems, and risk management platforms
Establish monitoring and performance tuning for data pipeline health and efficiency
Collaborate with stakeholders across the company and support requests for business insights
Work closely with quantitative researchers, risk analysts, and product teams to understand data requirements
Collaborate with other data engineering teams and infrastructure groups
Provide technical guidance to junior engineers and contribute to code reviews
Participate in architecture discussions and technology selection decisions
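
To give a concrete flavor of the batch side of this work, here is a minimal sketch of the kind of pipeline the role involves, assuming PySpark on Databricks with Delta Lake; the source path, table name, and column names (trade_id, notional, executed_at) are hypothetical placeholders, not part of our actual stack.

# Minimal sketch of a batch pipeline on Databricks with PySpark and Delta Lake.
# The source path, table name, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("trades-daily-batch").getOrCreate()

# Ingest raw trade records from a (hypothetical) landing zone.
raw = spark.read.format("json").load("/mnt/raw/trades/")

# Basic cleansing: de-duplicate, drop invalid rows, derive a partition column.
cleaned = (
    raw.dropDuplicates(["trade_id"])
    .filter(F.col("notional") > 0)
    .withColumn("trade_date", F.to_date("executed_at"))
)

# Persist to a Delta table, partitioned by trade date, for downstream consumers.
(
    cleaned.write.format("delta")
    .mode("overwrite")
    .partitionBy("trade_date")
    .saveAsTable("capital_markets.trades_cleaned")
)
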
Requirements:
Proven data engineering experience with strong expertise in Apache Spark and distributed computing
Strong programming skills in Python, including data-handling libraries (pandas, NumPy, etc.), and experience with data modeling
Proficient in Databricks platform and Delta Lake for data lake architecture
Advanced SQL skills, including writing complex analytical queries (a brief query sketch follows this list)
Experience with cloud-based databases (Google BigQuery / Snowflake)
Experience with both relational and NoSQL databases
Bachelor's degree in Computer Science, Engineering, or a related field
Experience integrating multiple data sources and working with a variety of database technologies
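
As an indication of the SQL proficiency expected, below is a minimal sketch of an analytical query run from Python against BigQuery; the project, dataset, table, and column names are hypothetical, and the snippet assumes the google-cloud-bigquery client library with application-default credentials.

# Minimal sketch: an analytical window-function query against BigQuery from Python.
# Project, dataset, table, and column names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()

# Rank each counterparty's trades by notional within each trading day.
query = """
SELECT
  trade_date,
  counterparty,
  notional,
  RANK() OVER (
    PARTITION BY trade_date, counterparty
    ORDER BY notional DESC
  ) AS notional_rank
FROM `my-project.capital_markets.trades_cleaned`
WHERE trade_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
"""

for row in client.query(query).result():
    print(row.trade_date, row.counterparty, row.notional_rank)
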
Nice to have:
Experience with Azure cloud platform and associated data services (Data Factory, Event Hubs, Storage)
Experience with managed Kubernetes services (Amazon EKS / Azure AKS)
Knowledge of data streaming platforms (Kafka, Azure Event Hubs) for real-time processing (see the streaming sketch below)
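
For the real-time processing mentioned in the responsibilities and above, the following is a minimal sketch of a Spark Structured Streaming job reading from Kafka into Delta Lake; the broker address, topic, message schema, and storage paths are all hypothetical.

# Minimal sketch: real-time ingestion with Spark Structured Streaming from Kafka
# into a Delta table. Broker, topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("quotes-stream").getOrCreate()

# Expected JSON payload of each Kafka message (hypothetical schema).
schema = StructType([
    StructField("symbol", StringType()),
    StructField("price", DoubleType()),
    StructField("ts", StringType()),
])

# Subscribe to a (hypothetical) market-data topic.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "market.quotes")
    .load()
)

# Parse the JSON value column and append it to a Delta table with checkpointing.
parsed = stream.select(
    F.from_json(F.col("value").cast("string"), schema).alias("q")
).select("q.*")

query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/quotes")
    .outputMode("append")
    .start("/mnt/delta/quotes")
)
# In a standalone job, call query.awaitTermination() to keep the stream running.
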