We are seeking a highly skilled Databricks Engineer – Azure Fabric with 6–8 years of experience in data engineering to design, build, and maintain scalable data platforms on Microsoft Fabric and Azure. The ideal candidate will have strong hands‑on experience with Python, SQL, Microsoft Fabric (OneLake, Lakehouse, Data Factory), and Delta Lake, along with an ownership mindset to deliver regulatory‑grade, enterprise data solutions. This role involves close collaboration with global engineering, data, compliance, and business teams and supports advanced analytics and AI‑enabled data products.
Job Responsibilities:
Design, build, and maintain scalable, distributed, and fault‑tolerant data pipelines on Microsoft Fabric
Develop lakehouse architectures using OneLake and Delta Lake, including incremental merge workflows and Change Data Feed
Build pipelines to ingest, normalize, transform, and publish large volumes of financial market data
Design and implement bitemporal data models (valid‑time and system‑time) for regulatory‑grade time‑series datasets
Participate in cross‑functional discussions with engineering, compliance, research, and business stakeholders globally
Build and maintain testing frameworks (unit, regression, UAT) for data pipelines and transformations
Own end‑to‑end delivery of solutions, including ingestion pipelines, QA processes, correction handling, and audit trails
Collaborate on shared platform services and reusable components instead of siloed implementations
Apply business understanding of financial reference data (equities and other asset classes)
Support AI enablement use cases such as AI‑assisted ingestion, anomaly detection, and semantic search over lakehouse data
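The bitemporal modeling responsibility above (valid‑time plus system‑time) can be sketched in plain Python. This is a hypothetical minimal example, not tied to Fabric or any specific engine; the record and field names are illustrative only:

```python
from dataclasses import dataclass
from datetime import date, datetime
from typing import Optional

# Hypothetical bitemporal record: valid time tracks when a fact is true
# in the real world; system time tracks when the platform recorded it.
@dataclass
class PriceRecord:
    instrument: str
    price: float
    valid_from: date                         # valid-time start (inclusive)
    valid_to: date                           # valid-time end (exclusive)
    recorded_at: datetime                    # system-time start
    superseded_at: Optional[datetime] = None # system-time end (None = current)

def as_of(records, instrument, valid_on, system_at):
    """Return the price believed at `system_at` for the fact valid on `valid_on`."""
    for r in records:
        if (r.instrument == instrument
                and r.valid_from <= valid_on < r.valid_to
                and r.recorded_at <= system_at
                and (r.superseded_at is None or system_at < r.superseded_at)):
            return r.price
    return None

# A correction: the price for 2024-01-02 was first recorded as 101.0 and
# later corrected to 101.5; the old row is superseded, never deleted, so
# the audit trail of "what we believed when" is preserved.
records = [
    PriceRecord("AAPL", 101.0, date(2024, 1, 2), date(2024, 1, 3),
                datetime(2024, 1, 2, 18, 0), datetime(2024, 1, 5, 9, 0)),
    PriceRecord("AAPL", 101.5, date(2024, 1, 2), date(2024, 1, 3),
                datetime(2024, 1, 5, 9, 0)),
]

# Before the correction we see 101.0; after it, 101.5.
print(as_of(records, "AAPL", date(2024, 1, 2), datetime(2024, 1, 3, 12, 0)))  # 101.0
print(as_of(records, "AAPL", date(2024, 1, 2), datetime(2024, 1, 6, 12, 0)))  # 101.5
```

Superseding rows rather than overwriting them is what makes corrections reproducible for regulators: any historical report can be regenerated exactly as it was originally produced.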
Requirements:
6–8 years of experience in data engineering
Strong proficiency in Python for data pipelines, transformations, and automation
Advanced SQL skills including window functions, partitioning, and time‑series query patterns
Hands‑on experience with Microsoft Fabric: OneLake, Fabric Data Factory pipelines, Fabric Lakehouse, Fabric Warehouse (SQL endpoint)
Strong working knowledge of Delta Lake: table creation and management, incremental merges, Z‑ordering, Change Data Feed (CDF)
Experience using AI‑assisted development tools (e.g., GitHub Copilot, Cursor)
Proficient with Git for code versioning, branching strategies, and pull‑request workflows
Experience working with REST APIs for data ingestion and system integration
Familiarity with Azure services such as Azure Data Factory, Azure SQL, Azure Key Vault, and RBAC
Strong ownership, problem‑solving, and collaboration skills
Nice to have:
Experience with pandas, PySpark, or similar data processing libraries
Knowledge of columnar storage and time‑series analytics (e.g., ClickHouse or equivalent)
Familiarity with Microsoft Purview for data lineage, cataloging, and data classification
Understanding of bitemporal modeling for financial and regulatory datasets
Knowledge of financial reference data: equities, identifiers, corporate actions, index data
Exposure to CI/CD pipelines and automated data platform deployments
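Several items in this posting (incremental merges, Change Data Feed, correction handling with audit trails) share one core idea: apply a batch of keyed changes to a target table while logging what changed. A minimal, engine‑agnostic sketch in plain Python, with hypothetical names, in the spirit of a Delta MERGE plus change feed:

```python
# Minimal MERGE-style upsert: apply a change batch to a keyed target table
# and record a change feed of (operation, key, before, after) for auditing.
def merge(target, changes):
    """target: dict key -> row; changes: list of (op, key, row) tuples."""
    feed = []
    for op, key, row in changes:
        before = target.get(key)
        if op == "delete":
            if key in target:
                del target[key]
                feed.append(("delete", key, before, None))
        else:  # upsert: insert if absent, update if the row actually changed
            if before != row:
                target[key] = row
                feed.append(("update" if before else "insert", key, before, row))
    return feed

target = {"AAPL": {"price": 185.0}}
feed = merge(target, [
    ("upsert", "AAPL", {"price": 184.0}),  # update of an existing key
    ("upsert", "MSFT", {"price": 370.0}),  # insert of a new key
    ("delete", "TSLA", None),              # no-op: key not present
])
print(target)
print(feed)
```

In a real Fabric/Delta pipeline this logic is expressed declaratively (MERGE INTO plus CDF), but the before/after bookkeeping shown here is what makes downstream corrections and audit trails possible.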