We are looking for a Data Engineer with experience in Microsoft Fabric and Power BI to design, build, and maintain modern data solutions. This role involves creating automated data pipelines, developing data models, and supporting reporting and analytics across domains such as Student, Finance, HR, Learning, and Advising.
Job Responsibilities:
Design and develop Fabric Pipelines for automated ingestion from APIs and files into OneLake
Implement medallion architecture (Bronze → Silver → Gold) using Fabric Lakehouses and Warehouses, applying Delta Lake best practices, partitioning, and incremental load patterns
Build dimensional (star schema) models in the Gold layer aligned to Student, Finance, HR, Learning, and Advising domains
Develop and maintain the master orchestration framework, including dependency management, retry logic, error handling, and scheduled execution windows
Implement data quality checks, validation rules, and reconciliation logic across layers, and support certification of enterprise KPIs
Configure Row-Level Security (RLS) and Object-Level Security (OLS) in collaboration with the BI and security teams
Integrate with Microsoft Purview for metadata capture, lineage, classification, and business glossary alignment
Support deployment pipelines across Dev → Test → Prod Fabric workspaces, including CI/CD via Git integration
Collaborate with NWTC technical data stewards during sprint ceremonies, UAT, and hypercare
Produce technical documentation: source-to-target mappings, pipeline runbooks, and operational handover materials
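As a rough illustration of the orchestration responsibilities above (dependency management, retry logic, error handling), a minimal Python sketch might look like the following; the task names and structure are hypothetical, not part of any actual NWTC framework:

```python
import time

def run_with_retries(task, max_attempts=3, backoff_seconds=0.0):
    """Run a pipeline task, retrying on failure with linear backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise  # error handling: surface the failure to the scheduler
            time.sleep(backoff_seconds * attempt)

def run_in_dependency_order(tasks, dependencies):
    """Run tasks so that each executes only after its dependencies.

    tasks: dict of name -> callable; dependencies: dict of name -> list
    of prerequisite names. Assumes the dependency graph is acyclic.
    """
    done, order = set(), []

    def visit(name):
        if name in done:
            return
        for dep in dependencies.get(name, []):
            visit(dep)
        run_with_retries(tasks[name])
        done.add(name)
        order.append(name)

    for name in tasks:
        visit(name)
    return order
```

For example, a Bronze → Silver → Gold run would declare `{"silver": ["bronze"], "gold": ["silver"]}` as its dependencies, and each layer's load would execute only after the previous layer succeeded.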
Requirements:
BE/BS/MTech/MS or equivalent work experience
5+ years of overall professional experience
2+ years of experience working with Microsoft Fabric, Azure Synapse, or Azure Data Factory
Practical experience building solutions with OneLake, Lakehouse, Delta Lake, and Fabric Pipelines or Dataflows Gen2
Strong proficiency in SQL and PySpark / Spark SQL
Proven experience implementing medallion architecture and dimensional data modeling
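The dimensional (star schema) modeling mentioned above can be sketched in plain Python; the domain and column names here are illustrative only. Flat source records are split into a dimension table with surrogate keys and a fact table that references those keys:

```python
def build_star_schema(records, dim_column, measure_column):
    """Split flat records into one dimension table and one fact table.

    The dimension assigns a surrogate key to each distinct attribute
    value; each fact row carries that key plus the measure.
    """
    dim_keys = {}   # attribute value -> surrogate key
    dim_rows = []
    fact_rows = []
    for rec in records:
        value = rec[dim_column]
        if value not in dim_keys:
            dim_keys[value] = len(dim_keys) + 1
            dim_rows.append({"key": dim_keys[value], dim_column: value})
        fact_rows.append({f"{dim_column}_key": dim_keys[value],
                          measure_column: rec[measure_column]})
    return dim_rows, fact_rows
```

For instance, Finance-domain transactions with a "department" attribute and an "amount" measure would yield a department dimension and a fact table keyed to it; in the Gold layer the same split would typically be done with SQL or PySpark over Lakehouse tables rather than Python dicts.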