We are seeking a highly skilled and motivated Senior Data Engineer to architect and implement scalable ETL and data storage solutions using Microsoft Fabric and the broader Azure technology stack. This role will be pivotal in building a metadata-driven data lake that ingests data from over 100 structured and semi-structured sources, enabling rich insights through canned reports, conversational agents, and analytics dashboards.
Job Responsibilities:
Design and implement ETL pipelines using Microsoft Fabric (Dataflows, Pipelines, Lakehouse, Warehouse, SQL) and Azure Data Factory
Build and maintain a metadata-driven Lakehouse architecture with threaded datasets to support multiple consumption patterns
Develop agent-specific data lakes and an orchestration layer for an uber-agent that can query across agents to answer customer questions
Enable interactive data consumption via Power BI, Azure OpenAI, and other analytics tools
Ensure data quality, lineage, and governance across all ingestion and transformation processes
Collaborate with product teams to understand data needs and deliver scalable solutions
Optimize performance and cost across storage and compute layers
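To illustrate the metadata-driven pattern the responsibilities above describe, here is a minimal, generic sketch: each source is described by a metadata record, and one reusable loop drives ingestion from those records. All names (the registry, the fields, the `ingest` function) are hypothetical, and a real build would use Fabric Dataflows or Azure Data Factory pipelines rather than this in-process loop.

```python
from dataclasses import dataclass

@dataclass
class SourceConfig:
    name: str          # logical source name (hypothetical example values below)
    fmt: str           # "json", "csv", "parquet", ...
    path: str          # landing location
    target_table: str  # Lakehouse destination table

# Metadata registry: in practice this would live in a control table,
# not in code, so onboarding a new source is a metadata change only.
REGISTRY = [
    SourceConfig("crm_contacts", "json", "/landing/crm/contacts", "bronze.crm_contacts"),
    SourceConfig("erp_orders", "parquet", "/landing/erp/orders", "bronze.erp_orders"),
]

def ingest(cfg: SourceConfig) -> str:
    # Placeholder for the real extract/load step (e.g. a Spark read/write).
    return f"loaded {cfg.fmt} from {cfg.path} into {cfg.target_table}"

def run_pipeline(registry):
    # One generic loop handles every source; adding a source means adding
    # a metadata row, not writing a new pipeline.
    return [ingest(cfg) for cfg in registry]

for line in run_pipeline(REGISTRY):
    print(line)
```

The point of the pattern is that the pipeline code stays constant while the registry grows to the 100+ sources the role mentions.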
Requirements:
7-10 years of experience in data engineering with a focus on Microsoft Azure and Fabric technologies
Strong expertise in:
Microsoft Fabric (Lakehouse, Dataflows Gen2, Pipelines, Notebooks)
Azure Data Factory, Azure SQL, Azure Data Lake Storage Gen2
Power BI and/or other visualization tools
Azure Functions, Logic Apps, and orchestration frameworks
SQL, Python, and PySpark/Scala
Experience working with structured and semi-structured data (JSON, XML, CSV, Parquet)
Proven ability to build metadata-driven architectures and reusable components
Strong understanding of data modeling, data governance, and security best practices
Nice to have:
Familiarity with agent-based architectures and conversational AI integration
Experience with Azure OpenAI, Copilot Studio, or similar conversational platforms
Knowledge of CI/CD pipelines and DevOps practices for data engineering
Experience in building multi-tenant data platforms or domain-specific lakes