Data Engineer with Azure Fabric

NTT DATA

Location:
Romania, Cluj

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We are looking for a skilled Data Engineer with strong experience in Databricks and the Azure ecosystem to help build and optimize modern data pipelines for our client, one of the UK’s leading energy providers. The role focuses on developing scalable, high‑performance data processing solutions, implementing robust ETL workflows, and enabling advanced analytics across large‑scale operational and customer datasets. A minimum of 3 years in data engineering and a BSc/MSc in Computer Science or a related field are required. Your expertise will support the client’s efforts to modernize its data landscape, enhance real‑time insights, and drive key initiatives in energy distribution, sustainability, and grid innovation within a highly regulated environment.

Job Responsibility:

  • Client Engagement & Delivery
  • Data Pipeline Development (Batch and Streaming)
  • Fabric and Azure Architectures
  • Data Modelling & Optimisation
  • Collaboration & Best Practices
  • Quality, Governance & Security
  • Engagement with client stakeholders up to Head of Data Engineering, Chief Data Architect, and Analytics leadership
  • Delivery of high-performing, scalable, and secure data pipelines aligned to client requirements
  • High client satisfaction and successful adoption of Fabric and Azure based solutions
  • Improvement of data engineering practices
  • Contribution to the growth of the practice through reusable assets, accelerators, and technical leadership

Requirements:

  • 3–8 years in data engineering, data warehousing, or data architecture roles
  • 3+ years working with Fabric
  • BSc/MSc in Computer Science, Data Engineering, or related field
  • Proven experience in data engineering and pipeline development on Fabric, Azure and cloud-native platforms
  • Familiarity with Fabric Workflows and other orchestration tools
  • Proficiency in ETL/ELT tools such as DBT, Matillion, Talend, or equivalent
  • Strong SQL and Python (or equivalent language) skills for data manipulation and automation
  • Proficiency in cloud ecosystems (specifically Azure; AWS and GCP are an advantage) and infrastructure-as-code (e.g., Terraform)
  • Knowledge of data modelling methodologies (star schemas, Data Vault, Kimball, Inmon)
  • Familiarity with medallion architectures, data lakehouse principles and distributed data processing
  • Experience with version control tools (GitHub, Bitbucket) and CI/CD pipelines
  • Understanding of data governance, security, and compliance frameworks
  • Strong consulting values with ability to collaborate effectively in client-facing environments
  • Hands-on expertise across the data lifecycle: ingestion, transformation, modelling, governance, and consumption
  • Strong problem-solving, analytical, and communication skills
  • Experience leading or mentoring teams of engineers to deliver high-quality scalable data solutions

Nice to have:

  • Exposure to AI/ML workloads
  • Experience with AWS and GCP
  • Fabric and Azure certifications

What we offer:
  • Smooth integration and a supportive mentor
  • Pick your working style: choose from Remote, Hybrid or Office work opportunities
  • Projects have different working hours to suit your needs
  • Sponsored certifications, training, and access to top e-learning platforms
  • Private Health Insurance
  • Individual coaching sessions or joining our accredited Coaching School
  • Epic parties or themed events

Additional Information:

Job Posted:
April 16, 2026

Work Type:
Remote work

Similar Jobs for Data Engineer with Azure Fabric

Azure Data Engineer

At LeverX, we have had the privilege of delivering over 1,500 projects for vario...
LeverX
Location: Uzbekistan, Georgia
Salary: Not provided
Expiration Date: Until further notice
Requirements:
  • 5+ years of experience as a Data Engineer with strong expertise in Azure services (e.g., Azure Data Factory, Azure SQL Database, Azure Synapse, Microsoft Fabric, and Azure Cosmos DB)
  • Advanced SQL skills, including complex query development, optimization, and troubleshooting
  • Strong knowledge of indexing, partitioning, and query execution plans to ensure scalability and performance
  • Proven expertise in database modeling, schema design, and normalization/denormalization strategies
  • Ability to design and optimize data architectures to support both transactional and analytical workloads
  • Proficiency in at least one programming language such as Python, C#, or Scala
  • Strong background in cloud-based data storage and processing (e.g., Azure Data Lake, Databricks, or equivalent) and data warehouse platforms (e.g., Snowflake)
  • English B2+
Job Responsibility:
  • Design, develop, and maintain efficient and scalable data architectures and workflows
  • Build and optimize SQL-based solutions for data transformation, extraction, and loading (ETL) processes
  • Collaborate closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver effective solutions
  • Manage and optimize data storage platforms, including databases, data lakes, and data warehouses
  • Troubleshoot and resolve data-related issues, ensuring accuracy, integrity, and performance across all systems
What we offer:
  • Projects in different domains: healthcare, manufacturing, e-commerce, fintech, etc
  • Projects for every taste: Startup products, enterprise solutions, research & development initiatives, and projects at the crossroads of SAP and the latest web technologies
  • Global clients based in Europe and the US, including Fortune 500 companies
  • Employment security: We hire for our team, not just a specific project. If your project ends, we will find you a new one
  • Healthy work atmosphere: On average, our employees stay with the company for 4+ years
  • Market-based compensation and regular performance reviews
  • Internal expert communities and courses
  • Perks to support your growth and well-being

Senior Azure Data Engineer

Seeking a Lead AI DevOps Engineer to oversee design and delivery of advanced AI/...
Lingaro
Location: Poland
Salary: Not provided
Expiration Date: Until further notice
Requirements:
  • At least 6 years of professional experience in the Data & Analytics area
  • 1+ years of experience in (or acting in) a Senior Consultant or above role, with a strong focus on data solutions built in Azure and Databricks/Synapse (MS Fabric is nice to have)
  • Proven experience in Azure cloud-based infrastructure, Databricks, and one SQL implementation (e.g., Oracle, T-SQL, MySQL)
  • Proficiency in programming languages such as SQL, Python, PySpark is essential (R or Scala nice to have)
  • Very good level of communication including ability to convey information clearly and specifically to co-workers and business stakeholders
  • Working experience with agile methodologies and supporting tools (JIRA, Azure DevOps)
  • Experience in leading and managing a team of data engineers, providing guidance, mentorship, and technical support
  • Knowledge of data management principles and best practices, including data governance, data quality, and data integration
  • Good project management skills, with the ability to prioritize tasks, manage timelines, and deliver high-quality results within designated deadlines
  • Excellent problem-solving and analytical skills, with the ability to identify and resolve complex data engineering issues
Job Responsibility:
  • Act as a senior member of the Data Science & AI Competency Center, AI Engineering team, guiding delivery and coordinating workstreams
  • Develop and execute a cloud data strategy aligned with organizational goals
  • Lead data integration efforts, including ETL processes, to ensure seamless data flow
  • Implement security measures and compliance standards in cloud environments
  • Continuously monitor and optimize data solutions for cost-efficiency
  • Establish and enforce data governance and quality standards
  • Leverage Azure services, as well as tools like dbt and Databricks, for efficient data pipelines and analytics solutions
  • Work with cross-functional teams to understand requirements and provide data solutions
  • Maintain comprehensive documentation for data architecture and solutions
  • Mentor junior team members in cloud data architecture best practices
What we offer:
  • Stable employment
  • “Office as an option” model
  • Workation
  • Great Place to Work® certified employer
  • Flexibility regarding working hours and your preferred form of contract
  • Comprehensive online onboarding program with a “Buddy” from day 1
  • Cooperation with top-tier engineers and experts
  • Unlimited access to the Udemy learning platform from day 1
  • Certificate training programs
  • Upskilling support

Azure Data Factory Engineer

Join our vibrant client company team as a Data Engineer! Harness your passion fo...
Robert Half
Location: United States, Des Moines
Salary: 120000.00 USD / Year
Expiration Date: Until further notice
Requirements:
  • Bachelor's degree in IT or a related field
  • Adept at Python, SQL, and Microsoft Fabric tools
  • Experience with Microsoft SQL trinity - Server, Azure SQL, and SSIS
Job Responsibility:
  • Dive into creative collaborations with data scientists, analysts, and stakeholders
  • Bring to life scalable data pipelines using cutting-edge tools: Azure Data Factory, PySpark, and Spark SQL
  • Be the data whisperer integrating diverse data streams into a powerful, unified platform
  • Showcase your governance skills in managing cutting-edge data storage solutions
  • Keep pulse on the heartbeat of industry trends, injecting fresh tech ideas
What we offer:
  • Competitive salary
  • Exceptional benefits
  • Free online training
  • Medical, vision, dental, and life and disability insurance
  • 401(k) plan
  • Full-time

Senior Data Engineer

At ANS, the Senior Data Engineer plays a key role in delivering robust, scalable...
ANS Group
Location: United Kingdom, Manchester
Salary: Not provided
Expiration Date: Until further notice
Requirements:
  • Experience in building and optimising pipelines in Azure Data Factory, Synapse, or Fabric
  • Strong knowledge of Python and SQL
  • Experience in using metadata frameworks in data engineering
  • Experience in best practice data engineering principles including CI/CD via Azure DevOps or Github
  • Understanding of Azure networking and security in relation to the data platform
  • Experience of data governance and regulation, including GDPR, principle of least privilege, classification etc.
  • Experience of lakehouse architecture, data warehousing principles, and data modelling
  • Familiarity with Microsoft Purview in a data platform context
  • Basic knowledge of Azure AI Foundry
Job Responsibility:
  • Build and optimise data pipelines, notebooks, and data flows in Microsoft Fabric and Synapse Analytics, connecting to a variety of on-premises and cloud based data sources
  • Support Data Architects and Cloud Engineers by implementing solutions based on provided designs and offering feedback where needed
  • Collaborate across disciplines to ensure high-quality delivery of data solutions, including working with presales, managed services, and customer teams
  • Mentor Data engineers and support their development through guidance and task distribution
  • Ensure best practice adherence in engineering processes, including CI/CD via Azure DevOps and secure data handling (e.g. Key Vault, private endpoints)
  • Contribute to Agile delivery by participating in standups, user story creation, and sprint planning
  • Document implemented solutions clearly and accurately for internal and customer use
  • Troubleshoot and resolve issues across subscriptions and environments
  • Work closely with the Project Manager (where applicable) to align on delivery timelines, report progress, and manage risks, while also acting as a key point of contact for customer SMEs and engineers to support collaboration and clarify technical requirements
  • Engage in continuous learning through certifications (e.g. DP-600 and/or DP-700, AI-900, AI-102, etc.) and development days
What we offer:
  • 25 days’ holiday, plus you can buy up to 5 more days
  • Birthday off
  • An extra celebration day
  • 5 days’ additional holiday in the year you get married
  • 5 volunteer days
  • Private health insurance
  • Pension contribution match and 4 x life assurance
  • Flexible working and work from anywhere for up to 30 days per year (some exceptions)
  • Maternity: 16 weeks’ full pay, Paternity: 3 weeks’ full pay, Adoption: 16 weeks’ full pay
  • Company social events
  • Full-time

Senior Data Engineer

We are currently looking for a Data Engineer to join our client’s forward-thinki...
Data Idols
Location: United Kingdom, London
Salary: 70000.00 - 75000.00 GBP / Year
Expiration Date: Until further notice
Requirements:
  • Proven experience leading data engineering initiatives in a cloud environment
  • Expertise in Azure Synapse Analytics (SQL pools, Spark, pipelines, serverless SQL)
  • Hands-on experience with Microsoft Fabric (OneLake, Lakehouse, Data Engineering/Data Science workloads)
  • Strong SQL skills (T-SQL, Synapse SQL, Lakehouse SQL)
  • Experience with Python and/or Spark for big data processing
  • Knowledge of cloud data architecture (ADLS Gen2, Delta Lake, Parquet)
  • Understanding of data governance, security, and compliance standards
Job Responsibility:
  • Lead and mentor internal and external data engineering teams to deliver best-in-class cloud solutions
  • Design, build, and optimise enterprise data platforms using Azure Synapse, Microsoft Fabric, OneLake, and lakehouse architectures
  • Modernise traditional ETL/ELT processes and transition workloads to cloud-native tools
  • Develop scalable data pipelines, Spark-based processing, and dataflows for unified analytics
  • Implement governance, security, and compliance standards across all cloud data environments
  • Collaborate with stakeholders to support high-value analytics and Power BI initiatives
  • Produce documentation, technical designs, and support materials to enable continuous improvement
What we offer:
  • Competitive salary
  • Strong benefits package
  • Hybrid working model
  • Opportunity to own and shape a modern cloud data ecosystem
  • Continuous training and professional development in Microsoft Fabric, Azure, and modern data engineering
  • Clear path for career progression within a growing data organisation
  • Supportive, innovative, and people-focused working environment
  • Full-time

Data Engineer

At ANS, the Data Engineer plays a vital role in enabling data-driven decision-ma...
ANS Group
Location: United Kingdom, Manchester
Salary: Not provided
Expiration Date: Until further notice
Requirements:
  • Experience in building and optimising pipelines in Azure Data Factory, Synapse, or Fabric
  • Knowledge of Python and SQL
  • Experience in using metadata frameworks in data engineering
  • Experience in best practice data engineering principles including CI/CD via Azure DevOps or Github
  • Basic understanding of Azure networking and security in relation to the data platform
  • Experience of data governance and regulation, including GDPR, principle of least privilege, classification etc.
  • Experience of Lakehouse architecture, data warehousing principles, and data modelling
Job Responsibility:
  • Deliver high-quality data solutions by building and optimising data pipelines, notebooks, and data flows in Microsoft Fabric and Synapse Analytics, connecting to a variety of on-premises and cloud-based data sources
  • Work closely with Data Architects and Senior Data Engineers to implement technical designs and contribute to solution development
  • Collaborate with customer-side data engineers to ensure smooth integration and alignment with business requirements
  • Focus on task execution and delivery, ensuring timelines and quality standards are met
  • Follow engineering best practices including CI/CD via Azure DevOps, secure data handling using Key Vault and private endpoints, and maintain code quality
  • Participate in Agile ceremonies such as standups, sprint planning, and user story refinement
  • Document solutions clearly for internal use and knowledge sharing
  • Troubleshoot and resolve technical issues across environments and subscriptions
  • Engage in continuous learning through certifications (e.g. DP-600 and/or DP-700, AI-900, AI-102, etc.) and development days
  • Contribute to the Data Engineer Guild by sharing knowledge, participating in discussions, and helping shape engineering standards and practices
What we offer:
  • 25 days’ holiday, plus you can buy up to 5 more days
  • Birthday off
  • Extra celebration day
  • 5 days’ additional holiday in the year you get married
  • 5 volunteer days
  • Private health insurance
  • Pension contribution match and 4 x life assurance
  • Flexible working and work from anywhere for up to 30 days per year (some exceptions)
  • Maternity: 16 weeks’ full pay, Paternity: 3 weeks’ full pay, Adoption: 16 weeks’ full pay
  • Company social events
  • Full-time

Data Engineering & Analytics Lead

Premium Health is seeking a highly skilled, hands-on Data Engineering & Analytic...
Premium Health
Location: United States, Brooklyn
Salary: Not provided
Expiration Date: Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Engineering, or a related field. Master's degree preferred
  • Proven track record and progressively responsible experience in data engineering, data architecture, or related technical roles
  • Healthcare experience preferred
  • Strong knowledge of data engineering principles, data integration, ETL processes, and semantic mapping techniques and best practices
  • Experience implementing data quality management processes, data governance frameworks, cataloging, and master data management concepts
  • Familiarity with healthcare data standards (e.g., HL7, FHIR), health information management principles, and regulatory requirements (e.g., HIPAA)
  • Understanding of healthcare data, including clinical, operational, and financial data models, preferred
  • Advanced proficiency in SQL, data modeling, database design, optimization, and performance tuning
  • Experience designing and integrating data from disparate systems into harmonized data models or semantic layers
  • Hands-on experience with modern cloud-based data platforms (e.g., Azure, AWS, GCP)
Job Responsibility:
  • Collaborate with the CDIO and Director of Technology to define a clear data vision aligned with the organization's goals and execute the enterprise data roadmap
  • Serve as a thought leader for data engineering and analytics, guiding the evolution of our data ecosystem and championing data-driven decision-making across the organization
  • Build and mentor a small data team, providing technical direction and performance feedback, fostering best practices and continuous learning, while remaining a hands-on implementor
  • Define and implement best practices, standards, and processes for data engineering, analytics, and data management across the organization
  • Design, implement, and maintain a scalable, reliable, and high-performing modern data infrastructure, aligned with the organizational needs and industry best practices
  • Architect and maintain data lake/lakehouse, warehouse, and related platform components to support analytics, reporting, and operational use cases
  • Establish and enforce data architecture standards, governance models, naming conventions, and documentation
  • Develop, optimize, and maintain scalable ETL/ELT pipelines and data workflows to collect, transform, normalize, and integrate data from diverse systems
  • Implement robust data quality processes, validation, monitoring, and error-handling frameworks
  • Ensure data is accurate, timely, secure, and ready for self-service analytics and downstream applications
What we offer:
  • Paid Time Off, Medical, Dental and Vision plans, Retirement plans
  • Public Service Loan Forgiveness (PSLF)
  • Full-time

Data Engineer

Factors Group is a leading wellness company committed to leveraging data-driven ...
Natural Factors
Location: Canada, Coquitlam
Salary: 100000.00 - 120000.00 CAD / Year
Expiration Date: Until further notice
Requirements:
  • Bachelor's or Master's degree in Computer Science or a related field
  • 5+ years of experience in data engineering or a similar role, preferably in a fast-paced, data-intensive environment
  • Proficiency in programming languages such as Python, with experience in building data pipelines using frameworks like Apache Spark
  • Strong SQL skills
  • Hands-on experience with cloud platforms such as Azure or Fabric, and related services like Databricks, Snowflake, Azure Data Factory, Fabric Data Factory
  • Familiarity with containerization and orchestration tools like Docker and Kubernetes
  • Excellent problem-solving skills and the ability to work independently and collaboratively in a dynamic team environment
  • Strong communication and interpersonal skills, with the ability to effectively convey technical concepts to non-technical stakeholders
Job Responsibility:
  • Design, build, and maintain robust, scalable data pipelines and ETL processes to ingest, transform, and store structured and unstructured data from various sources
  • Optimize data workflows for performance, reliability, and cost-effectiveness, utilizing best practices in data engineering and cloud computing
  • Collaborate with Data Analysts to understand data requirements and implement solutions to enable advanced analytics and machine learning
  • Develop and maintain data models, schemas, and metadata to ensure data integrity, consistency, and quality
  • Implement monitoring, logging, and alerting systems to proactively identify and address data pipeline issues and performance bottlenecks
  • Stay current with emerging technologies and trends in data engineering, recommending and implementing improvements to enhance our data infrastructure and processes
  • Provide technical guidance and mentorship to junior team members, fostering a culture of collaboration, learning, and innovation
What we offer:
  • Great healthcare benefits (including health and personal spending accounts)
  • Company provided RRSP
  • Vacation and wellness days - 5 weeks in total
  • Employee appreciation events and wellness sessions
  • $300 per year to spend on company products plus discounts on our products
  • Free onsite parking and bus pass reimbursement
  • Discounts on local and sustainable companies
  • Tuition reimbursement
  • Full-time