Azure Data Factory Developer

Think360.ai

Location:

Contract Type:
Not provided

Salary:
Not provided

Job Description:

This role is responsible for building data orchestration with Azure Data Factory pipelines and dataflows. The key responsibility is to understand the business requirements and implement them using Azure Data Factory.
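
For illustration only (not part of the original posting): orchestration work of this kind is often automated with the Azure Python SDK. The sketch below triggers an existing Data Factory pipeline run and polls its status using azure-identity and azure-mgmt-datafactory; the subscription, resource group, factory, pipeline name, and parameter are placeholders, not details taken from this job.

```python
# Minimal sketch: trigger an Azure Data Factory pipeline run and poll its status.
# Assumes azure-identity and azure-mgmt-datafactory are installed; all resource
# names below are placeholders, not values from this posting.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
FACTORY_NAME = "<data-factory-name>"    # placeholder
PIPELINE_NAME = "<pipeline-name>"       # placeholder

credential = DefaultAzureCredential()
adf_client = DataFactoryManagementClient(credential, SUBSCRIPTION_ID)

# Start the pipeline, optionally passing runtime parameters (hypothetical example).
run_response = adf_client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"load_date": "2026-01-02"},
)

# Poll until the run reaches a terminal state (Succeeded, Failed, Cancelled).
while True:
    run = adf_client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run_response.run_id)
    if run.status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print(f"Pipeline run {run.run_id} finished with status: {run.status}")
```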

Job Responsibility:

  • Understand business requirements and actively provide inputs from a data perspective
  • Understand the underlying data and flow of data
  • Build simple to complex pipelines & dataflows
  • Work with other Azure stack modules like Azure Data Lakes, SQL DW
  • Should be able to implement modules that have security and authorization frameworks
  • Recognize and adapt to the changes in processes as the project evolves in size and function

Requirements:

  • Bachelor's or master's degree in Computer Science or Data Engineering
  • At least 7-8 years of software development experience
  • At least 1 year of experience with Azure Data Factory
  • Experience working with at least 1 project on Azure Data Factory
  • Expert-level knowledge of Azure Data Factory
  • Expert-level knowledge of SQL DB & Data Warehouse
  • Should know at least one programming language
  • Should be able to analyze and understand complex data
  • Knowledge of Azure data lake is required
  • Excellent interpersonal/communication skills (both oral/written) with the ability to communicate at various levels with clarity & precision

Nice to have:

Knowledge of other Azure services such as Analysis Services and SQL Databases will be an added advantage

Additional Information:

Job Posted:
January 02, 2026

Employment Type:
Fulltime

Similar Jobs for Azure Data Factory Developer

Data Engineer (Azure)

Fyld is a Portuguese consulting company specializing in IT services. We bring hi...
Location: Portugal, Lisboa
Salary: Not provided
Fyld
Expiration Date: Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Software Engineering, Data Engineering, or related
  • Relevant certifications in Azure, such as Microsoft Certified: Azure Data Engineer Associate or Microsoft Certified: Azure Solutions Architect Expert
  • Hands-on experience with Azure services, especially those related to data engineering and analytics, such as Azure SQL Database, Azure Data Lake, Azure Synapse Analytics, Azure Databricks, Azure Data Factory, among others
  • Familiarity with Azure storage and compute services, including Azure Blob Storage, Azure SQL Data Warehouse, Azure HDInsight, and Azure Functions
  • Proficiency in programming languages such as Python, SQL, or C# for developing data pipelines, data processing, and automation
  • Knowledge of data manipulation and transformation techniques using tools like Azure Databricks or Apache Spark
  • Experience in data modeling, data cleansing, and data transformation for analytics and reporting purposes
  • Understanding of data architecture principles and best practices, including data lake architectures, data warehousing, and ETL/ELT processes
  • Knowledge of security and compliance features offered by Azure, including data encryption, role-based access control (RBAC), and Azure Security Center
  • Excellent communication skills, both verbal and written, to collaborate effectively with technical and non-technical teams
Employment Type: Fulltime

Integration Developer / Data Engineer

We are looking for an experienced Integration Developer / Data Engineer to desig...
Location: Poland, Wroclaw
Salary: Not provided
Eviden
Expiration Date: Until further notice
Requirements:
  • Proven experience with MS SQL (T-SQL, query optimization)
  • Knowledge of Microsoft Dataverse and Dynamics 365 Sales CRM
  • Hands-on experience with Azure Data Factory
  • Strong understanding of ETL process design
  • English – professional working proficiency
Job Responsibility:
  • Design and implement integration processes (MS SQL → Dataverse / D365 Sales CRM)
  • Develop and optimize pipelines in Azure Data Factory
  • Analyze business and technical requirements
  • Monitor and maintain ETL processes
  • Collaborate with development and analytics teams
Employment Type: Parttime

Azure Data Engineer

At LeverX, we have had the privilege of delivering over 1,500 projects for vario...
Location: Uzbekistan, Georgia
Salary: Not provided
LeverX
Expiration Date: Until further notice
Requirements:
  • 5+ years of experience as a Data Engineer with strong expertise in Azure services (e.g., Azure Data Factory, Azure SQL Database, Azure Synapse, Microsoft Fabric, and Azure Cosmos DB)
  • Advanced SQL skills, including complex query development, optimization, and troubleshooting
  • Strong knowledge of indexing, partitioning, and query execution plans to ensure scalability and performance
  • Proven expertise in database modeling, schema design, and normalization/denormalization strategies
  • Ability to design and optimize data architectures to support both transactional and analytical workloads
  • Proficiency in at least one programming language such as Python, C#, or Scala
  • Strong background in cloud-based data storage and processing (e.g., Azure Data Lake, Databricks, or equivalent) and data warehouse platforms (e.g., Snowflake)
  • English B2+
Job Responsibility:
  • Design, develop, and maintain efficient and scalable data architectures and workflows
  • Build and optimize SQL-based solutions for data transformation, extraction, and loading (ETL) processes
  • Collaborate closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver effective solutions
  • Manage and optimize data storage platforms, including databases, data lakes, and data warehouses
  • Troubleshoot and resolve data-related issues, ensuring accuracy, integrity, and performance across all systems
What we offer:
  • Projects in different domains: healthcare, manufacturing, e-commerce, fintech, etc
  • Projects for every taste: Startup products, enterprise solutions, research & development initiatives, and projects at the crossroads of SAP and the latest web technologies
  • Global clients based in Europe and the US, including Fortune 500 companies
  • Employment security: We hire for our team, not just a specific project. If your project ends, we will find you a new one
  • Healthy work atmosphere: On average, our employees stay with the company for 4+ years
  • Market-based compensation and regular performance reviews
  • Internal expert communities and courses
  • Perks to support your growth and well-being

Data Science Developer

Data Science Developer - Deliverables. The candidates will perform Data Science ...
Location: Canada, Toronto
Salary: Not provided
NSPR IT Services & Solutions
Expiration Date: Until further notice
Requirements:
  • Experience with Azure Storage, Azure Data Lake, Azure Databricks Lakehouse, and Azure Synapse structures
  • Experience with Python, Databricks and Azure Data Factory
  • Experience with Power BI reports and dashboards
Job Responsibility:
  • Creating, enhancing, maintaining, and supporting structures for storage of data in formats that are suitable for consumption in analytics solutions
  • Automation of data pipelines used to ingest, prepare, transform, and model data for use in analytics products
  • Creating, enhancing, maintaining, and supporting dashboards and reports
  • Creating, enhancing, maintaining, and supporting analytics environments and implementing new technology to improve performance, simplify architecture patterns, and reduce cloud hosting costs
  • Knowledge transfer sessions and documentation for technical staff related to architecting, designing, and implementing continuous improvement enhancements to analytics solutions

Azure Data Engineer

Join AlgebraIT, a premier IT consulting firm in Austin, Texas! We are looking fo...
Location: United States, Austin
Salary: Not provided
AlgebraIT
Expiration Date: Until further notice
Requirements:
  • Minimum of 3 years of relevant experience
  • Bachelor’s degree in Computer Science or a related field
  • 3+ years of experience in data engineering
  • Proficiency in Azure services (e.g., Data Lake, Databricks, Synapse)
  • Strong knowledge of Python, SQL, and data modeling
  • Excellent problem-solving skills and attention to detail
Job Responsibility:
  • Design, develop, and maintain data pipelines using Azure Data Factory and other Azure services
  • Collaborate with data scientists and analysts to support data initiatives
  • Ensure the reliability, availability, and performance of the data infrastructure
  • Optimize data flow and pipeline architecture
  • Implement best practices for data security and governance
  • Monitor data pipelines and troubleshoot issues as needed
  • Develop and maintain data integration solutions
  • Document data processes and workflows
  • Stay up-to-date with Azure technology advancements
  • Provide support for data-related technical issues
Employment Type: Fulltime

Senior Data Engineer

We are seeking a highly skilled and motivated Senior Data Engineer/s to architec...
Location: India, Hyderabad
Salary: Not provided
Tech Mahindra
Expiration Date: January 30, 2026
Requirements:
  • 7-10 years of experience in data engineering with a focus on Microsoft Azure and Fabric technologies
  • Strong expertise in:
    • Microsoft Fabric (Lakehouse, Dataflows Gen2, Pipelines, Notebooks)
    • Azure Data Factory, Azure SQL, Azure Data Lake Storage Gen2
    • Power BI and/or other visualization tools
    • Azure Functions, Logic Apps, and orchestration frameworks
    • SQL, Python, and PySpark/Scala
  • Experience working with structured and semi-structured data (JSON, XML, CSV, Parquet)
  • Proven ability to build metadata driven architectures and reusable components
  • Strong understanding of data modeling, data governance, and security best practices
Job Responsibility:
  • Design and implement ETL pipelines using Microsoft Fabric (Dataflows, Pipelines, Lakehouse, Warehouse, SQL) and Azure Data Factory
  • Build and maintain a metadata-driven Lakehouse architecture with threaded datasets to support multiple consumption patterns
  • Develop agent-specific data lakes and an orchestration layer for an uber-agent that can query across agents to answer customer questions
  • Enable interactive data consumption via Power BI, Azure OpenAI, and other analytics tools
  • Ensure data quality, lineage, and governance across all ingestion and transformation processes
  • Collaborate with product teams to understand data needs and deliver scalable solutions
  • Optimize performance and cost across storage and compute layers

Azure Data Engineer

As an Azure Data Engineer, you will be expected to design, implement, and manage...
Location: India, Hyderabad / Bangalore
Salary: Not provided
Quadrant Technologies
Expiration Date: Until further notice
Requirements:
  • Bachelor’s degree in Computer Science, Information Technology, or a related field
  • Hands-on experience in writing complex T-SQL queries and stored procedures
  • Good experience in data integration and database development
  • Proficiency in T-SQL and Spark SQL/PySpark (Synapse/Databricks)
  • Extensive experience with Azure Data Factory
  • Excellent problem-solving skills and attention to detail
  • 5–8 years of experience
  • Proven track record of writing complex SQL stored procedures and implementing OLTP database solutions (using Microsoft SQL Server)
  • Experience with Azure Synapse / PySpark / Azure Databricks for big data processing
  • Expertise in T-SQL, Dynamic SQL, Spark SQL, and ability to write complex stored procedures
Job Responsibility:
  • Collaborate with cross-functional teams to gather, analyze, and document business requirements for data integration projects
  • Write complex stored procedures to support data transformation and to implement business validation logic
  • Develop and maintain robust data pipelines using Azure Data Factory ensuring seamless data flow between systems
  • Work closely with the team to ensure data quality, integrity, and accuracy across all systems
  • Contribute to the enhancement and optimization of OLTP systems
Employment Type: Fulltime

Azure Data Engineer

Experience: 3-6+ Years Location: Noida/Gurugram/Remote Skills: PYTHON, PYSPARK...
Location: India, Noida; Gurugram
Salary: Not provided
NexGen Tech Solutions
Expiration Date: Until further notice
Requirements:
  • 3-6+ Years experience
  • PYTHON
  • PYSPARK
  • SQL
  • AZURE DATA FACTORY
  • DATABRICKS
  • DATA LAKE
  • AZURE FUNCTION
  • DATA PIPELINE
Job Responsibility:
  • Design and engineer cloud/big data solutions; develop a modern data analytics lake
  • Develop & maintain data pipelines for batch & stream processing using modern cloud or open-source ETL/ELT tools (a minimal PySpark sketch follows this listing)
  • Liaise with business team and technical leads, gather requirements, identify data sources, identify data quality issues, design target data structures, develop pipelines and data processing routines, perform unit testing and support UAT
  • Implement continuous integration, continuous deployment, DevOps practice
  • Create, document, and manage data guidelines, governance, and lineage metrics
  • Technically lead, design and develop distributed, high-throughput, low-latency, highly available data processing and data systems
  • Build monitoring tools for server-side components
  • Work cohesively in an India-wide distributed team
  • Identify, design, and implement internal process improvements and tools to automate data processing and ensure data integrity while meeting data security standards
  • Build tools for better discovery and consumption of data for various consumption models in the organization – DataMarts, Warehouses, APIs, Ad Hoc Data explorations
Employment Type: Fulltime
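
Illustrative sketch only, not part of the NexGen posting: a minimal PySpark batch transformation of the kind the listing above describes, reading raw files from a data lake, applying basic cleansing, and writing curated Parquet. The storage account, container names, and column names are assumptions made up for the example.

```python
# Minimal PySpark batch sketch: read raw CSV from a data lake, clean it, and
# write curated Parquet. Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales-batch-ingest").getOrCreate()

# Read raw delimited files from a hypothetical "raw" container.
raw = (
    spark.read.option("header", "true")
    .csv("abfss://raw@<storage-account>.dfs.core.windows.net/sales/")
)

# Basic cleansing: de-duplicate on a business key, normalize the date column,
# and drop rows that fail a simple quality check.
cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .filter(F.col("amount").isNotNull())
)

# Write curated output partitioned by date to a hypothetical "curated" container.
(
    cleaned.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("abfss://curated@<storage-account>.dfs.core.windows.net/sales/")
)
```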