
ADF Data Engineer


Randstad

Location:
India, Punewadi


Contract Type:
Not provided

Salary:
Not provided

Job Description:

Role: Data Engineer
Skills: Snowflake, Apache Airflow, dbt, Spark/PySpark, SQL, and data engineering concepts
Experience: 4-6 years
Duration: 3 months

Job Responsibility:

  • Design, build, test and operationalize scalable data pipelines and cloud-native data platforms leveraging Snowflake, Apache Airflow, dbt, and Spark/PySpark
  • Build scalable data processing frameworks using Spark / PySpark for large-volume structured datasets
  • Design, implement, and optimize cloud data warehouse solutions on Snowflake
  • Develop modular, testable transformations using dbt, implementing reusable models, snapshots, and data tests
  • Perform robust testing across multiple layers of the data processing pipeline
  • Enable CI/CD for data pipelines integrating Git and deployment workflows

Requirements:

  • Snowflake
  • Apache Airflow
  • dbt
  • Spark/PySpark, SQL
  • Data engineering concepts

Additional Information:

Job Posted:
April 29, 2026

Expiration:
May 25, 2026

Employment Type:
Full-time
Work Type:
On-site work

Similar Jobs for ADF Data Engineer

Lead Data Engineer

Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company. A leader i...
Location: India, Gurugram
Salary: Not provided
Circle K
Expiration Date: Until further notice

Requirements:
  • Bachelor’s or master’s degree in computer science, Engineering, or related field
  • 7-9 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark
  • Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse
  • Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices
  • Solid grasp of data governance, metadata tagging, and role-based access control
  • Proven ability to mentor and grow engineers in a matrixed or global environment
  • Strong verbal and written communication skills, with the ability to operate cross-functionally
  • Certifications in Azure, Databricks, or Snowflake are a plus
  • Strong Knowledge of Data Engineering concepts (Data pipelines creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management)
  • Working Knowledge of Dev-Ops processes (CI/CD), Git/Jenkins version control tool, Master Data Management (MDM) and Data Quality tools
Job Responsibility:
  • Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms
  • Lead the technical execution of non-domain specific initiatives (e.g. reusable dimensions, TLOG standardization, enablement pipelines)
  • Architect data models and re-usable layers consumed by multiple downstream pods
  • Guide platform-wide patterns like parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks
  • Mentor and coach the team
  • Partner with product and platform leaders to ensure engineering consistency and delivery excellence
  • Act as an L3 escalation point for operational data issues impacting foundational pipelines
  • Own engineering best practices, sprint planning, and quality across the Enablement pod
  • Contribute to platform discussions and architectural decisions across regions
Employment Type: Full-time

Data Engineering Architect

Data engineering involves the development of solutions for the collection, trans...
Location: India
Salary: Not provided
Lingaro
Expiration Date: Until further notice

Requirements:
  • 10+ years’ experience in the Data & Analytics area
  • 4+ years’ experience in Data Engineering Architecture
  • Proficiency in Python, PySpark, SQL
  • Strong expertise in Azure cloud services such as: ADF, Databricks, PySpark, Logic Apps
  • Strong understanding of data engineering concepts, including data modeling, ETL processes, data pipelines, and data governance
  • Expertise in designing and implementing scalable and efficient data processing frameworks
  • In-depth knowledge of various data technologies and tools, such as relational databases, NoSQL databases, data lakes, data warehouses, and big data frameworks (e.g., Hadoop, Spark)
  • Experience in selecting and integrating appropriate technologies to meet business requirements and long-term data strategy
  • Ability to work closely with stakeholders to understand business needs and translate them into data engineering solutions
  • Strong analytical and problem-solving skills, with the ability to identify and address complex data engineering challenges
Job Responsibility:
  • Collaborate with stakeholders to understand business requirements and translate them into data engineering solutions
  • Design and oversee the overall data architecture and infrastructure, ensuring scalability, performance, security, maintainability, and adherence to industry best practices
  • Define data models and data schemas to meet business needs, considering factors such as data volume, velocity, variety, and veracity
  • Select and integrate appropriate data technologies and tools, such as databases, data lakes, data warehouses, and big data frameworks, to support data processing and analysis
  • Create scalable and efficient data processing frameworks, including ETL (Extract, Transform, Load) processes, data pipelines, and data integration solutions
  • Ensure that data engineering solutions align with the organization's long-term data strategy and goals
  • Evaluate and recommend data governance strategies and practices, including data privacy, security, and compliance measures
  • Collaborate with data scientists, analysts, and other stakeholders to define data requirements and enable effective data analysis and reporting
  • Provide technical guidance and expertise to data engineering teams, promoting best practices and ensuring high-quality deliverables
  • Support the team throughout the implementation process, answering questions and addressing issues as they arise
What we offer:
  • Stable employment
  • “Office as an option” model
  • Flexibility regarding working hours and your preferred form of contract
  • Comprehensive online onboarding program with a “Buddy” from day 1
  • Cooperation with top-tier engineers and experts
  • Unlimited access to the Udemy learning platform from day 1
  • Certificate training programs
  • Upskilling support
  • Internal Gallup Certified Strengths Coach to support your growth
  • Grow as we grow as a company

Azure Data Engineer

As an Azure Data Engineer, you will design and maintain scalable data pipelines ...
Location: Not provided
Salary: Not provided
ACI Infotech
Expiration Date: Until further notice

Requirements:
  • 3–5 years of experience as a Data Engineer with Azure ecosystem
  • Strong skills in SQL, Databricks, and Python
  • Hands-on experience with Azure Data Factory (ADF)
  • Power BI experience preferred
  • Familiarity with Delta Lake and/or Azure Synapse is a plus
Job Responsibility:
  • Develop, manage, and optimize ADF pipelines
  • Design and implement Databricks notebooks for ETL processes
  • Write and optimize SQL scripts for large-scale datasets
  • Collaborate with BI teams to support dashboard and reporting solutions
  • Ensure data quality, security, and compliance with governance policies
Employment Type: Full-time

Sr. Data Engineer

We are looking for a Sr. Data Engineer to join our growing Quality Engineering t...
Location: Not provided
Salary: Not provided
Data Ideology
Expiration Date: Until further notice

Requirements:
  • Bachelor’s degree in Computer Science, Information Systems, or a related field (or equivalent experience)
  • 5+ years of experience in data engineering, data warehousing, or data architecture
  • Expert-level experience with Snowflake, including data modeling, performance tuning, security, and migration from legacy platforms
  • Hands-on experience with Azure Data Factory (ADF) for building, orchestrating, and optimizing data pipelines
  • Strong experience with Informatica (PowerCenter and/or IICS) for ETL/ELT development, workflow management, and performance optimization
  • Deep knowledge of data modeling techniques (dimensional, tabular, and modern cloud-native patterns)
  • Proven ability to translate business requirements into scalable, high-performance data solutions
  • Experience designing and supporting end-to-end data pipelines across cloud and hybrid architectures
  • Strong proficiency in SQL and experience optimizing large-scale analytic workloads
  • Experience working within SDLC frameworks, CI/CD practices, and version control
Job Responsibility:
  • Ability to collect and understand business requirements and translate those requirements into data models, integration strategies, and implementation plans
  • Lead modernization and migration initiatives to move clients from legacy systems into Snowflake, ensuring functionality, performance and data integrity
  • Ability to work within the SDLC framework in multiple environments and understand the complexities and dependencies of the data warehouse
  • Optimize and troubleshoot ETL/ELT workflows, applying best practices for scheduling, orchestration, and performance tuning
  • Maintain documentation, architecture diagrams, and migration plans to support knowledge transfer and project tracking
What we offer:
  • PTO Policy
  • Eligibility for Health Benefits
  • Retirement Plan
  • Work from Home
Employment Type: Full-time

Lead Data Engineer

Lead Data Engineer to serve as both a technical leader and people coach for our ...
Location: India, Gurugram
Salary: Not provided
Circle K
Expiration Date: Until further notice

Requirements:
  • Bachelor’s or master’s degree in computer science, Engineering, or related field
  • 8-10 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark
  • Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse
  • Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices
  • Solid grasp of data governance, metadata tagging, and role-based access control
  • Proven ability to mentor and grow engineers in a matrixed or global environment
  • Strong verbal and written communication skills, with the ability to operate cross-functionally
  • Strong Knowledge of Data Engineering concepts (Data pipelines creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management)
  • Working Knowledge of Dev-Ops processes (CI/CD), Git/Jenkins version control tool, Master Data Management (MDM) and Data Quality tools
  • Strong Experience in ETL/ELT development, QA and operation/support process (RCA of production issues, Code/Data Fix Strategy, Monitoring and maintenance)
Job Responsibility:
  • Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms
  • Lead the technical execution of non-domain specific initiatives (e.g. reusable dimensions, TLOG standardization, enablement pipelines)
  • Architect data models and re-usable layers consumed by multiple downstream pods
  • Guide platform-wide patterns like parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks
  • Mentor and coach the team
  • Partner with product and platform leaders to ensure engineering consistency and delivery excellence
  • Act as an L3 escalation point for operational data issues impacting foundational pipelines
  • Own engineering best practices, sprint planning, and quality across the Enablement pod
  • Contribute to platform discussions and architectural decisions across regions
Employment Type: Full-time

Data Analytics Engineer

SDG Group is expanding its global Data & Analytics practice and is seeking a mot...
Location: Egypt, Cairo
Salary: Not provided
SDG
Expiration Date: Until further notice

Requirements:
  • Bachelor’s degree in computer science, Engineering, Information Systems, or related field
  • Hands-on experience in DataOps / Data Engineering
  • Strong knowledge in Databricks OR Snowflake (one of them is mandatory)
  • Proficiency in Python and SQL
  • Experience with Azure data ecosystem (ADF, ADLS, Synapse, etc.)
  • Understanding of CI/CD practices and DevOps for data.
  • Knowledge of data modeling, orchestration frameworks, and monitoring tools
  • Strong analytical and troubleshooting skills
  • Eagerness to learn and grow in a global consulting environment
Job Responsibility:
  • Design, build, and maintain scalable and reliable data pipelines following DataOps best practices
  • Work with modern cloud data stacks using Databricks (Spark, Delta Lake) or Snowflake (Snowpipe, tasks, streams)
  • Develop and optimize ETL/ELT workflows using Python, SQL, and orchestration tools
  • Work with Azure data services (ADF, ADLS, Azure SQL, Azure Functions)
  • Implement CI/CD practices using Azure DevOps or Git-based workflows
  • Ensure data quality, consistency, and governance across all delivered data solutions
  • Monitor and troubleshoot pipelines for performance and operational excellence
  • Collaborate with international teams, architects, and analytics consultants
  • Contribute to technical documentation and solution design assets
What we offer:
  • Remote working model aligned with international project needs
  • Opportunity to work on European and global engagements
  • Mentorship and growth paths within SDG Group
  • A dynamic, innovative, and collaborative environment
  • Access to world-class training and learning platforms
Employment Type: Full-time

Data Engineer

At Allianz Technology, we power the digital transformation of the Allianz Group....
Location: Spain, Barcelona
Salary: Not provided
Allianz
Expiration Date: Until further notice

Requirements:
  • Proficient in Python for development, automation, and data processing
  • Strong experience using dbt for data transformation and management in cloud environments
  • Solid experience with Azure Data Factory (ADF) for orchestrating ETL workflows
  • Familiarity with Jenkins for CI/CD workflows
  • Expertise in SQL for querying databases and building data models
  • Ability to design and implement effective data models, create databases, and optimize performance
  • Knowledge of Agile methodology and familiarity with data governance and security best practices, with strong problem-solving, troubleshooting, and collaboration skills, as well as the ability to thrive in a dynamic environment
  • Experience with data warehousing and cloud-based platforms (Azure)
  • Familiarity with APIs for integrating third-party systems into data workflows
  • Experience in data modeling and data analytics platforms
Job Responsibility:
  • Design and implement scalable ETL pipelines using dbt and Azure Data Factory (ADF) to process and transform data
  • Integrate dbt with ADF runtime environments and leverage APIs for seamless execution
  • Write high-quality, well-documented Python code for data transformation, extraction, and automation processes
  • Utilize Visual Studio Code for efficient development and manage project versioning with GitHub
  • Collaborate closely with the team to design and maintain SQL-based data models and data warehouse solutions
  • Collaborate with cross-functional teams to understand data requirements and ensure data accessibility and usability
  • Implement Jenkins for continuous integration and delivery (CI/CD) to automate data pipeline workflows
  • Create and maintain automated workflows to enhance business intelligence, reporting, and data insights
  • Troubleshoot, resolve, and optimize data pipeline issues to support large-scale data processing and ensure consistent data quality
  • Proactively monitor data pipelines, ensuring data accuracy, consistency, and reliability
What we offer:
  • Hybrid work model which recognizes the value of striking a balance between in-person collaboration and remote working incl. up to 25 days per year working from abroad
  • Company bonus scheme, pension, employee shares program, and multiple employee discounts (details vary by location)
  • Career development and digital learning programs and international career mobility; we offer lifelong learning for our employees worldwide and an environment where innovation, delivery, and empowerment are fostered
  • Flexible working and health and wellbeing offers (including healthcare and parental leave benefits) that support balancing family and career and help our people return from career breaks with experience that nothing else can teach
Employment Type: Full-time

Cloud Big-data Engineer

An expert with 4-5 years of experience in Hadoop ecosystem and cloud- (AWS ecosy...
Location: United States, Starkville; Dover; Minneapolis
Salary: 45.00 USD / Hour
PhasorSoft Group
Expiration Date: Until further notice

Requirements:
  • 4-5 years of experience in Hadoop ecosystem and cloud (AWS ecosystem/Azure)
  • Experience working with in-memory computing using R, Python, Spark, PySpark, Kafka, and Scala
  • Experience in parsing and shredding XML and JSON, shell scripting, and SQL
  • Experience working with Hadoop ecosystem - HDFS, Hive
  • Experience working with AWS ecosystem - S3, EMR, EC2, Lambda, CloudFormation, CloudWatch, SNS/SQS
  • Experience with Azure – Azure Data Factory (ADF)
  • Experience working with SQL and NoSQL databases
  • Experience designing and developing data sourcing routines utilizing typical data quality functions involving standardization, transformation, rationalization, linking, and matching
  • Work Authorization: H1, GC, US Citizen