Data Engineer with Azure Databricks

Cigres

Location:
India, Bengaluru

Contract Type:
Not provided

Salary:
Not provided
Job Responsibility:

  • Develop and implement scalable data processing solutions using Azure Databricks, including ETL processes that efficiently handle large data volumes (a minimal sketch follows this list)
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs and objectives
  • Optimize data storage and retrieval processes to enhance performance and reduce latency
  • Ensure data quality and integrity by implementing best practices for data governance and validation
  • Stay updated with the latest trends and technologies in data engineering, and evaluate their applicability to the organization's needs
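
As a rough illustration of the first responsibility above, here is a minimal PySpark ETL sketch of the kind that might run as an Azure Databricks job. It is a sketch under assumptions: the storage account, container paths, and column names are hypothetical placeholders, not details taken from this posting.

```python
# Minimal PySpark ETL sketch for an Azure Databricks job (illustrative only).
# The abfss:// paths, storage account, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV files landed in an ADLS Gen2 container.
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://raw@examplestorage.dfs.core.windows.net/orders/")
)

# Transform: deduplicate, filter bad rows, and derive a date partition column.
orders = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_amount") > 0)
       .withColumn("order_date", F.to_date("order_timestamp"))
)

# Load: write partitioned Delta output for downstream consumers.
(
    orders.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("abfss://curated@examplestorage.dfs.core.windows.net/orders/")
)
```

In practice such a script would typically be scheduled as a Databricks job or notebook task, with storage credentials supplied through the workspace configuration rather than hard-coded.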

Requirements:

  • Data Warehousing
  • Azure Data Engineer
  • Azure Data Lake
  • Databricks
  • PySpark
  • Python
  • 4 to 6 years of experience
  • B.Tech education
  • Proficiency in Databricks and experience with Spark for big data processing
  • Solid programming skills in Python or Scala
  • Knowledge of SQL and experience with relational databases and data warehousing concepts
  • Familiarity with cloud platforms, particularly Azure
  • Understanding of data modeling, ETL processes, and data integration methodologies
  • Excellent problem-solving skills and attention to detail
  • Strong communication and collaboration abilities

Additional Information:

Job Posted:
January 07, 2026

Employment Type:
Fulltime

Similar Jobs for Data Engineer with Azure Databricks

Azure Data Engineer

Join AlgebraIT, a premier IT consulting firm in Austin, Texas! We are looking fo...
Location:
United States, Austin

Salary:
Not provided

AlgebraIT

Expiration Date:
Until further notice

Requirements:
  • Minimum of 3 years of relevant experience
  • Bachelor’s degree in Computer Science or a related field
  • 3+ years of experience in data engineering
  • Proficiency in Azure services (e.g., Data Lake, Databricks, Synapse)
  • Strong knowledge of Python, SQL, and data modeling
  • Excellent problem-solving skills and attention to detail
Job Responsibility:
  • Design, develop, and maintain data pipelines using Azure Data Factory and other Azure services
  • Collaborate with data scientists and analysts to support data initiatives
  • Ensure the reliability, availability, and performance of the data infrastructure
  • Optimize data flow and pipeline architecture
  • Implement best practices for data security and governance
  • Monitor data pipelines and troubleshoot issues as needed
  • Develop and maintain data integration solutions
  • Document data processes and workflows
  • Stay up-to-date with Azure technology advancements
  • Provide support for data-related technical issues

Employment Type:
Fulltime

Azure Data Engineer

As an Azure Data Engineer, you will be expected to design, implement, and manage...
Location:
India, Hyderabad / Bangalore

Salary:
Not provided

Quadrant Technologies

Expiration Date:
Until further notice

Requirements:
  • Bachelor’s degree in Computer Science, Information Technology, or a related field
  • Hands-on experience in writing complex T-SQL queries and stored procedures
  • Good experience in data integration and database development
  • Proficiency in T-SQL and Spark SQL/PySpark (Synapse/Databricks)
  • Extensive experience with Azure Data Factory
  • Excellent problem-solving skills and attention to detail
  • 5–8 years of experience
  • Proven track record of writing complex SQL stored procedures and implementing OLTP database solutions (using Microsoft SQL Server)
  • Experience with Azure Synapse / PySpark / Azure Databricks for big data processing
  • Expertise in T-SQL, Dynamic SQL, Spark SQL, and ability to write complex stored procedures
Job Responsibility:
  • Collaborate with cross-functional teams to gather, analyze, and document business requirements for data integration projects
  • Write complex stored procedures to support data transformation and to implement business validation logic
  • Develop and maintain robust data pipelines using Azure Data Factory, ensuring seamless data flow between systems
  • Work closely with the team to ensure data quality, integrity, and accuracy across all systems
  • Contribute to the enhancement and optimization of OLTP systems

Employment Type:
Fulltime

Azure Data Engineer

Experience: 3-6+ Years Location: Noida/Gurugram/Remote Skills: PYTHON, PYSPARK...
Location:
India, Noida; Gurugram

Salary:
Not provided

NexGen Tech Solutions

Expiration Date:
Until further notice

Requirements:
  • 3-6+ years of experience
  • Python
  • PySpark
  • SQL
  • Azure Data Factory
  • Databricks
  • Data Lake
  • Azure Functions
  • Data pipelines
Job Responsibility:
  • Design and engineer cloud/big data solutions and develop a modern data analytics lake
  • Develop and maintain data pipelines for batch and stream processing using modern cloud or open-source ETL/ELT tools (see the streaming sketch below)
  • Liaise with business team and technical leads, gather requirements, identify data sources, identify data quality issues, design target data structures, develop pipelines and data processing routines, perform unit testing and support UAT
  • Implement continuous integration, continuous deployment, DevOps practice
  • Create, document, and manage data guidelines, governance, and lineage metrics
  • Technically lead, design and develop distributed, high-throughput, low-latency, highly available data processing and data systems
  • Build monitoring tools for server-side components
  • Work cohesively in an India-wide distributed team
  • Identify, design, and implement internal process improvements and tools to automate data processing and ensure data integrity while meeting data security standards
  • Build tools for better discovery and consumption of data for various consumption models in the organization – DataMarts, Warehouses, APIs, Ad Hoc Data explorations

Employment Type:
Fulltime
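
To make the batch-and-stream pipeline bullet above concrete, here is a minimal Spark Structured Streaming sketch. It is illustrative only: the built-in rate source stands in for a real feed such as Event Hubs or Kafka, and the output and checkpoint paths are hypothetical placeholders.

```python
# Minimal Spark Structured Streaming sketch (illustrative only).
# The rate source stands in for a real event feed; paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream_demo").getOrCreate()

# Source: synthetic events, one row per second.
events = spark.readStream.format("rate").option("rowsPerSecond", 1).load()

# Transform: count events per 1-minute window, with a watermark for late data.
counts = (
    events
    .withWatermark("timestamp", "2 minutes")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

# Sink: append finalized windows to a Delta table.
query = (
    counts.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/tmp/checkpoints/stream_demo")
    .start("/tmp/delta/stream_demo_counts")
)
query.awaitTermination()
```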

Senior Azure Data Engineer

Seeking a Lead AI DevOps Engineer to oversee design and delivery of advanced AI/...
Location:
Poland

Salary:
Not provided

Lingaro

Expiration Date:
Until further notice

Requirements:
  • At least 6 years of professional experience in the Data & Analytics area
  • 1+ years of experience in (or acting in) a Senior Consultant or above role, with a strong focus on data solutions built in Azure and Databricks/Synapse (MS Fabric is nice to have)
  • Proven experience with Azure cloud-based infrastructure, Databricks, and at least one SQL implementation (e.g., Oracle, T-SQL, MySQL)
  • Proficiency in programming languages such as SQL, Python, PySpark is essential (R or Scala nice to have)
  • Very good communication skills, including the ability to convey information clearly and specifically to co-workers and business stakeholders
  • Working experience with agile methodologies and supporting tools (JIRA, Azure DevOps)
  • Experience in leading and managing a team of data engineers, providing guidance, mentorship, and technical support
  • Knowledge of data management principles and best practices, including data governance, data quality, and data integration
  • Good project management skills, with the ability to prioritize tasks, manage timelines, and deliver high-quality results within designated deadlines
  • Excellent problem-solving and analytical skills, with the ability to identify and resolve complex data engineering issues
Job Responsibility:
  • Act as a senior member of the Data Science & AI Competency Center, AI Engineering team, guiding delivery and coordinating workstreams
  • Develop and execute a cloud data strategy aligned with organizational goals
  • Lead data integration efforts, including ETL processes, to ensure seamless data flow
  • Implement security measures and compliance standards in cloud environments
  • Continuously monitor and optimize data solutions for cost-efficiency
  • Establish and enforce data governance and quality standards
  • Leverage Azure services, as well as tools like dbt and Databricks, for efficient data pipelines and analytics solutions
  • Work with cross-functional teams to understand requirements and provide data solutions
  • Maintain comprehensive documentation for data architecture and solutions
  • Mentor junior team members in cloud data architecture best practices
What we offer:
  • Stable employment
  • “Office as an option” model
  • Workation
  • Great Place to Work® certified employer
  • Flexibility regarding working hours and your preferred form of contract
  • Comprehensive online onboarding program with a “Buddy” from day 1
  • Cooperation with top-tier engineers and experts
  • Unlimited access to the Udemy learning platform from day 1
  • Certificate training programs
  • Upskilling support

Azure Data Engineer

At LeverX, we have had the privilege of delivering over 1,500 projects for vario...
Location:
Uzbekistan, Georgia

Salary:
Not provided

LeverX

Expiration Date:
Until further notice

Requirements:
  • 5+ years of experience as a Data Engineer with strong expertise in Azure services (e.g., Azure Data Factory, Azure SQL Database, Azure Synapse, Microsoft Fabric, and Azure Cosmos DB)
  • Advanced SQL skills, including complex query development, optimization, and troubleshooting
  • Strong knowledge of indexing, partitioning, and query execution plans to ensure scalability and performance
  • Proven expertise in database modeling, schema design, and normalization/denormalization strategies
  • Ability to design and optimize data architectures to support both transactional and analytical workloads
  • Proficiency in at least one programming language such as Python, C#, or Scala
  • Strong background in cloud-based data storage and processing (e.g., Azure Data Lake, Databricks, or equivalent) and data warehouse platforms (e.g., Snowflake)
  • English B2+
Job Responsibility:
  • Design, develop, and maintain efficient and scalable data architectures and workflows
  • Build and optimize SQL-based solutions for data transformation, extraction, and loading (ETL) processes
  • Collaborate closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver effective solutions
  • Manage and optimize data storage platforms, including databases, data lakes, and data warehouses
  • Troubleshoot and resolve data-related issues, ensuring accuracy, integrity, and performance across all systems
What we offer:
  • Projects in different domains: healthcare, manufacturing, e-commerce, fintech, etc.
  • Projects for every taste: Startup products, enterprise solutions, research & development initiatives, and projects at the crossroads of SAP and the latest web technologies
  • Global clients based in Europe and the US, including Fortune 500 companies
  • Employment security: We hire for our team, not just a specific project. If your project ends, we will find you a new one
  • Healthy work atmosphere: On average, our employees stay with the company for 4+ years
  • Market-based compensation and regular performance reviews
  • Internal expert communities and courses
  • Perks to support your growth and well-being

Azure Data Engineer

As an Azure Data Engineer, you will design and maintain scalable data pipelines ...
Location:
Not provided

Salary:
Not provided

ACI Infotech

Expiration Date:
Until further notice

Requirements:
  • 3–5 years of experience as a Data Engineer in the Azure ecosystem
  • Strong skills in SQL, Databricks, and Python
  • Hands-on experience with Azure Data Factory (ADF)
  • Power BI experience preferred
  • Familiarity with Delta Lake and/or Azure Synapse is a plus
Job Responsibility:
  • Develop, manage, and optimize ADF pipelines
  • Design and implement Databricks notebooks for ETL processes
  • Write and optimize SQL scripts for large-scale datasets
  • Collaborate with BI teams to support dashboard and reporting solutions
  • Ensure data quality, security, and compliance with governance policies

Employment Type:
Fulltime

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location:
Romania, Bucharest

Salary:
Not provided

Inetum

Expiration Date:
Until further notice

Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking.
Job Responsibility:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Implement and optimize the Medallion architecture (Bronze, Silver, Gold) using Delta Lake to ensure data quality, consistency, and historical tracking (a minimal sketch follows this list)
  • Implement the Lakehouse architecture on Databricks efficiently, combining best practices from DWH and Data Lake
  • Optimize Databricks clusters, Spark operations, and Delta tables to reduce latency and computational costs
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables
  • Implement and manage Unity Catalog for centralized data governance, data security and data lineage
  • Define and implement data quality standards and rules to maintain data integrity
  • Develop and manage complex workflows using Databricks Workflows or external tools to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes
  • Work closely with Data Scientists, Analysts, and Architects to understand business requirements and deliver optimal technical solutions
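
As a rough sketch of the Medallion (Bronze, Silver, Gold) flow named in the list above, the PySpark below shows one common Delta Lake pattern. The paths, schema, and cleansing rules are hypothetical placeholders, not details from this posting.

```python
# Illustrative Medallion (Bronze/Silver/Gold) flow with Delta Lake.
# Paths, columns, and business rules are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_demo").getOrCreate()

# Bronze: land raw JSON as-is, adding only ingestion metadata.
bronze = (
    spark.read.json("/mnt/raw/payments/")
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").save("/mnt/bronze/payments")

# Silver: deduplicate, conform types, and drop obviously bad records.
silver = (
    spark.read.format("delta").load("/mnt/bronze/payments")
    .dropDuplicates(["payment_id"])
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("amount").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/payments")

# Gold: business-level aggregate for reporting.
gold = (
    silver.groupBy("customer_id")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("payment_count"))
)
gold.write.format("delta").mode("overwrite").save("/mnt/gold/payments_by_customer")
```
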
What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings.

Employment Type:
Fulltime

Data Engineer (Azure)

Fyld is a Portuguese consulting company specializing in IT services. We bring hi...
Location:
Portugal, Lisboa

Salary:
Not provided

Fyld

Expiration Date:
Until further notice

Requirements:
  • Bachelor's degree in Computer Science, Software Engineering, Data Engineering, or a related field
  • Relevant certifications in Azure, such as Microsoft Certified: Azure Data Engineer Associate or Microsoft Certified: Azure Solutions Architect Expert
  • Hands-on experience with Azure services, especially those related to data engineering and analytics, such as Azure SQL Database, Azure Data Lake, Azure Synapse Analytics, Azure Databricks, Azure Data Factory, among others
  • Familiarity with Azure storage and compute services, including Azure Blob Storage, Azure SQL Data Warehouse, Azure HDInsight, and Azure Functions
  • Proficiency in programming languages such as Python, SQL, or C# for developing data pipelines, data processing, and automation
  • Knowledge of data manipulation and transformation techniques using tools like Azure Databricks or Apache Spark
  • Experience in data modeling, data cleansing, and data transformation for analytics and reporting purposes
  • Understanding of data architecture principles and best practices, including data lake architectures, data warehousing, and ETL/ELT processes
  • Knowledge of security and compliance features offered by Azure, including data encryption, role-based access control (RBAC), and Azure Security Center
  • Excellent communication skills, both verbal and written, to collaborate effectively with technical and non-technical teams

Employment Type:
Fulltime