CrawlJobs

Data Engineer - PySpark Jobs in Gurugram, India

9 Job Offers

Lead Data Engineer
Lead Data Engineer role for our Technology team in Hyderabad, Bangalore, or Gurugram. Architect and optimize scalable data pipelines using Spark, Kafka, and Python. Drive engineering best practices while leading a team to build robust data lakes and warehouses. Fintech or cloud experience is a plus.
Location: India, Hyderabad, Bangalore, Gurugram
Salary: Not provided
Company: Arcesium
Expiration Date: Until further notice
Lead Data Engineer
Lead Data Engineer role in Gurugram, focusing on designing and migrating scalable data pipelines to Databricks using Python and PySpark. You will implement Medallion Architecture with Delta Lake, optimize Spark workloads, and manage ETL/ELT processes. The position requires strong data modeling, S...
Location: India, Gurugram
Salary: Not provided
Company: SpectraMedix
Expiration Date: Until further notice
Senior Data Engineer with Python
Join Intellectsoft as a Senior Data Engineer in Bengaluru or Gurugram. Design scalable PySpark data pipelines, optimize ETL workflows, and collaborate with data scientists. Leverage your Python, Spark, and cloud expertise on impactful AI projects. Enjoy flexible hours, Udemy courses, and a clear ...
Location: India, Bengaluru, Gurugram
Salary: Not provided
Company: Intellectsoft
Expiration Date: Until further notice
Lead Data Engineer
Lead Data Engineer role at Circle K, a global Fortune 200 company. Design and maintain scalable data pipelines using ADF, SQL, Python, Databricks, and Snowflake. Mentor a team in Gurugram while driving data governance and reusable assets. Enable intelligent decision-making across a vast retail eco...
Location: India, Gurugram
Salary: Not provided
Company: Circle K
Expiration Date: Until further notice
Senior Data Engineer
Join Circle K's digital transformation as a Senior Data Engineer in Gurugram. Architect and implement cloud data pipelines using Azure, Snowflake, and Python/SQL. Drive actionable business outcomes by building and optimizing ETL processes for our data lake and warehouse. A collaborative role for ...
Location: India, Gurugram
Salary: Not provided
Company: Circle K
Expiration Date: Until further notice
Data Engineer
Join Circle K's digital transformation as a Data Engineer in Gurugram. You will design and optimize ETL pipelines using Azure, Snowflake, and Python/SQL to drive actionable business outcomes. This role requires strong experience in cloud data platforms and data warehousing within a collaborative ...
Location: India, Gurugram
Salary: Not provided
Company: Circle K
Expiration Date: Until further notice
Senior Data Engineer
Join Circle K's cloud-first strategy as a Senior Data Engineer in Gurugram. You will design and optimize ETL pipelines using Azure, Snowflake, and Python/SQL to unlock enterprise data. This role requires 5+ years of advanced data engineering experience in cloud platforms. Partner with stakeholder...
Location: India, Gurugram
Salary: Not provided
Company: Circle K
Expiration Date: Until further notice
Lead Data Engineer
Lead Data Engineer role in Gurugram, India. Lead a team while designing scalable data pipelines using ADF, Databricks, SQL, Python, and Snowflake. Drive data governance and mentor engineers in a global environment. Requires 8-10 years' experience and expertise in CI/CD and data architecture.
Location: India, Gurugram
Salary: Not provided
Company: Circle K
Expiration Date: Until further notice
Azure Data Engineer
Seeking an experienced Azure Data Engineer with 3-6+ years' expertise in Python, PySpark, and SQL. You will design modern data lakes and build robust pipelines using Azure Data Factory, Databricks, and Data Lake. This role offers a flexible location in Noida, Gurugram, or remote, focusing on clou...
Location: India, Noida, Gurugram
Salary: Not provided
Company: NexGen Tech Solutions
Expiration Date: Until further notice
Are you a data architect with a passion for building robust, scalable systems? Your search for Data Engineer - PySpark jobs ends here. A Data Engineer specializing in PySpark is a pivotal role in the modern data ecosystem, responsible for constructing the foundational data infrastructure that powers analytics, machine learning, and business intelligence. These professionals are the master builders of the data world, transforming raw, unstructured data into clean, reliable, and accessible information for data scientists, analysts, and business stakeholders. If you are seeking jobs where you can work with cutting-edge big data technologies to solve complex data challenges at scale, this is your domain.

In this profession, typical responsibilities revolve around the entire data pipeline lifecycle. Data Engineers design, develop, test, and maintain large-scale data processing systems. A core part of their daily work involves writing efficient, scalable code using PySpark, the Python API for Apache Spark, to perform complex ETL (Extract, Transform, Load) or ELT processes. They build and orchestrate data pipelines that ingest data from diverse sources, such as databases, APIs, and log files, into data warehouses like Snowflake or data lakes on cloud platforms like AWS, Azure, and GCP. Ensuring data quality and reliability is paramount; they implement robust data validation, monitoring, and observability frameworks to guarantee that data is accurate, timely, and trusted. Furthermore, they are tasked with optimizing the performance and cost of these data systems, fine-tuning Spark jobs for maximum efficiency, and automating deployment processes through CI/CD and Infrastructure as Code (IaC) practices.

To excel in Data Engineer - PySpark jobs, a specific and powerful skill set is required. Mastery of Python and PySpark is non-negotiable, as it is the primary tool for distributed data processing. Profound knowledge of SQL is essential for data manipulation and querying.
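The extract-transform-validate-load cycle described above can be sketched in miniature. The snippet below is a dependency-free illustration using plain Python structures; the comments note the roughly equivalent PySpark DataFrame calls you would use on a real cluster. All record fields, thresholds, and function names here are hypothetical, chosen only to make the pattern concrete.

```python
# Minimal ETL sketch: extract -> validate -> transform -> load.
# In PySpark the same steps would use the DataFrame API, e.g.:
#   df = spark.read.json("s3://bucket/raw/events/")        # extract
#   df = df.filter(df.amount.isNotNull())                  # validate
#   df = df.withColumn("amount_usd", df.amount * rate)     # transform
#   df.write.mode("append").parquet(".../curated/")        # load

def extract(raw_records):
    """Stand-in for reading raw events from a source system."""
    return list(raw_records)

def validate(records):
    """Drop rows failing a basic quality rule (mirrors df.filter)."""
    return [r for r in records if r.get("amount") is not None and r["amount"] >= 0]

def transform(records, fx_rate):
    """Derive a new column from an existing one (mirrors df.withColumn)."""
    return [{**r, "amount_usd": r["amount"] * fx_rate} for r in records]

def load(records, sink):
    """Stand-in for appending curated rows to a warehouse or lake table."""
    sink.extend(records)
    return len(records)

raw = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},   # fails the quality rule, gets dropped
    {"id": 3, "amount": 4.5},
]
curated = []
rows_written = load(transform(validate(extract(raw)), fx_rate=83.0), curated)
print(rows_written)              # 2
print(curated[0]["amount_usd"])  # 830.0
```

On a real cluster each step is lazy and distributed across executors, which is precisely why tuning partitioning and shuffle behavior matters; the pipeline shape, however, stays the same.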
Experience with workflow orchestration tools like Apache Airflow is a common requirement to manage complex pipeline dependencies. A deep understanding of cloud data solutions (AWS, GCP, Azure) and platforms like Databricks is highly valued. Beyond technical prowess, successful candidates possess strong problem-solving abilities to debug and optimize data flows, a keen eye for system design and architecture, and excellent collaboration skills to work with cross-functional teams, including data scientists and business analysts. They are often expected to mentor junior engineers and contribute to establishing data engineering best practices and standards across an organization. If you are ready to build the future of data, explore the vast array of Data Engineer - PySpark jobs available and take the next step in your impactful career.
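At their core, orchestration tools like Apache Airflow model a pipeline as a directed acyclic graph of tasks and execute it in dependency order. As a rough, stdlib-only illustration of that idea (not Airflow's actual API), the task names below are hypothetical and the DAG is resolved with a topological sort:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on.
# An orchestrator expresses the same structure as a DAG and additionally
# handles scheduling, retries, backfills, and monitoring.
dag = {
    "extract_orders": set(),
    "extract_users":  set(),
    "transform_join": {"extract_orders", "extract_users"},
    "validate":       {"transform_join"},
    "load_warehouse": {"validate"},
}

# static_order() yields tasks so that every task appears after its dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

The two extract tasks have no mutual dependency, so a real scheduler could run them in parallel; the join, validation, and load steps are strictly ordered behind them.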
