
Data Engineer - PySpark India Jobs (Hybrid Work)

71 Job Offers

Sr. Data Engineer
Join our team as a Senior Data Engineer in Hyderabad. Design and build scalable data pipelines using Python, SQL, and AWS. Collaborate with cross-functional teams to enable data-driven decisions and optimize our Snowflake-based infrastructure.
Location: India, Hyderabad
Salary: Not provided
Company: Highspot
Expiration Date: Until further notice

Tech Lead Manager - Data Engineering
Lead and grow a data engineering team in Bangalore, India, while remaining hands-on with platform architecture. You will build scalable data solutions using tools like Beam and Spark, and partner with global teams across engineering and GTM. This role blends technical leadership (~75%) with team ...
Location: India, Bangalore
Salary: Not provided
Company: Glean
Expiration Date: Until further notice

Data Center Operations IMAC Engineer
Join our team in Mumbai as a Data Center Operations IMAC Engineer. You will ensure the stability and resilience of critical banking infrastructure through proactive monitoring, preventative maintenance, and complex issue resolution. The role requires strong expertise in hardware/software support,...
Location: India, Mumbai
Salary: Not provided
Company: Barclays
Expiration Date: Until further notice

Data Center Operations IMAC Engineer
Join our team in Pune as a Data Center Operations IMAC Engineer. You will ensure the stability and resilience of critical banking infrastructure through proactive monitoring, preventative maintenance, and complex issue resolution. The role requires strong troubleshooting skills across hardware, s...
Location: India, Pune
Salary: Not provided
Company: Barclays
Expiration Date: Until further notice

Data Engineer - SQL, Snowflake
Join Barclays in Bengaluru as a Data Engineer specializing in SQL and Snowflake. You will build and maintain robust data pipelines, warehouses, and lakes, ensuring secure and scalable data architecture. The role requires hands-on SQL/Snowflake expertise and Unix/Linux scripting knowledge. We offe...
Location: India, Bengaluru
Salary: Not provided
Company: Barclays
Expiration Date: Until further notice

DevOps Engineer - Data Platforms
Join our Data Platforms team as a DevOps Engineer in Hyderabad. Design and manage scalable AWS infrastructure using Terraform and IaC. Automate data pipelines with services like S3, Glue, and EMR, leveraging Python and SQL. Enjoy flexible work, career mentoring, and benefits like an Employee Shar...
Location: India, Hyderabad
Salary: Not provided
Company: Alter Domus
Expiration Date: Until further notice

Data Scientist (AI Engineer)
Join Barclays in Noida as a Data Scientist (AI Engineer). Develop and deploy cutting-edge Generative AI, Agentic AI, and NLP models using AWS cloud technologies. Build scalable AI solutions and data pipelines with Python and SQL. Enjoy a hybrid model, modern workspaces, and comprehensive wellness...
Location: India, Noida
Salary: Not provided
Company: Barclays
Expiration Date: Until further notice

Data Engineer
Join our global team in Pune as a Data Engineer. Design and build scalable data pipelines using SQL, Python, and cloud platforms (GCP/AWS/Azure). Transform raw data for analytics and machine learning, ensuring quality and efficiency. Enjoy a collaborative environment with career growth in cutting...
Location: India, Pune
Salary: Not provided
Company: Vodafone
Expiration Date: Until further notice

GCP Data Engineer
Seeking a certified Google Cloud Data Engineer in Pune. Design and manage Big Data pipelines using GCP tools like BigQuery, Dataflow, and Dataproc. Build real-time systems with Kafka and support MLOps. Requires 3-5 years' experience with DevOps, containers, and telecom knowledge.
Location: India, Pune
Salary: Not provided
Company: Vodafone
Expiration Date: Until further notice

Data Engineer
Seeking a Data Engineer in Pune to build and maintain scalable data pipelines on Google Cloud Platform. Utilize Python, BigQuery, and Cloud Composer to develop ETL/ELT processes and ensure data integrity. Collaborate with cross-functional teams to deliver high-quality data products that drive bus...
Location: India, Pune
Salary: Not provided
Company: Vodafone
Expiration Date: Until further notice

Big Data Engineer
Join our Data & Analytics team as a Big Data Engineer on a 6-month contract. Design and maintain scalable data pipelines using Apache Spark, Hadoop, and Kafka on cloud platforms (AWS/GCP/Azure). This role, based in Pune, Bangalore, or Ahmedabad, offers exposure to large-scale projects in a global...
Location: India, Pune
Salary: Not provided
Company: Vodafone
Expiration Date: Until further notice

Are you a data architect with a passion for building robust, scalable systems? Your search for Data Engineer - PySpark jobs ends here. A Data Engineer specializing in PySpark plays a pivotal role in the modern data ecosystem, constructing the foundational data infrastructure that powers analytics, machine learning, and business intelligence. These professionals are the master builders of the data world, transforming raw, unstructured data into clean, reliable, and accessible information for data scientists, analysts, and business stakeholders. If you are seeking jobs where you can work with cutting-edge big data technologies to solve complex data challenges at scale, this is your domain.

In this profession, typical responsibilities revolve around the entire data pipeline lifecycle. Data Engineers design, develop, test, and maintain large-scale data processing systems. A core part of their daily work involves writing efficient, scalable code with PySpark, the Python API for Apache Spark, to perform complex ETL (Extract, Transform, Load) or ELT processes. They build and orchestrate data pipelines that ingest data from diverse sources such as databases, APIs, and log files into data warehouses like Snowflake or data lakes on cloud platforms like AWS, Azure, and GCP. Ensuring data quality and reliability is paramount: they implement robust data validation, monitoring, and observability frameworks to guarantee that data is accurate, timely, and trusted. They are also tasked with optimizing the performance and cost of these systems, fine-tuning Spark jobs for maximum efficiency and automating deployment through CI/CD and Infrastructure as Code (IaC) practices.

To excel in Data Engineer - PySpark jobs, a specific and powerful skill set is required. Mastery of Python and PySpark is non-negotiable, as it is the primary toolset for distributed data processing. Deep knowledge of SQL is essential for data manipulation and querying, and experience with workflow orchestration tools like Apache Airflow is a common requirement for managing complex pipeline dependencies. A solid understanding of cloud data solutions (AWS, GCP, Azure) and platforms like Databricks is highly valued. Beyond technical prowess, successful candidates possess strong problem-solving abilities to debug and optimize data flows, a keen eye for system design and architecture, and excellent collaboration skills for working with cross-functional teams, including data scientists and business analysts. They are often expected to mentor junior engineers and help establish data engineering best practices and standards across the organization.

If you are ready to build the future of data, explore the vast array of Data Engineer - PySpark jobs available and take the next step in your impactful career.
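
To make the day-to-day work described above concrete, here is a minimal sketch of what a PySpark ETL job of this kind might look like: it reads raw JSON events from a landing zone, cleans and deduplicates them, applies a simple data-quality gate, and writes partitioned Parquet to a curated zone. The bucket paths, column names, and the 5% rejection threshold are illustrative placeholders, not details from any of the listings above.

from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("daily-events-etl")
    .getOrCreate()
)

# Extract: read raw JSON events from a landing zone (S3 here; GCS or ADLS work the same way).
raw = spark.read.json("s3a://example-landing-zone/events/2024-01-01/")

# Transform: drop malformed rows, normalise types, derive a partition column, deduplicate.
clean = (
    raw
    .filter(F.col("event_id").isNotNull())
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .withColumn("event_date", F.to_date("event_ts"))
    .dropDuplicates(["event_id"])
)

# Simple data-quality gate: fail the run if more than 5% of rows were rejected.
raw_count, clean_count = raw.count(), clean.count()
if raw_count > 0 and clean_count / raw_count < 0.95:
    raise ValueError(f"Too many rows rejected: kept {clean_count} of {raw_count}")

# Load: write partitioned Parquet to the curated zone of the data lake.
(
    clean.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-curated-zone/events/")
)

spark.stop()

In production, a job like this would typically be parameterised by run date and triggered from an orchestrator such as Apache Airflow, with the data-quality checks feeding monitoring and alerting rather than only failing the run.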
