The Senior Data Engineer role requires a strong background in Python and PySpark, with at least 6 years of experience in data engineering. The candidate will design scalable data processing pipelines and work with Google Cloud Platform services. Proficiency in SQL and experience with AI/ML workflows are essential. This position is based in Bangalore, requiring in-office presence three days a week.
Job Responsibilities:
Design scalable data processing pipelines
Work with Google Cloud Platform services
Requirements:
6+ years of experience as a Data Engineer
Strong Python programming skills, including data manipulation with libraries such as Pandas, NumPy, and scikit-learn
Strong experience applying data engineering techniques and backend development in production environments
Demonstrated expertise in building scalable data processing pipelines using PySpark and distributed computing frameworks
Hands-on experience with Google Cloud Platform (GCP) services such as Cloud Composer, Cloud Run, Cloud Functions, BigQuery, Dataform, and Pub/Sub
Significant hands-on experience with CI/CD pipelines, DevOps tooling, and modern engineering practices
Strong proficiency in SQL, Python, and PySpark
Experience supporting AI/ML workflows
Experience working with large-scale data processing, ETL/ELT pipelines, and data transformation using PySpark
Backend Python experience with Flask, Django, or FastAPI