We are seeking a Lead Data Engineer to serve as both a technical leader and a people coach for our India-based Data Enablement pod. This role will oversee the design, delivery, and maintenance of critical cross-functional datasets and reusable data assets while also managing a group of talented engineers in India.
Job Responsibilities:
Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms
Lead the technical execution of non-domain-specific initiatives (e.g., reusable dimensions, TLOG standardization, enablement pipelines)
Architect data models and reusable layers consumed by multiple downstream pods
Guide platform-wide patterns such as parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks
Mentor and coach team members
Partner with product and platform leaders to ensure engineering consistency and delivery excellence
Act as an L3 escalation point for operational data issues impacting foundational pipelines
Own engineering best practices, sprint planning, and quality across the Enablement pod
Contribute to platform discussions and architectural decisions across regions
Requirements:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
8-10 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark
Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse
Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices
Solid grasp of data governance, metadata tagging, and role-based access control
Proven ability to mentor and grow engineers in a matrixed or global environment
Strong verbal and written communication skills, with the ability to operate cross-functionally
Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management)
Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control tools, Master Data Management (MDM), and data quality tools
Strong experience in ETL/ELT development, QA, and operations/support processes (root cause analysis of production issues, code/data fix strategy, monitoring, and maintenance)
Hands-on experience with databases (Azure SQL DB, Snowflake, MySQL, Cosmos DB, etc.), file storage (Blob Storage), and Python/Unix shell scripting
Nice to have:
Certifications in Azure, Databricks, or Snowflake