Sr Data Platform Lead

Amgen

Location:
India, Hyderabad

Contract Type:
Employment contract

Salary:

Not provided

Job Description:

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives.

Job Responsibility:

  • Act as a platform lead for delivery of data platform capabilities that enable next-gen data platform architecture, with a strong focus on Databricks platform and DQ platform features and services
  • Evaluate and enable Databricks platform capabilities through technical assessments and proof‑of‑concepts (PoCs), ensuring alignment with next-gen data platform architectural patterns and enterprise standards
  • Design, build, and productionize reusable platform frameworks, accelerators, and reference implementations that can be leveraged by next-gen data platform delivery teams (excluding ownership of data pipeline architecture or implementation)
  • Enable data governance, metadata layer, and data bundle capabilities by designing and implementing platform‑level integrations between Databricks and Collibra, Amgen’s enterprise data governance platform
  • Build platform‑level tooling and automation to support proactive governance, cost optimization, and best‑practice enforcement across Databricks and related data platform services
  • Define and enable platform observability capabilities, including KPIs, metrics, and telemetry for monitoring performance, usage, reliability, and cost of Databricks services
  • Identify and implement governed self‑service platform capabilities for data engineers through a self-service portal, using Python‑based microservices deployed on Docker and Kubernetes
  • Lead user enablement and adoption initiatives, including onboarding content, guided learning experiences, workshops, and best‑practice sharing for the Databricks user community
  • Drive engineering excellence and adoption of AI across platform capabilities and solutions built, promoting modern engineering practices, automation, and responsible use of AI‑driven features
  • Enable key business programs and strategic initiatives by translating initiative‑driven requirements into scalable, reusable data platform capabilities, in alignment with next-gen data platform principles
  • Collaborate closely with Enterprise Data Architecture (EDA), governance, platform operations, and delivery teams to ensure platform capabilities are aligned, consumable, and enterprise‑ready

Requirements:

  • Master's or Bachelor's degree in a computer science or engineering field, with 8 to 13 years of relevant experience
  • Strong hands‑on experience across Databricks capabilities, from compute and storage to Unity Catalog, data engineering, BI, and AI/ML, with a focus on governance and enterprise enablement
  • Proven hands‑on experience with cloud platforms, with strong preference for AWS (experience with Azure or GCP also acceptable)
  • Experience leading Data Quality platform initiatives (e.g., Ataccama, Monte Carlo), including tool evaluation, implementation, enterprise-wide adoption, and integration with enterprise DQ solutions
  • Experience owning and managing Databricks platform environments, including workspace architecture, environment strategy (dev/test/prod), and lifecycle management at scale
  • Proven ability to establish and enforce platform standards and operating models, including cluster policies, cost management, and workload orchestration frameworks
  • Strong focus on platform enablement and developer experience, including building reusable frameworks, defining best practices, and supporting engineering teams in adopting the platform effectively
  • Exposure to AI/ML capabilities on Databricks, including enabling AI‑driven features or accelerating adoption of AI‑assisted engineering practices
  • Solid knowledge of SQL and relational / dimensional data modelling, sufficient to support platform integrations, governance, and observability use cases
  • Experience working with core AWS services such as EKS, EC2, S3, Lambda, Glue, EMR, RDS, and Redshift/Spectrum, particularly in platform or shared‑services contexts
  • Strong analytical and problem‑solving skills, with the ability to design scalable, reusable solutions for complex data platform challenges
  • Experience working in Agile delivery environments, with exposure to tools such as Jira or Jira Align

Nice to have:

  • Experience contributing to or enabling self‑service platforms or portals (front‑end or API‑driven), including collaboration with front‑end teams (e.g., React‑based portals)
  • Proficiency in Python‑based microservices development, including designing and deploying APIs and services that enable platform capabilities
  • Experience building platform APIs and services with Databricks SDKs and REST APIs for provisioning, managing, or governing Databricks environments and managing workspaces, clusters, jobs, users, permissions, and related platform features
  • Experience with DQ platforms like Ataccama or Monte Carlo used in association with data engineering platforms
  • Familiarity with platform observability, telemetry, or cost‑optimization patterns for large‑scale data platforms
  • Experience enabling data governance, metadata, or lineage integrations between data platforms and enterprise governance tools (e.g., Collibra)
  • Experience working with SQL/NoSQL databases and vector databases, particularly in the context of LLM‑enabled or AI‑assisted platform solutions
  • Experience with CI/CD pipelines, containerization (Docker), and Kubernetes/EKS, and applying these practices to enterprise platform services
  • Strong understanding of software engineering best practices, including version control, automated testing, continuous integration, and production‑grade design
  • Certifications (preferred but not required): AWS Certified Data Engineer, Databricks Certification, SAFe Agile Certification

Additional Information:

Job Posted:
May 10, 2026

Employment Type:
Full-time
Work Type:
On-site work

Similar Jobs for Sr Data Platform Lead

Sr. Data Engineer

We are looking for a Sr. Data Engineer to join our growing Quality Engineering t...
Salary: Not provided
Data Ideology
Expiration Date: Until further notice

Requirements:
  • Bachelor’s degree in Computer Science, Information Systems, or a related field (or equivalent experience)
  • 5+ years of experience in data engineering, data warehousing, or data architecture
  • Expert-level experience with Snowflake, including data modeling, performance tuning, security, and migration from legacy platforms
  • Hands-on experience with Azure Data Factory (ADF) for building, orchestrating, and optimizing data pipelines
  • Strong experience with Informatica (PowerCenter and/or IICS) for ETL/ELT development, workflow management, and performance optimization
  • Deep knowledge of data modeling techniques (dimensional, tabular, and modern cloud-native patterns)
  • Proven ability to translate business requirements into scalable, high-performance data solutions
  • Experience designing and supporting end-to-end data pipelines across cloud and hybrid architectures
  • Strong proficiency in SQL and experience optimizing large-scale analytic workloads
  • Experience working within SDLC frameworks, CI/CD practices, and version control
Job Responsibility:
  • Ability to collect and understand business requirements and translate those requirements into data models, integration strategies, and implementation plans
  • Lead modernization and migration initiatives to move clients from legacy systems into Snowflake, ensuring functionality, performance and data integrity
  • Ability to work within the SDLC framework in multiple environments and understand the complexities and dependencies of the data warehouse
  • Optimize and troubleshoot ETL/ELT workflows, applying best practices for scheduling, orchestration, and performance tuning
  • Maintain documentation, architecture diagrams, and migration plans to support knowledge transfer and project tracking
What we offer:
  • PTO Policy
  • Eligibility for Health Benefits
  • Retirement Plan
  • Work from Home
  • Full-time

Big Data Lead Developer (Hadoop/Java/Spark/Scala/Python)

The Applications Development Technology Lead Analyst is a senior level position ...
Location: India, Chennai
Salary: Not provided
Citi
Expiration Date: Until further notice

Requirements:
  • 8-10 years of relevant experience in Big Data Development
  • Sr. Java resource with experience in Java/J2EE, Hadoop, Scala, Hive, Impala, Kafka and Elastic
  • Good knowledge of design patterns and providing solutions to complex design issues, identification and resolution of code issues
  • Hands-on experience in managing application development using Spark (Scala, Python, or Java), SQL, and the Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.)
  • Experience as senior level in an Applications Development role
  • Proven Solution Delivery skills
  • Basic knowledge of finance industry practices and standards
  • Excellent analytical and process-based skills, e.g., process flow diagrams, business modelling, and functional design
  • Bachelor’s degree/University degree or equivalent experience
Job Responsibility:
  • Manage one or more Applications in an effort to accomplish established goals as well as conduct personnel duties for team like hiring and training
  • Design and Develop real time and batch data transformation processes using wide range of technologies using Hadoop, Spark Stream, Spark SQL, Python, Hive etc.
  • Design and develop programs to build functionalities in the next-generation Big Data platform, which is also an authorized data redistributor
  • Ability to translate architecture and low-level requirements to design and code using Big-data tools and processes
  • Utilize in-depth specialty knowledge of applications development to analyze complex problems/issues, provide evaluation of business process, system process, and industry standards, and make evaluative judgement
  • Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and establish and implement new or revised applications systems and programs to meet specific business needs or user areas
  • Monitor and control all phases of development process and analysis, design, construction, testing, and implementation as well as provide user and operational support on applications to business users
  • Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems
  • Review and analyze proposed technical solutions for projects
  • Impact the Applications Development area through monitoring delivery of end results, participate in budget management, and handling day-to-day staff management issues, including resource management and allocation of work within the team/project
What we offer:
  • Best-in-class benefits
  • Global Benefits
  • Equal opportunity and affirmative action employer
  • Full-time

Sr. Data Engineer - Snowflake

Data Ideology is seeking a Sr. Snowflake Data Engineer to join our growing team ...
Salary: Not provided
Data Ideology
Expiration Date: Until further notice

Requirements:
  • 7+ years of experience in data engineering, data warehousing, or data architecture
  • 3+ years of hands-on Snowflake experience (performance tuning, data sharing, Snowpark, Snowpipe, etc.)
  • Strong SQL and Python skills, with production experience using dbt
  • Familiarity with cloud platforms (AWS, Azure, or GCP) and modern data tooling (Airflow, Fivetran, Power BI, Looker, Informatica, etc.)
  • Prior experience in a consulting or client-facing delivery role
  • Excellent communication skills, with the ability to collaborate across technical and business stakeholders
  • SnowPro Core Certification required (or willingness to obtain upon hire); advanced Snowflake certifications preferred
Job Responsibility:
  • Design and build scalable, secure, and cost-effective data solutions in Snowflake
  • Develop and optimize data pipelines using tools such as dbt, Python, CloverDX, and cloud-native services
  • Participate in discovery sessions with clients to gather requirements and translate them into solution designs and project plans
  • Collaborate with engagement managers and account teams to help scope work and provide technical input for Statements of Work (SOWs)
  • Serve as a Snowflake subject matter expert, guiding best practices in performance tuning, cost optimization, access control, and workload management
  • Lead modernization and migration initiatives to move clients from legacy systems into Snowflake
  • Integrate Snowflake with BI tools, governance platforms, and AI/ML frameworks
  • Contribute to internal accelerators, frameworks, and proofs of concept
  • Mentor junior engineers and support knowledge sharing across the team
What we offer:
  • Flexible Time Off Policy
  • Eligibility for Health Benefits
  • Retirement Plan with Company Match
  • Training and Certification Reimbursement
  • Utilization Based Incentive Program
  • Commission Incentive Program
  • Referral Bonuses
  • Work from Home
  • Full-time

Sr. Data Analyst, Customer Reporting

There's likely a reason you've taken the time out of your busy day to review thi...
Location: United States
Salary: 100000.00 - 115000.00 USD / Year
PulsePoint
Expiration Date: Until further notice

Requirements:
  • At least 5-8 years of experience as a Data Analyst or Data Engineer
  • Strong analytical and problem-solving skills, with attention to detail and a proactive approach to data-driven decision-making
  • Strong SQL skills
  • Knowledge of ETL processes and tools for data integration and transformation
  • Excellent communication and interpersonal skills to collaborate effectively with cross-functional teams
  • Experience with distributed query engines (hive/presto/bigquery)
Job Responsibility:
  • Lead the modernization of vendor-level reporting by identifying inefficiencies, automating manual workflows, and implementing scalable reporting frameworks
  • Partner with product team on prioritizing and implementing new features into platform reporting ensuring seamless compatibility when migrating clients
  • Partner closely with external vendors to refine data exchange processes, resolve discrepancies, and design integrations that improve data quality and reporting accuracy for both parties
  • Serve as an SME on PulsePoint’s internal data flows, ensuring vendor reporting aligns with platform architecture, compliance requirements, and evolving product capabilities
  • Conduct deep-dive performance analyses across vendor datasets to proactively detect emerging issues, performance degradation, and optimization opportunities
  • Implement monitoring and alerting mechanisms to proactively identify and resolve performance issues and identify bottlenecks and scalability issues
  • Document infrastructure changes, updates, and best practices for knowledge sharing across team, ensuring transparency and continuity
  • Improve vendor onboarding processes by defining clear reporting specs, data validation rules, and integration requirements
  • Actively participate in the development of new data-related features and products, as well as testing them to ensure they meet our high standards of quality and functionality
What we offer:
  • Comprehensive healthcare with medical, dental, and vision options, and 100%-paid life & disability insurance
  • 401(k) Match
  • Generous paid vacation and sick time
  • Paid parental leave & adoption assistance
  • Annual tuition assistance
  • Better Yourself Wellness program
  • Group volunteer opportunities and fun events
  • Commuter benefits and commuting subsidy
  • A referral bonus program
  • Full-time

Sr Manager - Platform Software, Device Drivers, System Bring-Up

The candidate will be part of a platform software team that delivers high speed,...
Location: India, Bangalore
Salary: Not provided
Hewlett Packard Enterprise
Expiration Date: Until further notice

Requirements:
  • BS/MS degree
  • 14+ years work experience with at least 5 years in networking area
  • Experience in developing high performance modular platforms
  • 5+ years of experience leading/managing a high-performance team
  • Development experience on networking products
  • Good understanding of hardware-level details for Optics, PCIe, SPI, I2C, Retimers, FPGA, CPLD, MDIO, Flash Driver
  • Proficiency with device drivers, system bring-up, FreeBSD/Linux internals
  • Understanding of Ethernet, OTN, SONET, etc. technologies
  • Strong technical, analytical, and problem-solving skills
  • Strong in C, C++ programming, OO analysis & design, data structures, and system debugging skills
Job Responsibility:
  • Leading and managing a team of 15+ platform engineers
  • Working on projects from conception, design, development to productization of hardware and software for network platforms
  • Working closely with product management and cross-functional teams
  • Coming up with resource requirements and project schedules
  • Managing and monitoring progress for projects and mitigating risks
  • Championing quality within and outside the team
  • Interacting with vendors
  • Leading the team in case of customer escalations
  • Managing and nurturing a high performing team
What we offer:
  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion
  • Full-time

Software Engineer Sr Staff - Platforms Developer

Designs, develops, troubleshoots and debugs software programs for software enhan...
Location: India, Bangalore
Salary: Not provided
Hewlett Packard Enterprise
Expiration Date: Until further notice

Requirements:
  • Bachelor’s or master’s degree in computer science, electronics, telecommunication engineering, or a related discipline
  • 14 to 19 years of experience in networking and system software development
  • Proficiency in C and C++ programming
  • Familiarity with data structures and system debugging techniques
  • Expertise in Host Complex, System Peripherals & Drivers: CPU complex (x86), PCIe, SPI, I2C, MDIO, FPGA, CPLD, Flash Drivers
  • Expertise in Ethernet Interfaces (ranging from 1Gig to 400G+, including 800G, 1.6T), MacSec, Timing, Optics (SFP, QSFP, QDD, OSFP)
  • Expertise in High-speed packet forwarding with network processors, PHYs, and SerDes
  • Cloud Architectures
Job Responsibility:
  • Collaborate with product managers, architects, and other engineers to define software requirements and specifications
  • Design, implement, and maintain networking and system software components using C and C++ programming languages
  • Conduct object-oriented analysis and design to ensure robust and scalable solutions
  • Debug complex system-level issues, leveraging your deep understanding of fundamental OS concepts (especially in Linux or similar operating systems)
  • Participate in hardware and system-level design discussions, ensuring carrier-class software development
  • Work with Linux device drivers, system bring-up, and the Linux kernel
  • Navigate large codebases effectively
  • Apply strong technical, analytical, and problem-solving skills to enhance software performance and resilience
  • Utilize scripting technologies and modern DevOps practices
  • Collaborate with cross-functional teams, including networking, embedded platform software, and hardware experts
What we offer:
  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion
  • Full-time

Sr. Director, Product Management for Data & AI, Security and HSI

The Sr. Director of Product Management will lead the strategy and execution of T...
Location: United States, Bellevue
Salary: 207700.00 - 280900.00 USD / Year
T-Mobile
Expiration Date: Until further notice

Requirements:
  • More than 10 years of Product Management experience in an agile software product development environment, preferably in a large consumer organization with millions of consumer applications, and an advanced-level understanding of superior customer experiences
  • 5+ years of experience in AI, data science, and analytics
  • Expertise in LLMs and Generative AI field
  • Experience leading digital applications and successfully launching Data & AI products in the market at large scale
  • Background working with Engineering and a strong understanding of the role of engineering in product development
  • 10+ years of experience in leading strategy, innovation, and data products
  • Advanced knowledge of data tools, techniques, and manipulation including cloud platforms, programming languages, and technology platforms
  • 5+ years leading and developing teams of 5 or more Manager-level direct reports with skip-level employees
  • Bachelor's Degree in Computer Science, Engineering, IT or equivalent
  • Demonstrated experience driving enterprise data, analytics, and insights solutions and other technologies
Job Responsibility:
  • Develop the data products that empower digital customer experiences to be contextual and personal, revamping and redesigning journeys using the new AI experiences
  • Define the overall strategy for how to build and deliver the best experience to our existing users and grow the strategic areas in the T-Life Super App and T-Shop on web
  • Build a Data & AI and Security platform for all Magenta users including Postpaid, Prepaid, TFB Micro, and HSI, reusing the platform for Metro users to build the same capabilities in the new MyMetro App
  • Enable all the core HSI (High Speed Internet) consumer experiences into one single platform available across Magenta and Metro brands
  • Oversee Security Products for ScamShield, P360, device diagnostics, VPN, credit monitoring, etc. in T-Life to ensure that we can fulfill our promise of “Peace of mind” in the T-Life app
  • Develop and maintain strategic partnerships with senior internal and/or external customers
  • Create, plan, and own a portfolio of high-quality products & services through a lifecycle of envisioning/investing/innovating
  • Champion and communicate information and AI product value and other key performance indicators to partners and team members
  • Develop change management and communication plans and execute connected with customer change initiatives
What we offer:
  • medical, dental and vision insurance
  • a flexible spending account
  • 401(k)
  • employee stock grants
  • employee stock purchase plan
  • paid time off
  • up to 12 paid holidays
  • paid parental and family leave
  • family building benefits
  • back-up care
  • Full-time

Sr. Staff ML Platform Engineer

Machine learning is the crucial enabler for every financial service that EarnIn ...
Location: United States, Mountain View
Salary: 360000.00 - 440000.00 USD / Year
EarnIn
Expiration Date: Until further notice

Requirements:
  • Bachelor's or Master’s degree in Computer Science, Engineering, or a related field
  • 8+ years of industry machine learning experience and excellent software engineering skills
  • Strong programming skills in Python, with familiarity in ML frameworks such as TensorFlow or PyTorch
  • Experience with ML cloud platforms such as AWS Sagemaker, Databricks, or GCP Vertex AI
  • Familiarity with data pipelines and workflow management tools
  • Strong communication and collaboration skills
  • Passion for learning and staying updated with the latest industry trends in machine learning and platform engineering
Job Responsibility:
  • Design, build, and maintain a robust ML platform and tooling ecosystem that supports the entire machine learning lifecycle, from experimentation to production
  • Lead and mentor a team of ML engineers, deeply understanding their workflows to streamline model training, deployment, and monitoring, while ensuring reproducibility and consistency of results
  • Drive scalability, reliability, and cost efficiency of the ML platform, balancing performance with ease of use for scientists and engineers
  • Evaluate and adopt emerging technologies to continually advance the organization’s machine learning capabilities and maintain a competitive edge
  • Champion operational excellence, setting a high bar for engineering quality, reliability, and automation
  • Act as a catalyst for innovation, spearheading step-change improvements that unlock new opportunities for growth and efficiency
What we offer:
  • equity and benefits
  • Full-time