
Sr Data Platform Engineer


Amgen


Location:
India, Hyderabad


Contract Type:
Not provided


Salary:

Not provided

Job Responsibility:

  • Act as a platform lead for delivery of data platform capabilities that enable next-gen data platform architecture, with a strong focus on Databricks platform and DQ platform features and services
  • Evaluate and enable Databricks platform capabilities through technical assessments and proof‑of‑concepts (PoCs), ensuring alignment with next-gen data platform architectural patterns and enterprise standards
  • Design, build, and productionize reusable platform frameworks, accelerators, and reference implementations that can be leveraged by next-gen data platform delivery teams (excluding ownership of data pipeline architecture or implementation)
  • Enable data governance, metadata layer, and data bundle capabilities by designing and implementing platform‑level integrations between Databricks and Collibra, Amgen’s enterprise data governance platform
  • Build platform‑level tooling and automation to support proactive governance, cost optimization, and best‑practice enforcement across Databricks and related data platform services
  • Define and enable platform observability capabilities, including KPIs, metrics, and telemetry for monitoring performance, usage, reliability, and cost of Databricks services
  • Identify and implement governed self‑service platform capabilities for data engineers through a self-service portal, using Python‑based microservices deployed on Docker and Kubernetes
  • Lead user enablement and adoption initiatives, including onboarding content, guided learning experiences, workshops, and best‑practice sharing for the Databricks user community
  • Drive engineering excellence and adoption of AI across platform capabilities and solutions built, promoting modern engineering practices, automation, and responsible use of AI‑driven features
  • Enable key business programs and strategic initiatives by translating initiative‑driven requirements into scalable, reusable data platform capabilities, in alignment with next-gen data platform principles
  • Collaborate closely with Enterprise Data Architecture (EDA), governance, platform operations, and delivery teams to ensure platform capabilities are aligned, consumable, and enterprise‑ready

Requirements:

  • Master’s or Bachelor’s degree in a computer science or engineering field and 8 to 13 years of relevant experience
  • Strong hands‑on experience across Databricks capabilities, including compute, storage, Unity Catalog, data engineering, BI, and AI/ML, with a focus on governance and enterprise enablement
  • Proven hands‑on experience with cloud platforms, with strong preference for AWS (experience with Azure or GCP also acceptable)
  • Experience leading Data Quality platform initiatives (e.g., Ataccama, Monte Carlo), including tool evaluation, implementation, enterprise-wide adoption, and integration with enterprise DQ solutions
  • Experience owning and managing Databricks platform environments, including workspace architecture, environment strategy (dev/test/prod), and lifecycle management at scale
  • Proven ability to establish and enforce platform standards and operating models, including cluster policies, cost management, and workload orchestration frameworks
  • Strong focus on platform enablement and developer experience, including building reusable frameworks, defining best practices, and supporting engineering teams in adopting the platform effectively
  • Exposure to AI/ML capabilities on Databricks, including enabling AI‑driven features or accelerating adoption of AI‑assisted engineering practices
  • Solid knowledge of SQL and relational / dimensional data modelling, sufficient to support platform integrations, governance, and observability use cases
  • Experience working with core AWS services such as EKS, EC2, S3, Lambda, Glue, EMR, RDS, and Redshift/Spectrum, particularly in platform or shared‑services contexts
  • Strong analytical and problem‑solving skills, with the ability to design scalable, reusable solutions for complex data platform challenges
  • Experience working in Agile delivery environments, with exposure to tools such as Jira or Jira Align
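As a concrete illustration of the cluster-policy and cost-management work named in the requirements above, here is a minimal sketch of a Databricks cluster-policy definition. It uses the documented `fixed`/`range` constraint types from the Databricks cluster-policies API; the `build_cost_policy` helper name and the specific limits are hypothetical, chosen only to show the shape of a cost-control policy, not any actual Amgen standard.

```python
import json

def build_cost_policy(max_workers: int, max_idle_minutes: int) -> str:
    """Build a Databricks cluster-policy definition (JSON string) that caps
    cluster size and forces auto-termination. Helper and limits are
    illustrative, not part of any real deployment."""
    definition = {
        # Cap autoscaling so a single workload cannot grab an unbounded cluster.
        "autoscale.max_workers": {"type": "range", "maxValue": max_workers},
        # Force idle clusters to shut down, a common cost-control lever.
        "autotermination_minutes": {
            "type": "range",
            "minValue": 10,
            "maxValue": max_idle_minutes,
            "defaultValue": max_idle_minutes,
        },
        # Pin clusters to a Unity Catalog-compatible access mode.
        "data_security_mode": {"type": "fixed", "value": "USER_ISOLATION"},
    }
    return json.dumps(definition, indent=2)

if __name__ == "__main__":
    print(build_cost_policy(max_workers=8, max_idle_minutes=60))
```

A policy like this would typically be registered once by the platform team via the Cluster Policies API and granted to workspace users, so every cluster they create inherits the limits.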

Nice to have:

  • Experience contributing to or enabling self‑service platforms or portals (front‑end or API‑driven), including collaboration with front‑end teams (e.g., React‑based portals)
  • Proficiency in Python‑based microservices development, including designing and deploying APIs and services that enable platform capabilities
  • Experience building platform APIs and services with Databricks SDKs and REST APIs for provisioning, governing, and managing Databricks environments, including workspaces, clusters, jobs, users, permissions, and related platform features
  • Experience with DQ platforms such as Ataccama or Monte Carlo used alongside data engineering platforms
  • Familiarity with platform observability, telemetry, or cost‑optimization patterns for large‑scale data platforms
  • Experience enabling data governance, metadata, or lineage integrations between data platforms and enterprise governance tools (e.g., Collibra)
  • Experience working with SQL/NoSQL databases and vector databases, particularly in the context of LLM‑enabled or AI‑assisted platform solutions
  • Experience with CI/CD pipelines, containerization (Docker), and Kubernetes/EKS, and applying these practices to enterprise platform services
  • Strong understanding of software engineering best practices, including version control, automated testing, continuous integration, and production‑grade design
  • Certifications (preferred but not required): AWS Certified Data Engineer, Databricks Certification, SAFe Agile Certification

Additional Information:

Job Posted:
May 04, 2026

Employment Type:
Full-time
Work Type:
On-site work

Similar Jobs for Sr Data Platform Engineer

Sr Engineer, Data

The Sr Data Engineer designs and develops data architectures in on-premise, clou...
Location:
United States, Overland Park
Salary:
105100.00 - 189600.00 USD / Year
T-Mobile
Expiration Date:
Until further notice
Requirements:
  • Bachelor's Degree in Computer Engineering, Computer Science, a related subject area, or equivalent experience
  • 5+ years developing cloud solutions using data services
  • Experience with cloud platforms (Amazon Web Services, Azure, or Google Cloud)
  • Hands-on development using and migrating data to cloud platforms
  • Experience in SQL, NoSQL, and/or relational database design and development
  • Advanced knowledge and experience building complex data pipelines with Python; experience in languages such as SQL, DAX, Python, Java, Scala, and/or Go
Job Responsibility:
  • Develop data engineering solutions, including data pipelines, visualization and analytical tools
  • Design and develop data architectures in on-premise, cloud and hybrid platforms
  • Data wrangling of heterogeneous data, exploration and discovery in pursuit of new business insights
  • Actively contribute to the team’s knowledge and drive new capabilities forward
  • Mentor other team members in their efforts to build data engineering skillsets
  • Assist team management in defining projects, including helping estimate, plan and scope work
  • Prepare and contribute to presentations required by management
What we offer:
  • Competitive base salary and compensation package
  • Annual stock grant
  • Employee stock purchase plan
  • 401(k)
  • Access to free, year-round money coaches
  • Medical, dental and vision insurance
  • Flexible spending account
  • Paid time off
  • Up to 12 paid holidays
  • Paid parental and family leave
  • Fulltime

Sr. Data Engineer - Snowflake

Data Ideology is seeking a Sr. Snowflake Data Engineer to join our growing team ...
Location:
Not provided
Salary:
Not provided
Data Ideology
Expiration Date:
Until further notice
Requirements:
  • 7+ years of experience in data engineering, data warehousing, or data architecture
  • 3+ years of hands-on Snowflake experience (performance tuning, data sharing, Snowpark, Snowpipe, etc.)
  • Strong SQL and Python skills, with production experience using dbt
  • Familiarity with cloud platforms (AWS, Azure, or GCP) and modern data tooling (Airflow, Fivetran, Power BI, Looker, Informatica, etc.)
  • Prior experience in a consulting or client-facing delivery role
  • Excellent communication skills, with the ability to collaborate across technical and business stakeholders
  • SnowPro Core Certification required (or willingness to obtain upon hire)
  • Advanced Snowflake certifications preferred
Job Responsibility:
  • Design and build scalable, secure, and cost-effective data solutions in Snowflake
  • Develop and optimize data pipelines using tools such as dbt, Python, CloverDX, and cloud-native services
  • Participate in discovery sessions with clients to gather requirements and translate them into solution designs and project plans
  • Collaborate with engagement managers and account teams to help scope work and provide technical input for Statements of Work (SOWs)
  • Serve as a Snowflake subject matter expert, guiding best practices in performance tuning, cost optimization, access control, and workload management
  • Lead modernization and migration initiatives to move clients from legacy systems into Snowflake
  • Integrate Snowflake with BI tools, governance platforms, and AI/ML frameworks
  • Contribute to internal accelerators, frameworks, and proofs of concept
  • Mentor junior engineers and support knowledge sharing across the team
What we offer:
  • Flexible Time Off Policy
  • Eligibility for Health Benefits
  • Retirement Plan with Company Match
  • Training and Certification Reimbursement
  • Utilization Based Incentive Program
  • Commission Incentive Program
  • Referral Bonuses
  • Work from Home
  • Fulltime

Sr. Data Engineer

We are looking for a Sr. Data Engineer to join our growing Quality Engineering t...
Location:
Not provided
Salary:
Not provided
Data Ideology
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s degree in Computer Science, Information Systems, or a related field (or equivalent experience)
  • 5+ years of experience in data engineering, data warehousing, or data architecture
  • Expert-level experience with Snowflake, including data modeling, performance tuning, security, and migration from legacy platforms
  • Hands-on experience with Azure Data Factory (ADF) for building, orchestrating, and optimizing data pipelines
  • Strong experience with Informatica (PowerCenter and/or IICS) for ETL/ELT development, workflow management, and performance optimization
  • Deep knowledge of data modeling techniques (dimensional, tabular, and modern cloud-native patterns)
  • Proven ability to translate business requirements into scalable, high-performance data solutions
  • Experience designing and supporting end-to-end data pipelines across cloud and hybrid architectures
  • Strong proficiency in SQL and experience optimizing large-scale analytic workloads
  • Experience working within SDLC frameworks, CI/CD practices, and version control
Job Responsibility:
  • Ability to collect and understand business requirements and translate those requirements into data models, integration strategies, and implementation plans
  • Lead modernization and migration initiatives to move clients from legacy systems into Snowflake, ensuring functionality, performance and data integrity
  • Ability to work within the SDLC framework in multiple environments and understand the complexities and dependencies of the data warehouse
  • Optimize and troubleshoot ETL/ELT workflows, applying best practices for scheduling, orchestration, and performance tuning
  • Maintain documentation, architecture diagrams, and migration plans to support knowledge transfer and project tracking
What we offer:
  • PTO Policy
  • Eligibility for Health Benefits
  • Retirement Plan
  • Work from Home
  • Fulltime

Sr Data Engineer

(Locals or Nearby resources only). You will work with technologies that include ...
Location:
United States, Glendale
Salary:
Not provided
Enormous Enterprise
Expiration Date:
Until further notice
Requirements:
  • 7+ years of data engineering experience developing large data pipelines
  • Proficiency in at least one major programming language (e.g. Python, Java, Scala)
  • Hands-on production environment experience with distributed processing systems such as Spark
  • Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
  • Experience with at least one major Massively Parallel Processing (MPP) or cloud database technology (Snowflake, Databricks, Big Query)
  • Experience in developing APIs with GraphQL
  • Advanced understanding of OLTP vs OLAP environments
  • Candidates must work on W2; no Corp-to-Corp
  • US Citizen, Green Card Holder, H4-EAD, TN-Visa
Job Responsibility:
  • Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
  • Build and maintain APIs to expose data to downstream applications
  • Develop real-time streaming data pipelines
  • Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
  • Contribute to developing and documenting both internal and external standards and best practices for pipeline configurations, naming conventions, and more
  • Ensure high operational efficiency and quality of the Core Data platform datasets to ensure our solutions meet SLAs and project reliability and accuracy to all our stakeholders (Engineering, Data Science, Operations, and Analytics teams)
What we offer:
  • 3 levels of medical insurance for you and your family
  • Dental insurance for you and your family
  • 401k
  • Overtime
  • Sick leave policy: accrue 1 hour for every 30 hours worked up to 48 hours

Sr. Data Engineer

We are looking for a skilled Sr. Data Engineer to join our team in Oklahoma City...
Location:
United States, Oklahoma City
Salary:
Not provided
Robert Half
Expiration Date:
Until further notice
Requirements:
  • Proven experience with Snowflake data warehousing and schema design
  • Proficiency in ETL tools such as Matillion or similar platforms
  • Strong knowledge of Python and PowerShell for data automation
  • Experience working with Microsoft SQL Server and related technologies
  • Familiarity with cloud technologies, particularly AWS
  • Understanding of data visualization and analytics tools
  • Background in working with big data technologies such as Apache Kafka, Hadoop, Spark, or Pig
  • Ability to design and implement APIs for data integration and management
Job Responsibility:
  • Design, implement, and maintain Snowflake data warehousing solutions to support business needs
  • Assist in the migration of in-house data to Snowflake, ensuring a seamless transition
  • Develop data pipelines and workflows using tools such as Matillion or equivalent ETL solutions
  • Collaborate with teams to optimize and manage the existing data warehouse built on Microsoft SQL Server
  • Utilize Python and PowerShell to automate data processes and enhance system efficiency
  • Partner with the implementation team to shadow and learn best practices for Snowflake deployment
  • Ensure data integrity, scalability, and security across all data engineering processes
  • Provide insights into data visualization and analytics to support decision-making
  • Work with cloud technologies, including AWS, to enhance data storage and accessibility
  • Implement and manage APIs to enable seamless data integration and sharing
What we offer:
  • Medical, vision, dental, and life and disability insurance
  • Eligibility to enroll in 401(k) plan
  • Access to competitive compensation and free online training
  • Fulltime

Software Engineer Sr Staff - Platforms Developer

Designs, develops, troubleshoots and debugs software programs for software enhan...
Location:
India, Bangalore
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or master’s degree in computer science, electronics, telecommunication engineering, or a related discipline
  • 14 to 19 years of experience in networking and system software development
  • Proficiency in C and C++ programming
  • Familiarity with data structures and system debugging techniques
  • Expertise in Host Complex, System Peripherals & Drivers: CPU complex (x86), PCIe, SPI, I2C, MDIO, FPGA, CPLD, Flash Drivers
  • Expertise in Ethernet Interfaces (ranging from 1Gig to 400G+, including 800G, 1.6T), MacSec, Timing, Optics (SFP, QSFP, QDD, OSFP)
  • Expertise in High-speed packet forwarding with network processors, PHYs, and SerDes
  • Cloud Architectures
Job Responsibility:
  • Collaborate with product managers, architects, and other engineers to define software requirements and specifications
  • Design, implement, and maintain networking and system software components using C and C++ programming languages
  • Conduct object-oriented analysis and design to ensure robust and scalable solutions
  • Debug complex system-level issues, leveraging your deep understanding of fundamental OS concepts (especially in Linux or similar operating systems)
  • Participate in hardware and system-level design discussions, ensuring carrier-class software development
  • Work with Linux device drivers, system bring-up, and the Linux kernel
  • Navigate large codebases effectively
  • Apply strong technical, analytical, and problem-solving skills to enhance software performance and resilience
  • Utilize scripting technologies and modern DevOps practices
  • Collaborate with cross-functional teams, including networking, embedded platform software, and hardware experts
What we offer:
  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion
  • Fulltime

Sr. Staff ML Platform Engineer

Machine learning is the crucial enabler for every financial service that EarnIn ...
Location:
United States, Mountain View
Salary:
360000.00 - 440000.00 USD / Year
EarnIn
Expiration Date:
Until further notice
Requirements:
  • Bachelor's or Master’s degree in Computer Science, Engineering, or a related field
  • 8+ years of industry machine learning experience and excellent software engineering skills
  • Strong programming skills in Python, with familiarity in ML frameworks such as TensorFlow or PyTorch
  • Experience with ML cloud platforms such as AWS Sagemaker, Databricks, or GCP Vertex AI
  • Familiarity with data pipelines and workflow management tools
  • Strong communication and collaboration skills
  • Passion for learning and staying updated with the latest industry trends in machine learning and platform engineering
Job Responsibility:
  • Design, build, and maintain a robust ML platform and tooling ecosystem that supports the entire machine learning lifecycle, from experimentation to production
  • Lead and mentor a team of ML engineers, deeply understanding their workflows to streamline model training, deployment, and monitoring, while ensuring reproducibility and consistency of results
  • Drive scalability, reliability, and cost efficiency of the ML platform, balancing performance with ease of use for scientists and engineers
  • Evaluate and adopt emerging technologies to continually advance the organization’s machine learning capabilities and maintain a competitive edge
  • Champion operational excellence, setting a high bar for engineering quality, reliability, and automation
  • Act as a catalyst for innovation, spearheading step-change improvements that unlock new opportunities for growth and efficiency
What we offer:
  • Equity and benefits
  • Fulltime

Sr. Staff Software Engineer - Advanced Analytics Platform

At DISQO, we’re redefining how companies turn data into decisions. Our mission i...
Location:
United States, Los Angeles, Glendale
Salary:
200000.00 - 240000.00 USD / Year
DISQO
Expiration Date:
Until further notice
Requirements:
  • 12+ years of professional software engineering experience
  • 5+ years architecting or building high-performance data systems or analytics platforms
  • 3+ years of production Rust experience
  • Deep expertise in Rust and strong experience in Java
  • Proven track record building large-scale data analytics or OLAP systems from the ground up
  • Deep understanding of columnar data engines, vectorized execution, and query/dataframe optimization
  • Hands-on experience with performance engineering, profiling, and hardware-aware optimization
  • Strong expertise with AWS - designing, deploying, and optimizing large-scale data and compute systems in the cloud
  • A systems-thinking mindset
  • Thrives in a fast-moving, startup environment
Job Responsibility:
  • Architect and deliver a high-performance Advanced Analytics Engine
  • Design and build an Agentic AI system that leverages this Advanced Analytics Engine
  • Partner with product, engineering and data teams to power agentic AI analytics systems
  • Profile, benchmark, and optimize Rust components
  • Leverage AWS cloud services to architect scalable, reliable, and cost-efficient analytics infrastructure
  • Shape the evolution of DISQO’s broader data platform and its integration across our product ecosystem
  • Mentor and guide engineers
  • Contribute to open-source or internal frameworks that advance analytical systems and distributed computation
What we offer:
  • 100% covered Medical/Dental/Vision for employee
  • Equity
  • 401K
  • Generous PTO policy
  • Flexible workplace policy
  • Team offsites, social events & happy hours
  • Life Insurance
  • Health FSA
  • Commuter FSA (for hybrid employees)
  • Catered lunch and fully stocked kitchen
  • Fulltime