Lead Data Analytics Engineer

ResMed

Location:
United States, San Diego

Contract Type:
Not provided

Salary:

171000.00 - 257000.00 USD / Year

Job Description:

Lead Data Analytics Engineer role at ResMed. This is a Senior Staff-level technical leadership role for someone who can set direction, define patterns, solve complex architectural challenges, and elevate data engineering capabilities across the organization, shaping the next generation of ResMed's data ecosystem.

Job Responsibility:

  • Set architectural strategy for data modeling, transformation, ingestion, and data products, and guide engineering best practices across teams
  • Lead analytics engineering by designing high-quality Snowflake/dbt models, establishing governance and testing standards, and mentoring engineers in scalable modeling and system design
  • Build and evolve data pipelines using Python, Spark, APIs, connector frameworks, and other ingestion technologies, introducing automation, observability, and resilient design patterns
  • Collaborate cross-functionally with product, engineering, and data science to shape impactful, scalable solutions
  • Drive future advanced analytics and ML capabilities by defining feature pipelines, supporting classical ML models, and enabling new AI-driven workloads including LLM-based and hybrid ML/AI architectures
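The responsibilities above mention introducing automation, observability, and resilient design patterns into ingestion pipelines. As an illustration only (not ResMed's actual code), one common resilience pattern is retry-with-exponential-backoff around a flaky source, with logging for observability; the `fetch_batch` source below is a hypothetical stand-in:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingest")

def with_retries(fn, max_attempts=3, base_delay=0.1):
    """Run fn, retrying with exponential backoff on failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception as exc:
            # Log each failed attempt so pipeline monitoring can surface flakiness.
            log.warning("attempt %d/%d failed: %s", attempt, max_attempts, exc)
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

# Hypothetical flaky source: fails twice with a transient error, then succeeds.
calls = {"n": 0}
def fetch_batch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return [{"id": 1}, {"id": 2}]

rows = with_retries(fetch_batch)
print(len(rows))  # → 2
```

The same wrapper shape applies whether the step calls an API, a connector framework, or a Spark job submit; the backoff parameters would normally be tuned per source.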

Requirements:

  • Bachelor’s degree in a STEM field or equivalent experience
  • Extensive hands-on experience as a senior IC in data engineering, analytics engineering, or data architecture (typically 8+ years)
  • Expert-level SQL and data modeling skills on large-scale platforms (Snowflake preferred)
  • Strong experience building production data pipelines and models using Python, cloud services, and modern data stack tools
  • Proficiency with dbt or similar transformation frameworks
  • Demonstrated ability to set technical direction, define architectural patterns, and establish engineering best practices
  • Solid experience with Git/GitHub workflows, including branching strategies and collaborative development
  • Experience building and maintaining CI/CD pipelines in GitHub Actions, including automated testing and secure deployments
  • Ability to operate across both analytics engineering and data engineering responsibilities
  • Experience with cloud platforms such as AWS or GCP
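Several requirements above center on dbt-style testing and governance standards. In dbt, tests such as `not_null` and `unique` are declared in YAML against model columns; as a minimal sketch (with hypothetical column names, not ResMed's schema), the equivalent checks look like this in plain Python:

```python
def check_not_null(rows, column):
    """Return row indices where `column` is missing or None (dbt's not_null test)."""
    return [i for i, r in enumerate(rows) if r.get(column) is None]

def check_unique(rows, column):
    """Return values of `column` that appear more than once (dbt's unique test)."""
    seen, dupes = set(), set()
    for r in rows:
        v = r.get(column)
        if v in seen:
            dupes.add(v)
        seen.add(v)
    return sorted(dupes)

# Hypothetical sample data for illustration.
rows = [
    {"patient_id": 1, "device": "AirSense"},
    {"patient_id": 2, "device": None},
    {"patient_id": 2, "device": "AirCurve"},
]
print(check_not_null(rows, "device"))    # → [1]
print(check_unique(rows, "patient_id"))  # → [2]
```

In a CI pipeline (e.g., GitHub Actions, as the requirements mention), such checks typically run on every pull request so that failing data-quality assertions block the merge.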

Nice to have:

  • Experience with Dagster, Airflow, or similar orchestration tools
  • Familiarity with streaming or event-based processing (Kafka, Flink, Kinesis)
  • Familiarity with IaC such as Terraform
  • Experience supporting ML/AI workflows or integrating ML into data products
  • Master’s degree in a STEM field
  • Prior experience as a Staff or Senior Staff-level engineer

What we offer:

  • Comprehensive medical, vision, dental, and life, AD&D, short-term and long-term disability insurance
  • Sleep care management
  • Health Savings Account (HSA) and Flexible Spending Account (FSA)
  • Commuter benefits
  • 401(k)
  • Employee Stock Purchase Plan (ESPP)
  • Employee Assistance Program (EAP)
  • Tuition assistance
  • Flexible time off (FTO)
  • 11 paid holidays plus 3 floating days
  • Eligible for 14 weeks of primary caregiver or two weeks of secondary caregiver leave

Additional Information:

Job Posted:
February 18, 2026

Employment Type:
Fulltime
Work Type:
Hybrid work

Similar Jobs for Lead Data Analytics Engineer

Data Engineering Lead

Data Engineering Lead: a strategic professional who stays abreast of developments...
Location:
India, Pune
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • 10-15 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting and other Big data frameworks
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java with knowledge of core spark concepts (RDDs, Dataframes, Spark Streaming, etc) and Scala and SQL
  • Data Integration, Migration & Large Scale ETL experience (Common ETL platforms such as PySpark/DataStage/AbInitio etc.) - ETL design & build, handling, reconciliation and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
  • Experience with Confluent Kafka, Redhat JBPM, CI/CD build pipelines and toolchain – Git, BitBucket, Jira
Job Responsibility:
  • Strategic Leadership: Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy
  • Team Management: Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement
  • Architecture and Design: Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data
  • Technology Selection and Implementation: Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data
  • Performance Optimization: Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness, ensuring optimal access to global wealth data
  • Collaboration: Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions that support investment strategies and client reporting
  • Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations, particularly around sensitive financial data
Employment Type: Fulltime

Big Data / Scala / Python Engineering Lead

The Applications Development Technology Lead Analyst is a senior level position ...
Location:
India, Chennai
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • At least two years of experience building and leading highly complex technical data engineering teams (10+ years of hands-on data engineering experience overall)
  • Lead data engineering team, from sourcing to closing
  • Drive strategic vision for the team and product
  • Experience managing a data-focused product or ML platform
  • Hands-on experience designing, developing, and optimizing scalable distributed data processing pipelines using Apache Spark and Scala
  • Experience managing, hiring and coaching software engineering teams
  • Experience with large-scale distributed web services and the processes around testing, monitoring, and SLAs to ensure high product quality
  • 7 to 10+ years of hands-on experience in big data development, focusing on Apache Spark, Scala, and distributed systems
  • Proficiency in Functional Programming: High proficiency in Scala-based functional programming for developing robust and efficient data processing pipelines
  • Proficiency in Big Data Technologies: Strong experience with Apache Spark, Hadoop ecosystem tools such as Hive, HDFS, and YARN
Job Responsibility:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve variety of high impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
Employment Type: Fulltime

Data Engineering Lead

The Engineering Lead Analyst is a senior level position responsible for leading ...
Location:
Singapore, Singapore
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • 10-15 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting and other Big data frameworks
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java with knowledge of core spark concepts (RDDs, Dataframes, Spark Streaming, etc) and Scala and SQL
  • Data Integration, Migration & Large Scale ETL experience (Common ETL platforms such as PySpark/DataStage/AbInitio etc.) - ETL design & build, handling, reconciliation and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
  • Experience with Confluent Kafka, Redhat JBPM, CI/CD build pipelines and toolchain – Git, BitBucket, Jira
Job Responsibility:
  • Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy
  • Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement
  • Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data
  • Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data
  • Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness
  • Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions
  • Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations
What we offer:
  • Equal opportunity employer commitment
  • Accessibility and accommodation support
  • Global workforce benefits
Employment Type: Fulltime

Director, Data Engineering & Analytics

We are seeking a proven Data and Analytics leader to run our data team. This rol...
Location:
United States, Washington, DC
Salary:
165000.00 - 295625.00 USD / Year
Arcadia
Expiration Date:
Until further notice
Requirements:
  • Expert Data & Analytics leader with demonstrated experience processing and analyzing large-scale datasets (billions of records)
  • Deep expertise with Snowflake as a data platform, including performance optimization, cost management, and architecting for scale
  • Hands-on experience with modern data stack: dbt for transformation, Hex for analytics, and Fivetran/Airbyte for data ingestion
  • Have built and led Data & Analytics teams at high-growth SaaS companies, specifically those dealing with high-volume data processing
  • Experience with utility data, billing systems, or similar high-volume transactional data is highly valued
  • 12+ years in the workforce with significant experience in data-intensive environments
  • Top-notch technical skills covering both data and quantitative techniques: data facility, descriptive analytics, and predictive modeling
  • SQL and Python are a must, with demonstrated ability to write optimized queries for large-scale data processing
  • Experience with data governance, security, and compliance in handling sensitive customer data
Job Responsibility:
  • Build, lead, and scale a successful data organization
  • Oversee the processing and analysis of several million utility bills per month, ensuring data pipeline reliability, accuracy, and scalability
  • Ensure data quality and that we are building our products to give our customers the insight they need
  • Build a multi-year strategy around data infrastructure, enterprise data modeling, and processing capabilities
  • Build a framework for data investments that ensures we are appropriately balancing R&D with products that deliver strong return on investment
  • Lead the optimization and evolution of our Snowflake-based data architecture to handle exponential data growth
  • Own the enterprise unified data model and architecture that will power all of Arcadia’s applications and use cases
What we offer:
  • Competitive benefits and an equity component to the package
Employment Type: Fulltime

Lead Data Engineer

We are seeking an experienced Senior Data Engineer to lead the development of a ...
Location:
India, Kochi; Trivandrum
Salary:
Not provided
Experion Technologies
Expiration Date:
Until further notice
Requirements:
  • 5+ years experience in data engineering with analytical platform development focus
  • Proficiency in Python and/or PySpark
  • Strong SQL skills for ETL processes and large-scale data manipulation
  • Extensive AWS experience (Glue, Lambda, Step Functions, S3)
  • Familiarity with big data systems (AWS EMR, Apache Spark, Apache Iceberg)
  • Database experience with DynamoDB, Aurora, Postgres, or Redshift
  • Proven experience designing and implementing RESTful APIs
  • Hands-on CI/CD pipeline experience (preferably GitLab)
  • Agile development methodology experience
  • Strong problem-solving abilities and attention to detail
Job Responsibility:
  • Architect, develop, and maintain end-to-end data ingestion framework for extracting, transforming, and loading data from diverse sources
  • Use AWS services (Glue, Lambda, EMR, ECS, EC2, Step Functions) to build scalable, resilient automated data pipelines
  • Develop and implement automated data quality checks, validation routines, and error-handling mechanisms
  • Establish comprehensive monitoring, logging, and alerting systems for data quality issues
  • Architect and develop secure, high-performance APIs for data services integration
  • Create thorough API documentation and establish standards for security, versioning, and performance
  • Work with business stakeholders, data scientists, and operations teams to understand requirements
  • Participate in sprint planning, code reviews, and agile ceremonies
  • Contribute to CI/CD pipeline development using GitLab

Lead Data Engineer

As a Lead Data Engineer or architect at Made Tech, you'll play a pivotal role in...
Location:
United Kingdom, Any UK Office Hub (Bristol / London / Manchester / Swansea)
Salary:
80000.00 - 96000.00 GBP / Year
Made Tech
Expiration Date:
Until further notice
Requirements:
  • Proficiency in Git (incl. GitHub Actions) and able to explain the benefits of different branching strategies
  • Strong experience in IaC and able to guide how one could deploy infrastructure into different environments
  • Knowledge of handling and transforming various data types (JSON, CSV, etc) with Apache Spark, Databricks or Hadoop
  • Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes)
  • Ability to create data pipelines on a cloud environment and integrate error handling within these pipelines
  • You understand how to create reusable libraries to encourage uniformity of approach across multiple data pipelines
  • Able to document and present end-to-end diagrams to explain a data processing system on a cloud environment
  • Some knowledge of how you would present diagrams (C4, UML, etc.)
  • Enthusiasm for learning and self-development
  • You have experience working on agile delivery-led projects and can apply agile practices such as Scrum, XP, Kanban
Job Responsibility:
  • Define, shape and perfect data strategies in central and local government
  • Help public sector teams understand the value of their data, and make the most of it
  • Establish yourself as a trusted advisor in data-driven approaches using public cloud services like AWS, Azure and GCP
  • Contribute to our recruitment efforts and take on line management responsibilities
  • Help implement efficient data pipelines & storage
What we offer:
  • 30 days of paid annual leave
  • Flexible parental leave options
  • Part time remote working for all our staff
  • Paid counselling as well as financial and legal advice
  • 7% employer matched pension
  • Flexible benefit platform which includes a Smart Tech scheme, Cycle to work scheme, and an individual benefits allowance which you can invest in a Health care cash plan or Pension plan
  • Optional social and wellbeing calendar of events
Employment Type: Fulltime

Data & Analytics Engineer

As a Data & Analytics Engineer with MojoTech you will work with our clients to s...
Location:
United States
Salary:
90000.00 - 150000.00 USD / Year
MojoTech
Expiration Date:
Until further notice
Requirements:
  • 3+ years of experience in Data Engineering, Data Science, or Data Warehousing
  • Strong experience in Python
  • Experience building and maintaining ETL/ELT pipelines, data warehouses, or real-time analytics systems
  • BA/BS in Computer Science, Data Science, Engineering, or a related field or equivalent experience in data engineering or analytics
  • Track record of developing and optimizing scalable data solutions and larger-scale data initiatives
  • Strong understanding of best practices in data management, including sustainment, governance, and compliance with data quality and security standards
  • Commitment to continuous learning and sharing knowledge with the team
Job Responsibility:
  • Work with our clients to solve complex problems and to deliver high quality solutions as part of a team
  • Collaborating with product managers, designers, and clients, you will lead discussions to define data requirements and deliver actionable insights and data pipelines to support client analytics needs
What we offer:
  • Performance based end of year bonus
  • Medical, Dental, FSA
  • 401k with 4% match
  • Trust-based time off
  • Catered lunches when in office
  • 5 hours per week dedicated to self-directed learning, innovation projects, or skill development
  • Dog Friendly Offices
  • Paid conference attendance/yearly education stipend
  • Custom workstation
  • 6 weeks parental leave
Employment Type: Fulltime

Data Engineering & Analytics Lead

Premium Health is seeking a highly skilled, hands-on Data Engineering & Analytic...
Location:
United States, Brooklyn
Salary:
Not provided
Premium Health
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Engineering, or a related field. Master's degree preferred
  • Proven track record and progressively responsible experience in data engineering, data architecture, or related technical roles
  • Healthcare experience preferred
  • Strong knowledge of data engineering principles, data integration, ETL processes, and semantic mapping techniques and best practices
  • Experience implementing data quality management processes, data governance frameworks, cataloging, and master data management concepts
  • Familiarity with healthcare data standards (e.g., HL7, FHIR), health information management principles, and regulatory requirements (e.g., HIPAA)
  • Understanding of healthcare data, including clinical, operational, and financial data models, preferred
  • Advanced proficiency in SQL, data modeling, database design, optimization, and performance tuning
  • Experience designing and integrating data from disparate systems into harmonized data models or semantic layers
  • Hands-on experience with modern cloud-based data platforms (e.g., Azure, AWS, GCP)
Job Responsibility:
  • Collaborate with the CDIO and Director of Technology to define a clear data vision aligned with the organization's goals and execute the enterprise data roadmap
  • Serve as a thought leader for data engineering and analytics, guiding the evolution of our data ecosystem and championing data-driven decision-making across the organization
  • Build and mentor a small data team, providing technical direction and performance feedback, fostering best practices and continuous learning, while remaining a hands-on implementor
  • Define and implement best practices, standards, and processes for data engineering, analytics, and data management across the organization
  • Design, implement, and maintain a scalable, reliable, and high-performing modern data infrastructure, aligned with the organizational needs and industry best practices
  • Architect and maintain data lake/lakehouse, warehouse, and related platform components to support analytics, reporting, and operational use cases
  • Establish and enforce data architecture standards, governance models, naming conventions, and documentation
  • Develop, optimize, and maintain scalable ETL/ELT pipelines and data workflows to collect, transform, normalize, and integrate data from diverse systems
  • Implement robust data quality processes, validation, monitoring, and error-handling frameworks
  • Ensure data is accurate, timely, secure, and ready for self-service analytics and downstream applications
What we offer:
  • Paid Time Off, Medical, Dental and Vision plans, Retirement plans
  • Public Service Loan Forgiveness (PSLF)
Employment Type: Fulltime