We are looking for a Senior Data Platform Engineer to help build and scale the next generation of ResMed’s data ecosystem. This is a senior individual contributor role for an engineer who combines strong software engineering fundamentals with deep data engineering and analytics experience. You will design, build, and operate reliable, scalable data systems that power analytics, data products, and advanced AI/ML use cases across the organization.
Job Responsibilities:
Design, build, and maintain scalable data pipelines for ingestion, transformation, and delivery using Python, SQL, Spark, APIs, and modern cloud-native tools (a minimal sketch of such a pipeline follows this list)
Develop high-quality analytics and data models in Snowflake using dbt or similar frameworks, with a focus on performance, correctness, and maintainability
Apply strong software engineering practices to data systems, including modular design, testing, code reviews, and version control
Implement automation, monitoring, and observability to ensure reliable and resilient data pipelines in production
Collaborate closely with product managers, analytics engineers, data scientists, and application engineers to deliver data products that drive business and clinical outcomes
Support advanced analytics and ML use cases by building feature pipelines and data foundations for classical ML models and emerging AI-driven workloads
Contribute to shared standards, patterns, and best practices across the data engineering organization through hands-on contributions and technical collaboration
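For illustration, here is a minimal sketch of the kind of ingest-transform-load pipeline described above, written in Python with pandas. The source URL, column names, validation rules, and the load step are hypothetical placeholders, not a description of ResMed's actual stack:

```python
# Minimal sketch of a modular ingest -> transform -> load pipeline.
# EVENTS_URL and the load step are hypothetical placeholders.
import logging

import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

EVENTS_URL = "https://example.com/events.csv"  # hypothetical source


def extract(url: str) -> pd.DataFrame:
    """Ingest raw events from a CSV endpoint."""
    return pd.read_csv(url)


def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """Normalize column names and drop rows missing the primary key."""
    df = raw.rename(columns=str.lower)
    return df.dropna(subset=["event_id"])


def validate(df: pd.DataFrame) -> None:
    """Fail fast on basic quality rules before loading."""
    if df.empty:
        raise ValueError("no rows after transformation")
    if df["event_id"].duplicated().any():
        raise ValueError("duplicate event_id values")


def load(df: pd.DataFrame) -> None:
    """Placeholder for a warehouse load (e.g., a Snowflake connector)."""
    log.info("would load %d rows", len(df))


def run() -> None:
    df = transform(extract(EVENTS_URL))
    validate(df)
    load(df)


if __name__ == "__main__":
    run()
```

Keeping each stage a small, pure function is what makes the testing and code-review practices mentioned above practical: each step can be exercised in isolation with in-memory data.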
Requirements:
Bachelor’s degree in a STEM field or equivalent practical experience
Significant hands-on experience as a data engineer or senior software engineer working on data-intensive systems (typically 5–8+ years)
Strong SQL expertise and experience with data modeling on large-scale analytical platforms (Snowflake preferred)
Proven experience building and operating production data pipelines using Python and cloud services
Proficiency with dbt or similar transformation and analytics engineering tools
Solid software engineering fundamentals, including system design, debugging, performance optimization, and maintainable code practices
Experience with Git/GitHub workflows, including pull requests, code reviews, and collaborative development
Hands-on experience building or working with CI/CD pipelines (GitHub Actions preferred), including automated testing and deployments (see the test sketch after this list)
Ability to work effectively across both data engineering and analytics engineering responsibilities
Strong hands-on experience building and operating data systems on AWS, including designing cloud-native architectures and working with services such as S3, IAM, EC2/ECS/EKS, Lambda, Glue, EMR, or related AWS data and compute services
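As one concrete example of the automated testing mentioned above, here is a pytest-style unit test for a transformation function. The `pipeline` module and table shape are hypothetical (taken from the sketch earlier in this posting), and a suite like this would typically run on every pull request in CI:

```python
# Hypothetical unit test for a pipeline transformation, runnable with pytest.
import pandas as pd

from pipeline import transform  # hypothetical module from the sketch above


def test_transform_drops_rows_without_event_id():
    raw = pd.DataFrame(
        {"Event_ID": [1, None, 2], "Device": ["a", "b", "c"]}
    )
    out = transform(raw)
    # Columns are lower-cased and the null-keyed row is removed.
    assert list(out.columns) == ["event_id", "device"]
    assert out["event_id"].tolist() == [1.0, 2.0]
```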
Nice to have:
Experience with workflow orchestration tools such as Dagster, Airflow, or similar (a minimal Dagster sketch follows this list)
Familiarity with streaming or event-driven systems (Kafka, Flink, Kinesis)
Experience supporting ML/AI workflows or integrating ML models into data products
Master’s degree in a STEM field
Prior experience working in healthcare, regulated environments, or large-scale enterprise data platforms
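For the orchestration item above, here is a minimal Dagster sketch showing two dependent assets; the asset names and data are purely illustrative:

```python
# Minimal Dagster sketch: two dependent assets, purely illustrative.
import pandas as pd
from dagster import asset, materialize


@asset
def raw_events() -> pd.DataFrame:
    # Hypothetical in-memory stand-in for an ingested table.
    return pd.DataFrame({"event_id": [1, 2, 2], "value": [10, 20, 20]})


@asset
def deduped_events(raw_events: pd.DataFrame) -> pd.DataFrame:
    # Dagster wires the dependency on raw_events by parameter name.
    return raw_events.drop_duplicates(subset=["event_id"])


if __name__ == "__main__":
    materialize([raw_events, deduped_events])
```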
What we offer:
Comprehensive medical, vision, and dental coverage; life, AD&D, and short-term and long-term disability insurance; sleep care management; Health Savings Account (HSA); Flexible Spending Account (FSA); commuter benefits; 401(k); Employee Stock Purchase Plan (ESPP); Employee Assistance Program (EAP); and tuition assistance
Employees accrue 15 days of Paid Time Off (PTO) in their first year of employment, receive 11 paid holidays plus 3 floating days, and are eligible for 14 weeks of primary caregiver leave or 2 weeks of secondary caregiver leave when welcoming new family members