
Cloud & AI Engineer


ResMed

Location:
Ireland, Dublin

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We are looking for our future Cloud & AI Engineer to join our Product Development team.

Join ResMed: a global leader in digital health. We’re leading the way in cloud-connected medical devices that transform care for people with sleep apnea, COPD, and other chronic diseases. Our comprehensive out-of-hospital software platforms support the professionals and caregivers who help people stay healthy in the home or care setting of their choice.

You’ll join a collaborative, fast-moving engineering team building cloud-native, AI-powered solutions in the rapidly evolving Digital Health space. The team works on real-world products that use AWS serverless and GenAI technologies to deliver personalized, intelligent experiences at scale.

You are a hands-on engineer who enjoys building, learning, and improving systems. You’re curious about AI, comfortable working in cloud environments, and motivated to deliver high-quality software that makes a real impact. You value teamwork, clear communication, and continuous improvement.

Job Responsibility:

  • Design, develop, test, and maintain software applications that are reliable, scalable, and secure
  • Collaborate with engineers, product managers, designers, and architects to define and implement solutions that meet user and business needs
  • Contribute to system design and technical decision-making in cloud-based, serverless environments
  • Write clean, maintainable, and well-tested code, adhering to team and industry best practices
  • Participate in code reviews, pair programming, and knowledge sharing
  • Investigate, troubleshoot, and resolve software defects and performance issues
  • Assist in the continuous improvement of development processes and delivery pipelines

Requirements:

  • 2–5 years of professional software development experience
  • Proficiency in Python, with a solid understanding of object-oriented design and clean code principles
  • Experience with AWS services, especially serverless (e.g., Lambda, API Gateway, DynamoDB, S3), and infrastructure-as-code tools like Terraform or CloudFormation
  • Experience building GenAI features, including LLM-based applications and RAG-backed chatbots, and implementing and improving prompt engineering
  • Strong grasp of RESTful API design, authentication/authorization mechanisms (OAuth2, JWT), and microservices architecture
  • Knowledge of message-brokering systems (e.g., SQS, SNS) and event-driven architectures
  • Experience working with NoSQL (e.g., DynamoDB, MongoDB) and relational databases (e.g., PostgreSQL, MySQL)
  • Exposure to DevOps practices, including CI/CD pipelines, Git, Docker, and monitoring/logging tools (e.g., CloudWatch, Datadog)
  • Understanding of software testing methodologies, including unit, integration, and end-to-end testing (e.g., Cypress)
  • Comfortable working in agile development environments, using tools like Jira, Confluence, and GitHub
  • A collaborative mindset and eagerness to learn, grow, and mentor others
  • A degree in Computer Science, Software Engineering, or a related field, or equivalent practical experience, is preferred
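For context on the serverless stack named in the requirements (a minimal sketch only; the handler shape and field names are illustrative assumptions, not part of this posting), a Python AWS Lambda handler behind an API Gateway proxy integration typically looks like:

```python
import json


def lambda_handler(event, context):
    # API Gateway proxy integration delivers the HTTP body as a JSON string.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    # A real handler would read/write DynamoDB or call a GenAI service here;
    # this sketch simply echoes a greeting in the proxy response format.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

The dict returned must follow the proxy response shape (`statusCode`, `headers`, `body` as a string) for API Gateway to translate it into an HTTP response.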

What we offer:
  • All employees benefit from a bonus plan
  • Competitive benefits
  • Working from home flexibility
  • Access to a referral bonus
  • Access to ResMed's preferred shareholding programme
  • Internal career opportunities - joining an international, fast-paced and rapidly growing company

Additional Information:

Job Posted:
February 18, 2026

Employment Type:
Full-time

Work Type:
Hybrid work

Similar Jobs for Cloud & AI Engineer

Private Cloud AI Customer Engineer

Private Cloud AI Customer Engineer role at Hewlett Packard Enterprise focused on...
Location:
India, Bangalore

Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date:
Until further notice

Requirements
  • Python (hands on with data science libraries preferred)
  • Linux
  • Kubernetes (GPU Scheduling)
  • Containerization (including repositories, creating container images, etc.)
  • Helm
  • AuthN/AuthZ (including SSO)
  • LangChain, LlamaIndex, vLLM
  • RAG Pipelines
  • Storage (object, file)
  • Big Data (structured vs unstructured) and storage solutions (data warehouses, lakes, distributed file systems)
Job Responsibility
  • Customer Onboarding: Assist customers in the initial adoption of the HPE PCAI Private Cloud AI product
  • Project manage the customer and use case
  • Provide hands-on support and guidance during the first three months post-purchase
  • Conduct informal product training sessions
  • Customer Engagement: Schedule and conduct regular cadence calls with customers
  • Serve as the primary point of contact for customers during the onboarding phase
  • Use Case Adoption: Guide customers through the initial adoption of their first use case
  • Work closely with customers to understand their specific requirements
  • Collaboration and Coordination: Collaborate with internal teams including Sales, Product Management, and Technical Support
  • Coordinate with multiple stakeholders to address customer needs
What we offer
  • Health & Wellbeing: comprehensive suite of benefits that supports physical, financial and emotional wellbeing
  • Personal & Professional Development: specific programs catered to helping you reach career goals
  • Unconditional Inclusion: inclusive work environment that celebrates individual uniqueness
  • Flexibility to manage work and personal needs
  • Full-time

Senior Data Engineer – Data Engineering & AI Platforms

We are looking for a highly skilled Senior Data Engineer (L2) who can design, bu...
Location:
India, Chennai, Madurai, Coimbatore

Salary:
Not provided
OptiSol Business Solutions
Expiration Date:
Until further notice

Requirements
  • Strong hands-on expertise in cloud ecosystems (Azure / AWS / GCP)
  • Excellent Python programming skills with data engineering libraries and frameworks
  • Advanced SQL capabilities including window functions, CTEs, and performance tuning
  • Solid understanding of distributed processing using Spark/PySpark
  • Experience designing and implementing scalable ETL/ELT workflows
  • Good understanding of data modeling concepts (dimensional, star, snowflake)
  • Familiarity with GenAI/LLM-based integration for data workflows
  • Experience working with Git, CI/CD, and Agile delivery frameworks
  • Strong communication skills for interacting with clients, stakeholders, and internal teams
Job Responsibility
  • Design, build, and maintain scalable ETL/ELT pipelines across cloud and big data platforms
  • Contribute to architectural discussions by translating business needs into data solutions spanning ingestion, transformation, and consumption layers
  • Work closely with solutioning and pre-sales teams for technical evaluations and client-facing discussions
  • Lead squads of L0/L1 engineers—ensuring delivery quality, mentoring, and guiding career growth
  • Develop cloud-native data engineering solutions using Python, SQL, PySpark, and modern data frameworks
  • Ensure data reliability, performance, and maintainability across the pipeline lifecycle—from development to deployment
  • Support long-term ODC/T&M projects by demonstrating expertise during technical discussions and interviews
  • Integrate emerging GenAI tools where applicable to enhance data enrichment, automation, and transformations
What we offer
  • Opportunity to work at the intersection of Data Engineering, Cloud, and Generative AI
  • Hands-on exposure to modern data stacks and emerging AI technologies
  • Collaboration with experts across Data, AI/ML, and cloud practices
  • Access to structured learning, certifications, and leadership mentoring
  • Competitive compensation with fast-track career growth and visibility
  • Full-time

Principal Engineering Manager - Applied AI

We are looking for a Principal Engineering Manager to join our growing Applied A...
Location:
United States, Seattle

Salary:
240,870.00 - 297,652.00 USD / Year
Highspot
Expiration Date:
Until further notice

Requirements
  • 2+ years of experience in Generative AI and Agentic AI systems, including LLMs, context engineering, and modern vector-based retrieval systems
  • 4+ years working as an engineering manager
  • 8+ years working as a professional software developer
  • A great understanding of Generative AI systems, best practices and experience in shipping Agentic AI into distributed, data-intensive production systems
  • Experience developing and operating Cloud services at enterprise scale
  • Strong programming skills in Java, Python, C#, Typescript or equivalent programming language
  • Substantial depth and breadth of management experience to lead and grow an Applied AI team
  • Great collaboration with teams with different backgrounds/expertise/functions
  • Expertise in the full product lifecycle: technical designs, project planning, iterative implementation, and successful product launches
Job Responsibility
  • Lead a team of Applied AI engineers that works at the bleeding edge of Generative AI to solve high-impact business challenges
  • Apply Generative AI to solve hard unsolved challenges in the application of Agentic AI to real-world business challenges
  • Grow, coach, build and scale the Applied AI team
  • Drive operational excellence to achieve enterprise-grade scale, reliability, security, cost-efficiency and performance
  • Drive technical direction for building a safe, scalable and reliable Agentic AI platform for all of Highspot
  • Communicate complex concepts and the results of analyses in a clear and effective manner to technical and non-technical audiences
  • Collaborate with other team members and cross-functionally to share knowledge and discuss initiatives
What we offer
  • Comprehensive medical, dental, vision, disability, and life benefits
  • Health Savings Account (HSA) with employer contribution
  • 401(k) Matching with immediate vesting on employer match
  • Flexible PTO
  • 8 paid holidays and 5 paid days for Annual Holiday Week
  • Quarterly Recharge Fridays (paid days off for mental health recharge)
  • 18 weeks paid parental leave
  • Access to Coaches and Therapists through Modern Health
  • 2 volunteer days per year
  • Commuting benefits
  • Full-time

Principal Consulting AI / Data Engineer

As a Principal Consulting AI / Data Engineer, you will design, build, and optimi...
Location:
Australia, Sydney

Salary:
Not provided
DyFlex Solutions
Expiration Date:
Until further notice

Requirements
  • Proven expertise in delivering enterprise-grade data engineering and AI solutions in production environments
  • Strong proficiency in Python and SQL, plus experience with Spark, Airflow, dbt, Kafka, or Flink
  • Experience with cloud platforms (AWS, Azure, or GCP) and Databricks
  • Ability to confidently communicate and present at C-suite level, simplifying technical concepts into business impact
  • Track record of engaging senior executives and influencing strategic decisions
  • Strong consulting and stakeholder management skills with client-facing experience
  • Background in MLOps, ML pipelines, or AI solution delivery highly regarded
  • Degree in Computer Science, Engineering, Data Science, Mathematics, or a related field
Job Responsibility
  • Design, build, and maintain scalable data and AI solutions using Databricks, cloud platforms, and modern frameworks
  • Lead solution architecture discussions with clients, ensuring alignment of technical delivery with business strategy
  • Present to and influence executive-level stakeholders, including boards, C-suite, and senior directors
  • Translate highly technical solutions into clear business value propositions for non-technical audiences
  • Mentor and guide teams of engineers and consultants to deliver high-quality solutions
  • Champion best practices across data engineering, MLOps, and cloud delivery
  • Build DyFlex’s reputation as a trusted partner in Data & AI through thought leadership and client advocacy
What we offer
  • Work with SAP’s latest cloud technologies such as S/4HANA, BTP and Joule, plus Databricks, ML/AI tools and cloud platforms
  • A flexible and supportive work environment including work from home
  • Competitive remuneration and benefits including novated lease, birthday leave, salary packaging, wellbeing programme, additional purchased leave, and company-provided laptop
  • Comprehensive training budget and paid certifications (Databricks, SAP, cloud platforms)
  • Structured career advancement pathways with opportunities to lead large-scale client programs
  • Exposure to diverse industries and client environments, including executive-level engagement
  • Full-time

AI Engineer

In this role you will design and build intelligent, autonomous AI systems that e...
Location:
United States, San Diego

Salary:
199,500.00 - 299,300.00 USD / Year
Teradata
Expiration Date:
Until further notice

Requirements
  • Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field
  • 3–5+ years of experience in software architecture, backend development, or AI infrastructure
  • Strong Python skills and familiarity with Java, Go, and C++
  • Deep expertise in agent development, LLM integration, prompt engineering, runtime systems, and AI tooling
  • Experience with MCP servers, vector databases, RAG systems, graph-based memory, and NLP frameworks
  • Ability to design core agentic capabilities such as memory management, context handling, observability, and identity
  • Strong background in distributed systems, backend services, API design, and cloud-native deployments (AWS, Azure, GCP)
  • Proficiency with containerization, CI/CD pipelines, and scalable production infrastructures
  • Excellent communication skills, documentation habits, and ability to mentor or collaborate across teams
  • Passion for building safe, human-aligned, autonomous systems and extending open-source tools to innovate
Job Responsibility
  • Design and build intelligent, autonomous AI systems that enable Teradata to push the boundaries of enterprise-scale agentic technology
  • Lead the development of scalable, secure, cloud-native frameworks that allow AI agents to reason, plan, act, and collaborate in real-world production environments
  • Create the foundational runtime components, automation capabilities, and infrastructure that power next-generation GenAI and Agentic AI solutions
  • Work closely with AI researchers, platform teams, and product leadership to bring advanced agentic capabilities from concept to production across Teradata’s data and AI platform
  • Succeed in this role by enabling enterprise customers to leverage powerful, resilient, and safely governed AI agents that drive measurable business value
What we offer
  • Healthcare, life and disability insurance plans
  • 401(k)-retirement savings plan
  • Time-off programs
  • Flexible work model
  • Well-being focus
  • Diversity, Equity, and Inclusion commitment
  • Full-time

AI Engineer

Designs, develops, troubleshoots and debugs software programs for software enhan...
Location:
India, Bangalore

Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date:
Until further notice

Requirements
  • Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent
  • Typically 2–6 years of experience
  • Using software systems design tools and languages
  • Ability to apply analytical and problem solving skills
  • Designing software systems running on multiple platform types
  • Software systems testing methodology, including execution of test plans, debugging, and testing scripts and tools
  • Strong written and verbal communication skills; mastery of English and the local language
  • Ability to effectively communicate design proposals and negotiate options
Job Responsibility
  • Designs limited enhancements, updates, and programming changes for portions and subsystems of systems software
  • Analyzes design and determines coding, programming, and integration activities required
  • Executes and writes portions of testing plans, protocols, and documentation
  • Participates as a member of project team to develop reliable, cost effective and high quality solutions
  • Collaborates and communicates with internal and outsourced development partners
What we offer
  • Health & Wellbeing benefits
  • Personal & Professional Development programs
  • Unconditional Inclusion environment
  • Full-time

Consulting AI / Data Engineer

As a Consulting AI / Data Engineer, you will design, build, and optimise enterpr...
Location:
Australia, Sydney

Salary:
Not provided
DyFlex Solutions
Expiration Date:
Until further notice

Requirements
  • Hands-on data engineering experience in production environments
  • Strong proficiency in Python and SQL
  • Experience with at least one additional language (e.g. Java, Typescript/Javascript)
  • Experience with modern frameworks such as Apache Spark, Airflow, dbt, Kafka, or Flink
  • Background in building ML pipelines, MLOps practices, or feature stores is highly valued
  • Proven expertise in relational databases, data modelling, and query optimisation
  • Demonstrated ability to solve complex technical problems independently
  • Excellent communication skills with ability to engage clients and stakeholders
  • Degree in Computer Science, Engineering, Data Science, Mathematics, or a related field
Job Responsibility
  • Build and maintain scalable data pipelines for ingesting, transforming, and delivering data
  • Manage and optimise databases, warehouses, and cloud storage solutions
  • Implement data quality frameworks and testing processes to ensure reliable systems
  • Design and deliver cloud-based solutions (AWS, Azure, or GCP)
  • Take technical ownership of project components and lead small development teams
  • Engage directly with clients, translating business requirements into technical solutions
  • Champion best practices including version control, CI/CD, and infrastructure as code
What we offer
  • Work with SAP’s latest cloud technologies such as S/4HANA, BTP and Joule, plus Databricks, ML/AI tools and cloud platforms
  • A flexible and supportive work environment including work from home
  • Competitive remuneration and benefits including novated lease, birthday leave, remote working, salary packaging, wellbeing programme, additional purchased leave, and company-provided laptop
  • Comprehensive training budget and paid certifications (Databricks, SAP, cloud platforms)
  • Structured career advancement pathways with mentoring from senior engineers
  • Exposure to diverse industries and client environments
  • Full-time

Senior Platform Engineer - CI/CD & AI Automation (AI-first)

Groupon is undergoing a critical platform transformation, modernizing its core d...
Location:
Czechia, Prague

Salary:
Not provided
Groupon
Expiration Date:
Until further notice

Requirements
  • 5+ years of dedicated experience in Platform Engineering, DevOps, or Infrastructure roles
  • Deep expertise building, scaling, and migrating CI/CD systems, with strong practical experience in Jenkins and/or GitHub Actions
  • Expertise in scripting and automation (Python, Go, or Bash)
  • Solid understanding of container technologies, Kubernetes, and cloud build systems
  • Proven experience leveraging AI tooling (e.g., Claude Code, code analysis) to meaningfully increase developer output and optimize platform work
  • Excellent communication and ability to drive technical decisions across multiple platform and product teams
Job Responsibility
  • Platform Transformation: Lead the design, planning, and execution of the Jenkins-to-GitHub Actions migration across a large portfolio of microservices
  • Pipeline Engineering: Design and optimize high-performance, secure, and observable CI/CD workflows across GitHub Actions, Jenkins, and Kubernetes environments
  • AI-First Automation: Drive an AI-First workflow by leveraging tools (e.g., Copilot, code generation) to eliminate infrastructure toil, accelerate development, and analyze pipeline failures
  • Core Automation: Develop robust platform automation (e.g., Python, Go, Bash) to improve build efficiency, artifact caching, reliability, and repository hygiene
  • Security & Compliance: Harden CI/CD infrastructure with robust controls for secrets management, RBAC, audit logging, and secure runner design
  • Observability: Implement and enhance CI/CD observability using tools like Prometheus, Grafana, and OpenTelemetry to provide deep insights into performance and reliability
  • Technical Leadership: Mentor engineers and partner across Cloud, Security, and Developer Experience teams to define and evolve our end-to-end delivery platform architecture