
Senior Data Analytics & AI Engineer

ResMed (resmed.com)

Location:
Ireland, Dublin

Contract Type:
Not provided

Salary:
Not provided

Job Description:

Global Technology Solutions (GTS) at ResMed is a division dedicated to creating innovative, scalable, and secure platforms and services for patients, providers, and people across ResMed. The primary goal of GTS is to accelerate well-being and growth by transforming the core, enabling patient, people, and partner outcomes, and building future-ready operations. We are looking for a high-impact, hands-on Senior Data Analytics & AI Engineer to shape the next generation of ResMed’s data ecosystem. This is a senior-level technical role for someone who can collaborate with peers to solve complex data pipeline, data modeling, and classical and generative AI challenges.

Job Responsibility:

  • Design data and AI products and applications consisting of system integrations, data model transformations and user interfaces while aligning with engineering best practices across teams
  • Build and evolve data pipelines using Python, Spark, APIs, connector frameworks, and other ingestion technologies while considering automation, observability, and resilient design patterns
  • Design high-quality Snowflake/dbt models, implement governance and testing standards, and mentor less experienced engineers in scalable modeling and system design
  • Collaborate cross-functionally with product, engineering, and data science to shape impactful, scalable solutions
  • Drive future advanced analytics and ML capabilities by defining feature pipelines, supporting classical ML models, and enabling new AI-driven workloads including LLM-based and hybrid ML/AI architectures
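The pipeline responsibilities above (automation, observability, resilient design patterns) can be sketched in miniature. Everything below is illustrative rather than ResMed's actual stack: the connector and its failure mode are hypothetical stand-ins for any API or JDBC source.

```python
import logging
import time
from typing import Callable, Iterable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingest")

def with_retries(fetch: Callable[[], Iterable[dict]], attempts: int = 3,
                 backoff_s: float = 0.1) -> list[dict]:
    """Call a source `fetch` function, retrying with linear backoff.

    `fetch` stands in for any connector call (REST API page, JDBC read,
    file listing); it is not a real library API.
    """
    for attempt in range(1, attempts + 1):
        try:
            rows = list(fetch())
            # Observability: log row counts so pipeline health is visible.
            log.info("fetched %d rows on attempt %d", len(rows), attempt)
            return rows
        except Exception as exc:  # in production, catch narrower errors
            log.warning("attempt %d failed: %s", attempt, exc)
            if attempt == attempts:
                raise
            time.sleep(backoff_s * attempt)
    return []

# Hypothetical flaky source: fails once, then succeeds.
calls = {"n": 0}
def flaky_source():
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("transient upstream error")
    return [{"id": 1}, {"id": 2}]

rows = with_retries(flaky_source)
```

The retry wrapper is deliberately generic: the same pattern applies whether the fetch is a Spark read, an API page, or a connector-framework call.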

Requirements:

  • Extensive hands-on experience as a senior IC in data engineering, analytics engineering, or data architecture (typically 6+ years)
  • Expert-level SQL and data modeling skills on large-scale platforms (Snowflake preferred)
  • Strong experience building production data pipelines and models using Python, cloud services, and modern data stack tools
  • Proficiency with dbt or similar transformation frameworks
  • Demonstrated ability to set technical direction, define architectural patterns, and establish engineering best practices
  • Solid experience with Git/GitHub workflows, including branching strategies and collaborative development
  • Experience building and maintaining CI/CD pipelines in GitHub Actions, including automated testing and secure deployments
  • Ability to operate across both analytics engineering and data engineering responsibilities
  • Experience with cloud platforms such as AWS or GCP
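As a rough illustration of the GitHub Actions expectations above (automated testing and deployments for a data repo), a minimal workflow might look like the following. Job names, file paths, the `ci` dbt target, and the secret name are placeholders, not details from the posting.

```yaml
# .github/workflows/ci.yml — illustrative sketch only
name: data-ci
on:
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest tests/          # unit tests for pipeline code
      - run: dbt build --target ci  # compile, run, and test dbt models
        env:
          SNOWFLAKE_PASSWORD: ${{ secrets.SNOWFLAKE_PASSWORD }}
```

Keeping model tests (`dbt build`) in the same pull-request gate as unit tests is one common way to enforce the governance and testing standards the role describes.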

Nice to have:

  • Bachelor’s degree in a STEM field (or equivalent experience), or a Master’s degree in a computer science or engineering field
  • Experience with Dagster, Airflow, or similar orchestration tools
  • Familiarity with streaming or event-based processing (Kafka, Flink, Kinesis)
  • Familiarity with IaC such as Terraform
  • Experience with classical machine learning, advanced analytics, and generative AI development
  • Experience supporting ML/AI workflows or integrating ML into data products
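Orchestrators such as Dagster and Airflow are, at their core, dependency-graph schedulers. A minimal sketch of that idea using only the Python standard library; the task names are invented for illustration:

```python
from graphlib import TopologicalSorter

# Hypothetical task graph: each task maps to the set of tasks it depends on.
# Real orchestrators add scheduling, retries, and observability on top of
# exactly this kind of dependency resolution.
tasks = {
    "extract_orders": set(),
    "extract_devices": set(),
    "stg_orders": {"extract_orders"},
    "stg_devices": {"extract_devices"},
    "mart_usage": {"stg_orders", "stg_devices"},
}

def run(name: str) -> str:
    # Placeholder for the real work (API call, dbt run, Spark job).
    return f"ran {name}"

# static_order() yields tasks so every dependency runs before its dependents.
order = list(TopologicalSorter(tasks).static_order())
results = [run(t) for t in order]
```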

What we offer:
  • Bonus plan
  • Working from home flexibility
  • Referral bonus
  • Preferred shareholding programme
  • Competitive benefits (Pension, Long-Term Illness Protection, Health Insurance...)
  • Internal career opportunity

Additional Information:

Job Posted:
February 18, 2026

Employment Type:
Fulltime

Work Type:
Hybrid work

Similar Jobs for Senior Data Analytics & AI Engineer

Senior Data Engineer

We are looking for a Senior Data Engineer (SDE 3) to build scalable, high-perfor...
Location:
India, Mumbai
Salary:
Not provided
Cogoport (https://cogoport.com/)
Expiration Date:
Until further notice
Requirements:
  • 6+ years of experience in data engineering, working with large-scale distributed systems
  • Strong proficiency in Python, Java, or Scala for data processing
  • Expertise in SQL and NoSQL databases (PostgreSQL, Cassandra, Snowflake, Apache Hive, Redshift)
  • Experience with big data processing frameworks (Apache Spark, Flink, Hadoop)
  • Hands-on experience with real-time data streaming (Kafka, Kinesis, Pulsar) for logistics use cases
  • Deep knowledge of AWS/GCP/Azure cloud data services like S3, Glue, EMR, Databricks, or equivalent
  • Familiarity with Airflow, Prefect, or Dagster for workflow orchestration
  • Strong understanding of logistics and supply chain data structures, including freight pricing models, carrier APIs, and shipment tracking systems
Job Responsibility:
  • Design and develop real-time and batch ETL/ELT pipelines for structured and unstructured logistics data (freight rates, shipping schedules, tracking events, etc.)
  • Optimize data ingestion, transformation, and storage for high availability and cost efficiency
  • Ensure seamless integration of data from global trade platforms, carrier APIs, and operational databases
  • Architect scalable, cloud-native data platforms using AWS (S3, Glue, EMR, Redshift), GCP (BigQuery, Dataflow), or Azure
  • Build and manage data lakes, warehouses, and real-time processing frameworks to support analytics, machine learning, and reporting needs
  • Optimize distributed databases (Snowflake, Redshift, BigQuery, Apache Hive) for logistics analytics
  • Develop streaming data solutions using Apache Kafka, Pulsar, or Kinesis to power real-time shipment tracking, anomaly detection, and dynamic pricing
  • Enable AI-driven freight rate predictions, demand forecasting, and shipment delay analytics
  • Improve customer experience by providing real-time visibility into supply chain disruptions and delivery timelines
  • Ensure high availability, fault tolerance, and data security compliance (GDPR, CCPA) across the platform
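The real-time tracking responsibilities above reduce to windowed aggregation over an event stream. A toy sketch, with hypothetical event shapes and an in-memory list standing in for a Kafka/Kinesis topic:

```python
from collections import defaultdict

# Hypothetical tracking events; in production these would arrive from a
# Kafka/Pulsar/Kinesis topic rather than an in-memory list.
events = [
    {"shipment_id": "S1", "ts": 3,  "status": "departed"},
    {"shipment_id": "S2", "ts": 7,  "status": "departed"},
    {"shipment_id": "S1", "ts": 64, "status": "arrived"},
    {"shipment_id": "S3", "ts": 65, "status": "delayed"},
]

def tumbling_window_counts(events, window_s=60):
    """Count events per (window, status) — the core of the continuous
    aggregation a Flink/Spark streaming job would run."""
    counts = defaultdict(int)
    for e in events:
        # Bucket each event into the tumbling window containing its timestamp.
        window_start = (e["ts"] // window_s) * window_s
        counts[(window_start, e["status"])] += 1
    return dict(counts)

counts = tumbling_window_counts(events)
```

Anomaly detection or delay analytics would then consume these per-window counts rather than raw events.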
What we offer:
  • Work with some of the brightest minds in the industry
  • Entrepreneurial culture fostering innovation, impact, and career growth
  • Opportunity to work on real-world logistics challenges
  • Collaborate with cross-functional teams across data science, engineering, and product
  • Be part of a fast-growing company scaling next-gen logistics platforms using advanced data engineering and AI
  • Fulltime

Senior AI Data Engineer

We are looking for a Senior AI Data Engineer to join an exciting project for our...
Location:
Poland, Warsaw
Salary:
Not provided
Inetum (https://www.inetum.com)
Expiration Date:
Until further notice
Requirements:
  • Degree in Computer Science, Data Science, Artificial Intelligence, or a related field
  • Several years of experience in AI and Machine Learning development, preferably in Customer Care solutions
  • Strong proficiency in Python and NLP frameworks
  • Hands-on experience with Azure AI services (e.g., Azure Machine Learning, Cognitive Services, Bot Services)
  • Solid understanding of cloud architectures and microservices on Azure
  • Experience with CI/CD pipelines and MLOps
  • Excellent leadership and communication skills
  • Analytical mindset with strong problem-solving abilities
  • Polish and English at a minimum B2 level.
Job Responsibility:
  • Lead the development and implementation of AI-powered features for a Customer Care platform
  • Design and deploy Machine Learning and NLP models to automate customer inquiries
  • Collaborate with DevOps and cloud architects to ensure a high-performance, scalable, and secure Azure-based architecture
  • Optimize AI models to enhance customer experience
  • Integrate Conversational AI, chatbots, and language models into the platform
  • Evaluate emerging technologies and best practices in Artificial Intelligence
  • Mentor and guide a team of AI/ML developers.
What we offer:
  • Flexible working hours
  • Hybrid work model, allowing employees to divide their time between home and modern offices in key Polish cities
  • A cafeteria system that allows employees to personalize benefits by choosing from a variety of options
  • Generous referral bonuses, offering up to PLN6,000 for referring specialists
  • Additional revenue sharing opportunities for initiating partnerships with new clients
  • Ongoing guidance from a dedicated Team Manager for each employee
  • Tailored technical mentoring from an assigned technical leader, depending on individual expertise and project needs
  • Dedicated team-building budget for online and on-site team events
  • Opportunities to participate in charitable initiatives and local sports programs
  • A supportive and inclusive work culture with an emphasis on diversity and mutual respect.
  • Fulltime

Senior Data Engineer

Senior Data Engineer role at UpGuard supporting analytics teams to extract insig...
Location:
Australia, Sydney; Melbourne; Brisbane; Hobart
Salary:
Not provided
UpGuard (https://www.upguard.com)
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience with data sourcing, storage and modelling to effectively deliver business value right through to BI platform
  • AI first mindset and experience scaling an Analytics and BI function at another SaaS business
  • Experience with Looker (Explores, Looks, Dashboards, Developer interface, dimensions and measures, models, raw SQL queries)
  • Experience with CloudSQL (PostgreSQL) and BigQuery (complex queries, indices, materialised views, clustering, partitioning)
  • Experience with Containers, Docker and Kubernetes (GKE)
  • Familiarity with n8n for automation
  • Experience with programming languages (Go for ETL workers)
  • Comfortable interfacing with various APIs (REST+JSON or MCP Server)
  • Experience with version control via GitHub and GitHub Flow
  • Security-first mindset
Job Responsibility:
  • Design, build, and maintain reliable data pipelines to consolidate information from various internal systems and third-party sources
  • Develop and manage a comprehensive semantic layer using technologies like LookML, dbt, or SQLMesh
  • Implement and enforce data quality checks, validation rules, and governance processes
  • Ensure AI agents have access to necessary structured and unstructured data
  • Create clear, self-maintaining documentation for data models, pipelines, and semantic layer
What we offer:
  • Great Place to Work certified company
  • Equal Employment Opportunity and Affirmative Action employer
  • Fulltime

Senior Data Engineer

Senior Data Engineer to design, develop, and optimize data platforms, pipelines,...
Location:
United States, Chicago
Salary:
160555.00 - 176610.00 USD / Year
Adtalem Global Education (adtalem.com)
Expiration Date:
Until further notice
Requirements:
  • Master's degree in Engineering Management, Software Engineering, Computer Science, or a related technical field
  • 3 years of experience in data engineering
  • Experience building data platforms and pipelines
  • Experience with AWS, GCP or Azure
  • Experience with SQL and Python for data manipulation, transformation, and automation
  • Experience with Apache Airflow for workflow orchestration
  • Experience with data governance, data quality, data lineage and metadata management
  • Experience with real-time data ingestion tools including Pub/Sub, Kafka, or Spark
  • Experience with CI/CD pipelines for continuous deployment and delivery of data products
  • Experience maintaining technical records and system designs
Job Responsibility:
  • Design, develop, and optimize data platforms, pipelines, and governance frameworks
  • Enhance business intelligence, analytics, and AI capabilities
  • Ensure accurate data flows and push data-driven decision-making across teams
  • Write product-grade performant code for data extraction, transformations, and loading (ETL) using SQL/Python
  • Manage workflows and scheduling using Apache Airflow and build custom operators for data ETL
  • Build, deploy and maintain both inbound and outbound data pipelines to integrate diverse data sources
  • Develop and manage CI/CD pipelines to support continuous deployment of data products
  • Utilize Google Cloud Platform (GCP) tools, including BigQuery, Composer, GCS, DataStream, and Dataflow, for building scalable data systems
  • Implement real-time data ingestion solutions using GCP Pub/Sub, Kafka, or Spark
  • Develop and expose REST APIs for sharing data across teams
What we offer:
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
  • Annual incentive program
  • Fulltime

Senior Data Engineer

We're seeking an experienced Senior Data Engineer to help shape the future of he...
Location:
Germany, Berlin
Salary:
Not provided
Audibene GmbH (audibene.de)
Expiration Date:
Until further notice
Requirements:
  • 5+ years of hands-on experience with complex ETL processes, data modeling, and large-scale data systems
  • Production experience with modern cloud data warehouses (Snowflake, BigQuery, Redshift) on AWS, GCP, or Azure
  • Proficiency in building and optimizing data transformations and pipelines in Python
  • Experience with columnar storage, MPP databases, and distributed data processing architectures
  • Ability to translate complex technical concepts for diverse audiences, from engineers to business stakeholders
  • Experience with semantic layers, data catalogs, or metadata management systems
  • Familiarity with modern analytical databases like Snowflake, BigQuery, ClickHouse, DuckDB, or similar systems
  • Experience with streaming technologies like Kafka, Pulsar, Redpanda, or Kinesis
Job Responsibility:
  • Design and build robust, high performance data pipelines using our modern stack (Airflow, Snowflake, Pulsar, Kubernetes) that feed directly into our semantic layer and data catalog
  • Create data products optimized for consumption by AI agents and LLMs where data quality, context, and semantic richness are crucial
  • Structure and transform data to be inherently machine readable, with rich metadata and clear lineage that powers intelligent applications
  • Take responsibility from raw data ingestion through to semantic modeling, ensuring data is not just accurate but contextually rich and agent ready
  • Champion best practices in building LLM consumable data products, optimize for both human and machine consumers, and help evolve our dbt transformation layer
  • Build data products for AI/LLM consumption, not just analytics dashboards
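One way to picture the "agent-ready" data products with rich metadata and clear lineage described above is as a simple contract object; the field names and dataset names below are invented for illustration, not Audibene's actual schema:

```python
from dataclasses import dataclass, field, asdict

# Sketch of a machine-readable data product contract: the payload is wrapped
# with the metadata and lineage an LLM or agent needs to use it safely.
@dataclass
class DataProduct:
    name: str
    description: str   # natural-language context for LLM consumers
    columns: dict      # column name -> human-readable meaning
    lineage: list = field(default_factory=list)  # upstream sources
    owner: str = "data-team"

product = DataProduct(
    name="fct_hearing_test_results",
    description="One row per completed online hearing test.",
    columns={"test_id": "unique test identifier",
             "score_db": "hearing threshold in decibels"},
    lineage=["raw.web_events", "stg.hearing_tests"],
)
record = asdict(product)  # serializable form for a catalog or agent
```

In practice this metadata would live in a data catalog or dbt/semantic-layer docs rather than application code; the point is that description, column semantics, and lineage travel with the data.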
What we offer:
  • Work 4 days a week from our office (Berlin/Mainz) with a passionate team, and 1 day a week from home
  • Regularly join on- and offline team events, company off-sites, and the annual audibene Wandertag
  • Cost of the Deutschland-Ticket covered
  • Access to over 50,000 gyms and wellness facilities through Urban Sports Club
  • Support for personal development with a wide range of programs, trainings, and coaching opportunities
  • Dog-friendly office
  • Fulltime

Senior Analytics Engineer, Business Intelligence

As CLEAR continues to scale, we’re deepening our investment in the data ecosyste...
Location:
United States, New York
Salary:
165000.00 - 200000.00 USD / Year
Clear (clearme.com)
Expiration Date:
Until further notice
Requirements:
  • 6+ years of experience working in analytics engineering, BI engineering, or data engineering roles within a modern cloud data warehouse environment
  • Expert-level SQL: you can structure models for clarity, reuse, and performance at scale
  • Proven experience working with modern data stack technologies, including Snowflake, dbt, Dagster, and Looker
  • Deep experience with BI tools including semantic layer design, metric standardization, and the enablement of AI-driven analytics
  • Strong understanding of dimensional modeling and data warehousing principles
  • Demonstrated ability to balance technical rigor with business impact, designing models that are as intuitive for stakeholders as they are efficient under the hood
  • Track record of driving BI and self-service, creating environments where data users can confidently explore and analyze on their own
  • Strong communication and influence skills, able to partner effectively across technical and business domains to align on data strategy and definitions
  • Proactive ownership and curiosity, always seeking opportunities to simplify, standardize, and scale how data is modeled and delivered
Job Responsibility:
  • Own and evolve the presentation and semantic layers: design, build, and optimize data models that serve as the foundation for Analytics at CLEAR
  • Partner with data engineering teams to define the transformation logic (via dbt or similar frameworks) that connects raw data to consumable business views, optimize pipelines to improve query performance, and ensure models align with best practices
  • Own a roadmap for enabling AI-powered analytics, driving improved data documentation and a path toward exposing our semantic layer to AI models
  • Improve self-service rates by building intuitive data structures, reusable views, and clear metric definitions that empower teams to answer their own questions
  • Mentor analysts and analytics engineers, guiding them in data modeling best practices, BI design, and stakeholder engagement
What we offer:
  • Comprehensive healthcare plans
  • Family-building benefits (fertility and adoption/surrogacy support)
  • Flexible time off
  • Annual wellness stipend
  • Free OneMedical memberships for you and your dependents
  • A CLEAR Plus membership
  • A 401(k) retirement plan with employer match
  • Catered lunches
  • Fully stocked kitchens
  • Stipends and reimbursement programs for well-being and learning & development
  • Fulltime

Senior Data Engineer

The Data Engineer is responsible for designing, building, and maintaining robust...
Location:
Germany, Berlin
Salary:
Not provided
ib vogt GmbH (ibvogt.com)
Expiration Date:
Until further notice
Requirements:
  • Degree in Computer Science, Data Engineering, or related field
  • 5+ years of experience in data engineering or similar roles
  • Experience in renewable energy, engineering, or asset-heavy industries is a plus
  • Strong experience with modern data stack (e.g., PowerPlatform, Azure Data Factory, Databricks, Airflow, dbt, Synapse, Snowflake, BigQuery, etc.)
  • Proficiency in Python and SQL for data transformation and automation
  • Experience with APIs, message queues (Kafka, Event Hub), data streaming and knowledge of data lakehouse and data warehouse architectures
  • Familiarity with CI/CD pipelines, DevOps practices, and containerization (Docker, Kubernetes)
  • Understanding of cloud environments (preferably Microsoft Azure, PowerPlatform)
  • Strong analytical mindset and problem-solving attitude paired with a structured, detail-oriented, and documentation-driven work style
  • Team-oriented approach and excellent communication skills in English
Job Responsibility:
  • Design, implement, and maintain efficient ETL/ELT data pipelines connecting internal systems (M365, Sharepoint, ERP, CRM, SCADA, O&M, etc.) and external data sources
  • Integrate structured and unstructured data from multiple sources into the central data lake / warehouse / Dataverse
  • Build data models and transformation workflows to support analytics, reporting, and AI/ML use cases
  • Implement data quality checks, validation rules, and metadata management according to the company’s data governance framework
  • Automate workflows, optimize performance, and ensure scalability of data pipelines and processing infrastructure
  • Work closely with Data Scientists, Software Engineers, and Domain Experts to deliver reliable datasets for Digital Twin and AI applications
  • Maintain clear documentation of data flows, schemas, and operational processes
What we offer:
  • Competitive remuneration and motivating benefits
  • Opportunity to shape the data foundation of ib vogt’s digital transformation journey
  • Work on cutting-edge data platforms supporting real-world renewable energy assets
  • A truly international working environment with colleagues from all over the world
  • An open-minded, collaborative, dynamic, and highly motivated team
  • Fulltime

Senior Data Engineer

Kiddom is redefining how technology powers learning. We combine world-class curr...
Location:
United States, San Francisco
Salary:
150000.00 - 220000.00 USD / Year
Kiddom (kiddom.co)
Expiration Date:
Until further notice
Requirements:
  • 3+ years of experience as a data engineer
  • 8+ years of software engineering experience (including data engineering)
  • Proven experience as a Data Engineer or in a similar role with strong data modeling, architecture, and design skills
  • Strong understanding of data engineering principles including infrastructure deployment, governance and security
  • Experience with MySQL, Snowflake, Cassandra, and familiarity with graph databases (Neptune or Neo4j)
  • Proficiency in SQL and Python (Golang a plus)
  • Proficient with AWS offerings such as AWS Glue, EKS, ECS and Lambda
  • Excellent communication skills, with the ability to articulate complex technical concepts to non-technical stakeholders
  • Strong understanding of PII compliance and best practices in data handling and storage
  • Strong problem-solving skills, with a knack for optimizing performance and ensuring data integrity and accuracy
Job Responsibility:
  • Design, implement, and maintain the organization’s data infrastructure, ensuring it meets business requirements and technical standards
  • Deploy data pipelines to AWS infrastructure such as EKS, ECS, Lambdas and AWS Glue
  • Develop and deploy data pipelines to clean and transform data to support other engineering teams, analytics and AI applications
  • Extract and deploy reusable features to Feature stores such as Feast or equivalent
  • Evaluate and select appropriate database technologies, tools, and platforms, both on-premises and in the cloud
  • Monitor data systems and troubleshoot issues related to data quality, performance, and integrity
  • Work closely with other departments, including Product, Engineering, and Analytics, to understand and cater to their data needs
  • Define and document data workflows, pipelines, and transformation processes for clear understanding and knowledge sharing
What we offer:
  • Meaningful equity
  • Health insurance benefits: medical (various PPO/HMO/HSA plans), dental, vision, disability and life insurance
  • One Medical membership (in participating locations)
  • Flexible vacation time policy (subject to internal approval). Average use 4 weeks off per year
  • 10 paid sick days per year (pro rated depending on start date)
  • Paid holidays
  • Paid bereavement leave
  • Paid family leave after birth/adoption. Minimum of 16 paid weeks for birthing parents, 10 weeks for caretaker parents. Meant to supplement benefits offered by State
  • Commuter and FSA plans
  • Fulltime