
Airflow Reliability Engineer


Astronomer


Location:
India, Hyderabad


Contract Type:
Not provided


Salary:

Not provided

Job Description:

As an Airflow Reliability Engineer on the Customer Reliability Engineering (CRE) team at Astronomer, you will have the opportunity to become an Apache Airflow expert, learning directly from leaders of the Airflow project. You’ll provide Apache Airflow expertise directly to customers to help them make the best possible use of our managed Airflow service.

CRE is Astronomer’s support team. Because our customers are sophisticated organizations that need and expect a high level of expertise to keep mission-critical uses of Apache Airflow working consistently, we look a little different from most support teams. Nearly every ticket you work requires an intersection of strong technical knowledge and customer empathy: understanding what the customer needs and how to get them there. Every day brings a new challenge and something new to learn.

This role is based in Hyderabad and requires working in shifts, typically early morning or evening IST; the exact schedule will be set during hiring.

Job Responsibility:

  • Learn and build expertise across several software engineering disciplines, including Airflow and data engineering, Kubernetes, and cloud engineering
  • Gain exposure to the big picture: learn about product, engineering, customer relationship management, and more
  • Solve challenging Airflow problems for our customers
  • Spend up to 20% of your time on side projects that contribute to Astronomer’s overall success
  • Work on a modern, sophisticated, cloud-native product
  • Work directly with our customers’ data engineers, system admins, DevOps teams, and management
  • Provide feedback from your experience that can shape the direction of the Airflow project
  • Own the customer experience, working directly with customers to prioritize and solve issues, meet SLAs, and provide “white glove” guidance
  • Participate remotely within a fully distributed team
  • Help maintain 24x7 coverage through a specified 6-hour pager period during your work day
  • Participate in paid on-call rotation for weekend coverage

Requirements:

  • 5 years of professional experience (any industry)
  • 3 years of experience with Python
  • 1+ year with Apache Airflow
  • Experience with Kubernetes, Docker, and containers
  • Customer Support experience
  • Experience working with a distributed system with any major cloud provider (AWS, GCP, Azure)
  • Problem-solving and troubleshooting abilities
  • Ability to work well with autonomy and independence
  • Strong written and verbal communication for connecting with our customers over our ticketing system and through Zoom

Nice to have:

  • Familiarity with SQL and PostgreSQL
  • Familiarity with Databricks, Snowflake, Redshift, dbt, or other similar data engineering tools

Additional Information:

Job Posted:
February 01, 2026

Employment Type:
Full-time
Work Type:
Hybrid work

Similar Jobs for Airflow Reliability Engineer

Site Reliability Engineer SRE – ML platform

Location:
United States, Sunnyvale
Salary:
Not provided
Thirdeye Data
Expiration Date:
Until further notice
Requirements:
  • 6+ years of experience in MLOps with strong knowledge of Kubernetes, Python, MongoDB, and AWS
  • Good understanding of Apache SOLR
  • Proficient with Linux administration
  • Knowledge of ML models and LLM
  • Ability to understand tools used by data scientists and experience with software development and test automation
  • Ability to design and implement cloud solutions and ability to build MLOps pipelines on cloud solutions (AWS)
  • Experience working with cloud computing and database systems
  • Experience building custom integrations between cloud-based systems using APIs
  • Experience developing and maintaining ML systems built with open-source tools
  • Experience with MLOps frameworks like Kubeflow, MLflow, DataRobot, Airflow, etc., and experience with Docker and Kubernetes
Job Responsibility:
  • Continuous Deployment using GitHub Actions, Flux, Kustomize
  • Design and implement cloud solutions, build MLOps on AWS cloud
  • Data science model containerization, deployment using Docker, VLLM, Kubernetes
  • Communicate with a team of data scientists, data engineers, and architects, and document the processes
  • Develop and deploy scalable tools and services for our clients to handle machine learning training and inference

Managed Airflow Platform (MAP) Support Engineer

Location:
Not provided
Salary:
Not provided
Kloud9
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science or a related field
  • 3+ years of experience in large-scale production-grade platform support, including participation in on-call rotations
  • 3+ years of hands-on experience with cloud platforms like AWS, Azure, or GCP
  • 2+ years of experience developing and supporting data pipelines using Apache Airflow including DAG lifecycle management and scheduling best practices
  • Troubleshooting task failures, scheduler issues, performance bottlenecks, and error handling
  • Strong programming proficiency in Python, especially for developing and troubleshooting RESTful APIs
  • 1+ years of experience in observability using the ELK stack (Elasticsearch, Logstash, Kibana) or Grafana Stack
  • 2+ years of experience with DevOps and Infrastructure-as-Code tools such as GitHub, Jenkins, Docker, and Terraform
  • 2+ years of hands-on experience with Kubernetes, including managing and debugging cluster resources and workloads within Amazon EKS
  • Exposure to Agile and test-driven development a plus
Job Responsibility:
  • Evangelize and cultivate adoption of Global Platforms, open-source software and agile principles within the organization
  • Ensure solutions are designed and developed using a scalable, highly resilient cloud native architecture
  • Ensure the operational stability, performance, and scalability of cloud-native platforms through proactive monitoring and timely issue resolution
  • Diagnose infrastructure and system issues across cloud environments and Kubernetes clusters, and lead efforts in troubleshooting and remediation
  • Collaborate with engineering and infrastructure teams to manage configurations, resource tuning, and platform upgrades without disrupting business operations
  • Maintain clear, accurate runbooks, support documentation, and platform knowledge bases to enable faster onboarding and incident response
  • Support observability initiatives by improving logging, metrics, dashboards, and alerting frameworks
  • Advocate for operational excellence and drive continuous improvement in system reliability, cost-efficiency, and maintainability
  • Work with product management to support product / service scoping activities
  • Work with leadership to define delivery schedules of key features through an agile framework
What we offer:
  • Kloud9 provides a robust compensation package and a forward-looking opportunity for growth in emerging fields

Staff Data Engineer

We are seeking a Staff Data Engineer to architect and lead our entire data infra...
Location:
United States, New York; San Francisco
Salary:
170000.00 - 210000.00 USD / Year
Taskrabbit
Expiration Date:
Until further notice
Requirements:
  • 7-10 years of experience in Data Engineering
  • Expertise in building and maintaining ELT data pipelines using modern tools such as dbt, Airflow, and Fivetran
  • Deep experience with cloud data warehouses such as Snowflake, BigQuery, or Redshift
  • Strong data modeling skills (e.g., dimensional modeling, star/snowflake schemas) to support both operational and analytical workloads
  • Proficient in SQL and at least one general-purpose programming language (e.g., Python, Java, or Scala)
  • Experience with streaming data platforms (e.g., Kafka, Kinesis, or equivalent) and real-time data processing patterns
  • Familiarity with infrastructure-as-code tools like Terraform and DevOps practices for managing data platform components
  • Hands-on experience with BI and semantic layer tools such as Looker, Mode, Tableau, or equivalent
Job Responsibility:
  • Design, build, and maintain scalable, reliable data pipelines and infrastructure to support analytics, operations, and product use cases
  • Develop and evolve dbt models, semantic layers, and data marts that enable trustworthy, self-serve analytics across the business
  • Collaborate with non-technical stakeholders to deeply understand their business needs and translate them into well-defined metrics and analytical tools
  • Lead architectural decisions for our data platform, ensuring it is performant, maintainable, and aligned with future growth
  • Build and maintain data orchestration and transformation workflows using tools like Airflow, dbt, and Snowflake (or equivalent)
  • Champion data quality, documentation, and observability to ensure high trust in data across the organization
  • Mentor and guide other engineers and analysts, promoting best practices in both data engineering and analytics engineering disciplines
What we offer:
  • Employer-paid health insurance
  • 401k match with immediate vesting
  • Generous and flexible time off with 2 company-wide closure weeks
  • Taskrabbit product stipends
  • Wellness + productivity + education stipends
  • IKEA discounts
  • Reproductive health support

Senior Data Engineer - Platform Enablement

SoundCloud empowers artists and fans to connect and share through music. Founded...
Location:
United States, New York; Atlanta; East Coast
Salary:
160000.00 - 210000.00 USD / Year
SoundCloud
Expiration Date:
Until further notice
Requirements:
  • 7+ years of experience in data engineering, analytics engineering, or similar roles
  • Expert-level SQL skills, including performance tuning, advanced joins, CTEs, window functions, and analytical query design
  • Proven experience with Apache Airflow (designing DAGs, scheduling, task dependencies, monitoring, Python)
  • Familiarity with event-driven architectures and messaging systems (Pub/Sub, Kafka, etc.)
  • Knowledge of data governance, schema management, and versioning best practices
  • Understanding of observability practices: logging, metrics, tracing, and incident response
  • Experience deploying and managing services in cloud environments, preferably GCP, AWS
  • Excellent communication skills and a collaborative mindset
Job Responsibility:
  • Develop and optimize SQL data models and queries for analytics, reporting, and operational use cases
  • Design and maintain ETL/ELT workflows using Apache Airflow, ensuring reliability, scalability, and data integrity
  • Collaborate with analysts and business teams to translate data needs into efficient, automated data pipelines and datasets
  • Own and enhance data quality and validation processes, ensuring accuracy and completeness of business-critical metrics
  • Build and maintain reporting layers, supporting dashboards and analytics tools (e.g. Looker, or similar)
  • Troubleshoot and tune SQL performance, optimizing queries and data structures for speed and scalability
  • Contribute to data architecture decisions, including schema design, partitioning strategies, and workflow scheduling
  • Mentor junior engineers, advocate for best practices and promote a positive team culture
What we offer:
  • Comprehensive health benefits including medical, dental, and vision plans, as well as mental health resources
  • Robust 401k program
  • Employee Equity Plan
  • Generous professional development allowance
  • Creativity and Wellness benefit
  • Flexible vacation and public holiday policy where you can take up to 35 days of PTO annually
  • 16 paid weeks for all parents (birthing and non-birthing), regardless of gender, to welcome newborns, adopted and foster children
  • Various snacks, goodies, and 2 free lunches weekly when at the office

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join our Data Engineering tea...
Location:
India, Bengaluru
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience
  • 7+ years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms
  • Experience with Databricks, Spark, Hive, Airflow, and other streaming technologies to process large volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues, anomalies
Job Responsibility:
  • Help our stakeholder teams ingest data faster into our data lake
  • Find ways to make our data pipelines more efficient
  • Come up with ideas to help instigate self-serve data engineering within the company
  • Apply your strong technical experience building highly reliable services on managing and orchestrating a multi-petabyte scale data lake
  • Take vague requirements and transform them into solid solutions
  • Solve challenging problems, where creativity is as crucial as your ability to write code and test cases
What we offer:
  • Health and wellbeing resources
  • Paid volunteer days

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join our Go-To Market Data En...
Location:
India, Bengaluru
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience
  • 5+ years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms, micro services, and REST APIs
  • Experience with Spark, Hive, Airflow, and other streaming technologies to process large volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues, anomalies
Job Responsibility:
  • Help our stakeholder teams ingest data faster into our data lake
  • Make our data pipelines more efficient
  • Build micro-services, architect, design, and enable self-serve capabilities at scale
  • Work on an AWS-based data lake backed by open source projects such as Spark and Airflow
  • Identify ways to make our platform better and improve user experience
  • Apply strong technical experience building highly reliable services on managing and orchestrating a multi-petabyte scale data lake
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources

Data Engineer

Data Engineer role at Airtable, the no-code app platform. The position involves ...
Location:
United States, San Francisco
Salary:
179500.00 - 221500.00 USD / Year
Airtable
Expiration Date:
Until further notice
Requirements:
  • 5+ years of professional experience designing, creating and maintaining scalable data pipelines, preferably in Airflow
  • Proficiency in at least one programming language (preferably Python)
  • Highly effective with SQL, with an understanding of how to write and tune complex queries
  • Experience wrangling data and understanding complex data systems
  • Passionate and thoughtful about building systems that enhance human understanding
  • Communicate with clarity and precision in written form
  • Experience communicating with graphs and plots
Job Responsibility:
  • Work between engineering organization and stakeholders to understand data needs and produce pipelines, data marts, and other data solutions
  • Design and update foundational business tables to simplify analysis across the entire company
  • Improve the performance and reliability of the data warehouse
  • Build and enforce a pattern language across the data stack to ensure data pipelines and tables are consistent, accurate, and well-understood
What we offer:
  • Benefits
  • Restricted stock units
  • Incentive compensation

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join our Go-To Market Data En...
Location:
India, Bengaluru
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience
  • 5+ years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms, micro services, and REST APIs
  • Experience with Spark, Hive, Airflow, and other streaming technologies to process large volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues, anomalies
Job Responsibility:
  • Help our stakeholder teams ingest data faster into our data lake
  • Make our data pipelines more efficient
  • Build micro-services
  • Architect, design, and enable self-serve capabilities at scale
  • Apply your strong technical experience building highly reliable services
  • Manage and orchestrate a multi-petabyte scale data lake
  • Transform vague requirements into solid solutions
  • Solve challenging problems creatively
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources