Managed Airflow Platform (MAP) Support Engineer

Kloud9

Location: Not provided

Contract Type: Not provided

Salary: Not provided

Job Responsibility:

  • Evangelize and cultivate adoption of Global Platforms, open-source software and agile principles within the organization
  • Ensure solutions are designed and developed using a scalable, highly resilient cloud native architecture
  • Ensure the operational stability, performance, and scalability of cloud-native platforms through proactive monitoring and timely issue resolution
  • Diagnose infrastructure and system issues across cloud environments and Kubernetes clusters, and lead efforts in troubleshooting and remediation
  • Collaborate with engineering and infrastructure teams to manage configurations, resource tuning, and platform upgrades without disrupting business operations
  • Maintain clear, accurate runbooks, support documentation, and platform knowledge bases to enable faster onboarding and incident response
  • Support observability initiatives by improving logging, metrics, dashboards, and alerting frameworks
  • Advocate for operational excellence and drive continuous improvement in system reliability, cost-efficiency, and maintainability
  • Work with product management to support product / service scoping activities
  • Work with leadership to define delivery schedules of key features through an agile framework
  • Be a key contributor to overall architecture, framework and design of global platforms

Requirements:

  • Bachelor’s or Master’s degree in Computer Science or a related field
  • 3+ years of experience in large-scale production-grade platform support, including participation in on-call rotations
  • 3+ years of hands-on experience with cloud platforms like AWS, Azure, or GCP
  • 2+ years of experience developing and supporting data pipelines using Apache Airflow, including DAG lifecycle management and scheduling best practices (see the sketch after this list)
  • Troubleshooting task failures, scheduler issues, performance bottlenecks, and error handling
  • Strong programming proficiency in Python, especially for developing and troubleshooting RESTful APIs
  • 1+ years of experience in observability using the ELK stack (Elasticsearch, Logstash, Kibana) or Grafana Stack
  • 2+ years of experience with DevOps and Infrastructure-as-Code tools such as GitHub, Jenkins, Docker, and Terraform
  • 2+ years of hands-on experience with Kubernetes, including managing and debugging cluster resources and workloads within Amazon EKS
  • Exposure to Agile and test-driven development is a plus
  • Experience delivering projects in a highly collaborative, multi-disciplined development team environment
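The Airflow bullets above mention DAG lifecycle management, scheduling best practices, and troubleshooting task failures. As a rough, non-authoritative illustration, here is a minimal Airflow 2.x DAG sketch with an explicit schedule, bounded retries, and a failure callback; the DAG id, task names, and callback behavior are assumptions for the example, not details from the posting.

```python
# A minimal Airflow DAG sketch: daily schedule, bounded retries, and a
# failure callback -- the kind of scheduling and error-handling practices
# the posting refers to. Names here (map_support_example,
# notify_on_failure) are illustrative, not from the job description.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def notify_on_failure(context):
    # Hook for alerting; in practice this might page on-call or post to
    # a chat channel. Here it just logs the failed task instance.
    print(f"Task failed: {context['task_instance'].task_id}")


def extract():
    print("pulling source data")


def load():
    print("writing to the warehouse")


default_args = {
    "retries": 2,                           # retry transient task failures
    "retry_delay": timedelta(minutes=5),
    "on_failure_callback": notify_on_failure,
}

with DAG(
    dag_id="map_support_example",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",             # explicit schedule
    catchup=False,                          # avoid surprise backfills on deploy
    default_args=default_args,
) as dag:
    PythonOperator(task_id="extract", python_callable=extract) \
        >> PythonOperator(task_id="load", python_callable=load)
```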

Nice to have:

  • Exposure to Agile, ideally a strong background with the SAFe methodology
  • Working knowledge of Node.js is considered an added advantage
  • Experience with any monitoring or observability tool is a plus

What we offer:

Kloud9 provides a robust compensation package and a forward-looking opportunity for growth in emerging fields

Additional Information:

Job Posted:
December 09, 2025

Similar Jobs for Managed Airflow Platform (MAP) Support Engineer

Senior Data Engineer

Adswerve is looking for a Senior Data Engineer to join our Adobe Services team. ...
Location: United States
Salary: 130000.00 - 155000.00 USD / Year
Adswerve, Inc.
Expiration Date: Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Data Engineering, Information Systems, or related field (or equivalent experience)
  • 5+ years of experience in a data engineering, analytics, or marketing technology role
  • Hands-on expertise in Adobe Experience Platform (AEP), Real-Time CDP, Journey Optimizer, or similar tools is a big plus
  • Strong proficiency in SQL and hands-on experience with data transformation and modeling
  • Understanding of ETL/ELT workflows (e.g., dbt, Fivetran, Airflow, etc.) and cloud data platforms (e.g., GCP, Snowflake, AWS, Azure)
  • Experience with ingress/egress patterns and interacting with APIs to move data (see the sketch after this list)
  • Experience with Python, or JavaScript in a data or scripting context
  • Experience with customer data platforms (CDPs), event-based tracking, or customer identity management
  • Understanding of Adobe Experience Cloud integrations (e.g., Adobe Analytics, Target, Campaign) is a plus
  • Strong communication skills with the ability to lead technical conversations and present to both technical and non-technical audiences
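As a loose illustration of the API-based data movement mentioned in the requirements above, here is a minimal Python sketch that pages through a hypothetical REST endpoint and lands rows as newline-delimited JSON for a warehouse load; the endpoint, parameters, and output path are all placeholder assumptions.

```python
# Minimal API egress/ingress sketch: page through a REST endpoint and
# write rows as NDJSON for downstream loading. The endpoint URL and
# pagination parameters are hypothetical.
import json

import requests

BASE_URL = "https://api.example.com/v1/events"  # placeholder endpoint


def fetch_pages(page_size=500):
    page = 1
    while True:
        resp = requests.get(
            BASE_URL,
            params={"page": page, "per_page": page_size},
            timeout=30,
        )
        resp.raise_for_status()   # surface HTTP errors early
        rows = resp.json()
        if not rows:
            break                 # empty page means we are done
        yield from rows
        page += 1


with open("events.ndjson", "w") as out:
    for row in fetch_pages():
        out.write(json.dumps(row) + "\n")
```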
Job Responsibility:
  • Lead the end-to-end architecture of data ingestion and transformation in Adobe Experience Platform (AEP) using Adobe Data Collection (Tags), Experience Data Model (XDM), and source connectors
  • Design and optimize data models, identity graphs, and segmentation strategies within Real-Time CDP to enable personalized customer experiences
  • Implement schema mapping, identity resolution, and data governance strategies
  • Collaborate with Data Architects to build scalable, reliable data pipelines across multiple systems
  • Conduct data quality assessments and support QA for new source integrations and activations
  • Write and maintain internal documentation and knowledge bases on AEP best practices and data workflows
  • Simplify complex technical concepts and educate team members and clients in a clear, approachable way
  • Contribute to internal knowledge sharing and mentor junior engineers in best practices around data modeling, pipeline development, and Adobe platform capabilities
  • Stay current on the latest Adobe Experience Platform features and data engineering trends to inform client strategies
What we offer:
  • Medical, dental and vision available for employees
  • Paid time off including vacation, sick leave & company holidays
  • Paid volunteer time
  • Flexible working hours
  • Summer Fridays
  • “Work From Home Light” days between Christmas and New Year’s Day
  • 401(k) Plan with 5% company match and no vesting period
  • Employer Paid Parental Leave
  • Health-care Spending Accounts
  • Dependent-care Spending Accounts
  • Fulltime

Data Engineer (Production Support) for AWS EMR

We are seeking a highly skilled and motivated Data Engineer specializing in Prod...
Location: China, Shanghai
Salary: Not provided
NTT DATA
Expiration Date: Until further notice
Requirements:
  • Proficiency in managing AWS services, particularly EMR, S3, Lambda, Step Functions, and CloudWatch
  • Hands-on experience with distributed data processing frameworks like Apache Spark, Hive, or Presto
  • Experience with Kafka, NiFi, Amazon Web Services (AWS), Maven, Ambari/Tez, Stash, and Bamboo
  • Familiarity with data loading tools such as Talend and Sqoop
  • Familiarity with cloud databases such as AWS Redshift, Aurora MySQL, and PostgreSQL
  • Knowledge of workflow schedulers like Oozie or Apache Airflow
  • Strong knowledge of shell scripting, Python, or Java for scripting and automation
  • Familiarity with SQL and query optimization techniques
  • Experience in production support for large-scale distributed systems or data platforms
  • Ability to analyze logs, diagnose issues, and implement fixes in high-pressure scenarios
Job Responsibility:
  • Monitor, troubleshoot, and resolve issues in real-time for AWS EMR clusters and associated data pipelines
  • Investigate and debug data processing failures, latency issues, and performance bottlenecks
  • Provide support for mission-critical production systems as part of an on-call rotation
  • Manage AWS EMR cluster lifecycle, including creation, scaling, termination, and optimization
  • Ensure effective resource utilization and cost optimization of clusters
  • Apply patches and upgrades to EMR clusters and software components as needed
  • Maintain and support ETL/ELT pipelines built on tools such as Apache Spark, Hive, or Presto running on EMR
  • Ensure data quality, consistency, and availability across pipelines and storage systems like S3, Redshift, MySQL, or Snowflake
  • Implement and monitor automated workflows using AWS tools like Step Functions, Lambda, and CloudWatch
  • Analyze and optimize EMR job performance by tuning Spark/Hive configurations and improving query efficiency (see the sketch after this list)
  • Fulltime
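For the EMR production-support duties above, here is a minimal boto3 sketch that lists active clusters and prints each cluster's state, the kind of routine health check these responsibilities describe. The AWS region is an assumption, and nothing printed here comes from the posting itself.

```python
# Minimal boto3 sketch: enumerate active EMR clusters and check status,
# a routine production-support monitoring step. Region is illustrative.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# List clusters that are currently running work or waiting for steps.
page = emr.list_clusters(ClusterStates=["RUNNING", "WAITING"])
for summary in page["Clusters"]:
    detail = emr.describe_cluster(ClusterId=summary["Id"])["Cluster"]
    status = detail["Status"]
    print(
        summary["Id"],
        detail["Name"],
        status["State"],
        status.get("StateChangeReason", {}).get("Message", ""),
    )
```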


Data Platform Engineer

Location: Canada
Salary: Not provided
Myticas Consulting
Expiration Date: Until further notice
Requirements:
  • Geospatial Data Engineering: Experience designing and maintaining scalable data pipelines for spatial datasets and geospatial analytics workloads
  • Python Data Engineering Stack: Strong Python experience using libraries such as Pandas, NumPy, SQLAlchemy, pytest, and other data engineering tools
  • Geospatial Libraries & Tooling: Hands-on experience with GeoPandas, Rasterio, Xarray, rioxarray, QGIS, or similar spatial processing tools
  • Spatial Databases: Expertise working with PostgreSQL/PostGIS or other spatially enabled databases for large geospatial datasets (see the sketch after this list)
  • Workflow Orchestration: Experience with pipeline orchestration tools such as Airflow, DBT, or similar data workflow frameworks
  • Cloud Data Platforms: Experience deploying and managing data pipelines within AWS or comparable cloud infrastructure environments
  • Containerized Data Workflows: Familiarity with Docker and version control systems (Git) for managing reproducible data engineering environments
  • Geospatial Data Integration: Experience ingesting and harmonizing multi-source geospatial data (public datasets, sensor data, satellite or environmental datasets)
  • Data Quality & Validation: Experience implementing data validation, testing, and quality assurance processes within data pipelines
  • Geospatial Analytics & Visualization: Ability to support spatial analysis, mapping workflows, and geospatial insight generation for technical and non-technical stakeholders
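To ground the GeoPandas/PostGIS items above, here is a minimal sketch of one geospatial load step: read a vector file with GeoPandas, normalize the CRS, and write the result to a PostGIS table via SQLAlchemy. The file path, table name, and connection URL are hypothetical, and to_postgis requires the geoalchemy2 package.

```python
# Minimal geospatial pipeline step: vector file -> reproject -> PostGIS.
# Paths, table name, and connection string are placeholder assumptions.
import geopandas as gpd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:pass@localhost:5432/gis")

# Read any OGR-supported vector source (GeoJSON, Shapefile, GeoPackage...).
gdf = gpd.read_file("parcels.geojson")

# Reproject to a common CRS so downstream spatial joins are consistent.
gdf = gdf.to_crs(epsg=4326)

# Push to a spatially enabled table (requires geoalchemy2 installed).
gdf.to_postgis("parcels", engine, if_exists="replace", index=False)
```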

Principal Data And Analytics Engineer

The Principal Data and Analytics Engineer holds comprehensive responsibility for...
Location: United States
Salary: 108086.00 - 180144.00 USD / Year
O'Reilly Auto Parts
Expiration Date: Until further notice
Requirements:
  • Proven experience architecting enterprise-scale data platforms and ecosystems, including hybrid and cloud-native environments (e.g., GCP BigQuery, Snowflake, Iceberg, Advanced SQL, Erwin, dbt, Kafka, Alation, Collibra)
  • Deep expertise in designing and scaling highly available, secure, and fault-tolerant batch and streaming pipelines with strong emphasis on cost optimization, observability, and latency control
  • Advanced proficiency in semantic modeling, reusable data asset design, and cross-functional data product delivery aligned to medallion architecture
  • Leadership in implementing CI/CD-enabled pipelines, RBAC frameworks, schema evolution strategies, and interoperable data exchange using Iceberg or equivalent table formats
  • Ownership of organization-wide metrics store and semantic layers, ensuring consistency, governance, and performance across reporting, AI, and ML use cases
  • Advanced expertise in programming languages such as Python and Scala, with the ability to architect complex data solutions
  • Demonstrated leadership in designing and overseeing the implementation of scalable, idempotent workflows using orchestration frameworks such as Airflow and Prefect
  • Demonstrated ability to translate business transformation goals into scalable data solutions and reusable patterns
  • Deep understanding of business processes, KPIs, and capability maps across functions such as supply chain, customer, store ops, and finance
  • Proven experience in driving cross-functional data product prioritization, influencing senior stakeholders, and quantifying impact of data initiatives
Job Responsibility:
  • Help define and evolve enterprise data engineering blueprints, including data mesh, medallion architecture, and hybrid cloud data platforms
  • Set strategic direction for data platforms, tools, and services (e.g., Snowflake, GCP BigQuery, dbt, Kafka, Airflow/Prefect) in alignment with future-state architecture and business priorities
  • Architect and design highly scalable, resilient, cost-optimal, and secure data platforms
  • Lead the design and implementation of next-generation data platforms, ensuring fault tolerance, high availability, and optimal performance for petabyte-scale data
  • Establish and enforce organization-wide best practices for data pipeline development, CI/CD for data workflows, automated deployment playbooks, and robust rollback strategies
  • Lead technology evaluation and adoption, proactively researching, evaluating, and championing the integration of cutting-edge data technologies, frameworks, and methodologies
  • Define and scale enterprise knowledge management frameworks that ensure consistent documentation, discoverability, and reusability of data assets across domains
  • Establish and govern standards for metadata management, data lineage, architectural diagrams, and runbooks
  • Lead the design of federated governance models that empower domain-aligned teams to operate autonomously while conforming to centralized policies, frameworks and playbooks
  • Collaborate with data governance, compliance, and security teams to operationalize policy-as-code frameworks for data retention, access control, and PII handling
What we offer:
  • Competitive Wages & Paid Time Off
  • Stock Purchase Plan & 401k with Employer Contributions Starting Day One
  • Medical, Dental, & Vision Insurance with Optional Flexible Spending Account (FSA)
  • Team Member Health/Wellbeing Programs
  • Tuition Educational Assistance Programs
  • Opportunities for Career Growth
  • Fulltime

Senior Product Manager, SDK & AI

As Senior Product Manager of Fivetran’s SDK portfolio, you will own the portfoli...
Location: United States: Denver, Colorado; Oakland, CA
Salary: 184800.00 - 231000.00 USD / Year
Fivetran
Expiration Date: Until further notice
Requirements:
  • PM experience delivering developer-facing products (SDKs, APIs, platforms)
  • Hands-on experience with Python for data analysis and integrating with APIs
  • Knowledge of container management/orchestration platforms, CI/CD, Terraform, and orchestrators such as Airflow or Dagster
  • Hands-on experience leveraging or building AI coding assistants (GitHub Copilot, CodeWhisperer, or custom LLM agents) to accelerate Python development, enforce best practices, and boost code accuracy
  • Proven track record of shipping products for technical users that have delivered measurable results
  • Proven ability to develop deep customer empathy and articulate customer problems
  • Demonstrated experience using data and analytical abilities to solve problems and make decisions
  • Excellent written and verbal communication skills
Job Responsibility:
  • Strategy & roadmap – Prioritize features that cut average custom connector development time from hours to minutes and expand functionality and data movement options provided by the SDK offerings
  • Cross-functional leadership – Partner with Connector Engineering, Solutions Architects, Analytics, Support, and Marketing to launch and grow SDK features
  • Developer experience – Design intuitive code interfaces, keep the docs/code samples in lock-step, and champion first-run success
  • AI-assisted developer workflow – Integrate LLM-powered agents and coding copilots into the SDK tool-chain (e.g., CLI “generate connector” wizards, schema-mapping suggestions, automated test scaffolds) to cut time-to-first-run and reduce error rates
  • Quality & compatibility – Define upgrade and rollout strategies that ensure un-interrupted operation and high reliability and performance
  • Adoption & community metrics – Track revenue, revenue-weighted usage, and churn metrics for the core API, as well as PyPI downloads, example usage, and other community engagement metrics, iterating with DevRel
What we offer:
  • 100% employer-paid medical insurance
  • Generous paid time-off policy (PTO), plus paid sick time, inclusive parental leave policy, holidays, and volunteer days off
  • RSU stock grants
  • Professional development and training opportunities
  • Company virtual happy hours, free food, and fun team-building activities
  • Monthly cell phone stipend
  • Access to an innovative mental health support platform that offers personalized care and resources in areas such as: therapy, coaching, and self-guided mindfulness exercises for all covered employees and their covered dependents
  • Fulltime