
Engineering Manager, Data Movement & Transformation


Airbnb

Location:
United States

Contract Type:
Not provided

Salary:

204000.00 - 255000.00 USD / Year

Job Description:

The Data Movement and Transformation team is part of Airbnb’s Data Infrastructure group, which builds and operates the foundational platforms that power data across Airbnb. Our team focuses on providing a declarative, scalable, and reliable platform for defining and operating data movement and transformation pipelines across multiple storage systems. This platform enables teams across Airbnb to safely move, transform, and materialize data that powers critical product experiences and operational workflows. At the center of this effort is Airbus, Airbnb’s internal platform for managing large-scale data movement pipelines across databases and data infrastructure. Airbus enables teams to declaratively define pipelines while the platform automatically handles provisioning, orchestration, reliability, and schema evolution. With systems operating globally and supporting a wide range of use cases, scalability, reliability, efficiency, usability, and long-term platform sustainability are core to our mission.
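Airbus's actual specification format is internal to Airbnb and not public, but the paragraph's idea of a declarative pipeline definition can be sketched in miniature: the user states *what* to move, and the platform validates the spec and handles the *how* (provisioning, orchestration, schema evolution). Every name below (`PipelineSpec`, the URI scheme, the `on_schema_change` policies) is a hypothetical stand-in, not Airbus's real API.

```python
from dataclasses import dataclass

# Hypothetical sketch only: Airbus's real pipeline spec is internal to Airbnb.
# This illustrates the general shape of a declarative definition, where the
# user states *what* to move and the platform decides *how*.

@dataclass
class PipelineSpec:
    name: str
    source: str                        # e.g. "mysql://orders"
    destination: str                   # e.g. "warehouse://orders_snapshot"
    schedule: str = "@hourly"          # cron-style cadence, platform-managed
    on_schema_change: str = "evolve"   # platform-managed schema evolution

    def validate(self) -> list[str]:
        """Return human-readable validation errors (empty if the spec is valid)."""
        errors = []
        if "://" not in self.source:
            errors.append(f"{self.name}: source must be a typed URI")
        if "://" not in self.destination:
            errors.append(f"{self.name}: destination must be a typed URI")
        if self.on_schema_change not in {"evolve", "fail", "ignore"}:
            errors.append(f"{self.name}: unknown schema-change policy")
        return errors

spec = PipelineSpec(
    name="orders_to_warehouse",
    source="mysql://orders",
    destination="warehouse://orders_snapshot",
)
print(spec.validate())  # → []
```

The point of the declarative style is visible even in this toy: the spec carries no imperative steps, so the platform is free to change how pipelines are provisioned and orchestrated without touching user definitions.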

Job Responsibility:

  • Partner with the Tech Lead and team to define and execute the long-term vision and multi-year roadmap for the Airbus Control Plane
  • Guide architectural direction and stay engaged with key technical designs, serving as a thought partner and sounding board for engineers and tech leads
  • Synthesize complex technical topics and represent the team’s work, priorities, and tradeoffs to senior leadership and partner teams
  • Collaborate with Data Movement, Online Data, and Offline Data teams to ensure Airbus integrates seamlessly across Airbnb’s broader data ecosystem
  • Hire and develop exceptional engineers, providing mentorship, feedback, and career growth opportunities
  • Foster a culture of engineering excellence and operational rigor, balancing thoughtful design with the ability to move fast responsibly
  • Represent Airbnb in open source communities and external partnerships to help shape the ecosystem around the technologies the team builds upon

Requirements:

  • 3+ years of engineering management experience
  • 8+ years of relevant software development experience in a fast-paced technology environment
  • Experience building and operating distributed systems, databases, or large-scale infrastructure services
  • Experience designing and evolving systems intended to operate reliably over long time horizons
  • Experience scaling teams and contributing to organizational design in growing engineering organizations
  • Strong familiarity with at least one major public cloud platform (AWS, GCP, or Azure) and core infrastructure primitives such as compute, storage, networking, Kubernetes, and security systems
  • Excellent communication skills and the ability to collaborate effectively across engineering and product organizations

What we offer:
  • Bonus
  • Equity
  • Benefits
  • Employee Travel Credits

Additional Information:

Job Posted:
March 25, 2026

Employment Type:
Fulltime
Work Type:
Remote work

Similar Jobs for Engineering Manager, Data Movement & Transformation

GTM Salesforce Data Quality & Enrichment Lead

As the GTM Salesforce Data Quality & Enrichment Lead at Vanta, you will own the ...
Location:
United States
Salary:
127000.00 - 149000.00 USD / Year
Vanta
Expiration Date
Until further notice
Requirements:
  • 4+ years of experience in data operations, CRM data management, or GTM systems with deep focus on data quality and enrichment
  • Strong technical proficiency with Salesforce data architecture including objects, fields, relationships, and data flows
  • Hands-on experience managing data enrichment platforms (ZoomInfo, Clearbit, Apollo, etc.) and vendor relationships
  • Proficiency with SQL/SOQL for data analysis, validation, and troubleshooting
  • Proven experience supporting end users (sales, marketing teams) with data-related questions and issues
  • Deep understanding of data quality frameworks including deduplication strategies, validation rules, and data governance
  • Experience managing data movement between Salesforce and external systems including data warehouses (Snowflake, etc.)
  • Strong analytical skills to monitor data health, identify quality issues, and implement observability frameworks
  • Experience with data acquisition strategy including budget management and vendor selection
  • Excellent cross-functional collaboration skills to work with Data Engineering, Operations, and GTM teams
Job Responsibility:
  • Own the strategy and operations for all GTM data in Salesforce, including contacts, accounts, opportunities, and product data
  • Develop and execute the enrichment strategy, determining what data to enrich, when, and through which vendors
  • Manage relationships with data enrichment and acquisition vendors (ZoomInfo, Clearbit, etc.), including contract negotiations, optimization, and budget management
  • Provide front-line data support to end users (SDRs, AEs, CSMs), troubleshooting data issues and ensuring teams understand how to effectively use data
  • Establish and maintain data quality standards including integrity checks, accuracy validation, and deduplication strategy
  • Own data movement architecture—managing how data flows into Salesforce, out to other GTM systems, and into the data warehouse
  • Implement observability and monitoring frameworks to track data health, quality metrics, and system performance
  • Define and execute net new data acquisition strategy, including sourcing decisions, timing, budget allocation, and vendor selection
  • Understand data usage patterns across teams, establish purging policies, and monitor signals that indicate data lifecycle needs
  • Use SQL/SOQL to analyze data quality issues, validate transformations, and coordinate cross-functionally with Data Engineering and GTM Systems teams
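The responsibilities above repeatedly mention using SQL/SOQL for deduplication and validation. A minimal sketch of such a check, with `sqlite3` standing in for a Salesforce data export (the `contacts` table and `email` field are hypothetical, not Vanta's actual schema):

```python
import sqlite3

# Illustrative dedup-validation query of the kind described above.
# sqlite3 stands in for an exported Salesforce dataset; table and field
# names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO contacts (email) VALUES (?)",
    [("a@example.com",), ("b@example.com",), ("a@example.com",)],
)

# Flag emails appearing more than once -- a typical duplicate-detection query.
dupes = conn.execute(
    """
    SELECT email, COUNT(*) AS n
    FROM contacts
    GROUP BY email
    HAVING n > 1
    """
).fetchall()
print(dupes)  # → [('a@example.com', 2)]
```

In a real Salesforce context the same `GROUP BY ... HAVING COUNT(*) > 1` pattern would be expressed in SOQL or run against the warehouse copy of the CRM data.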
What we offer:
  • Equity
  • 401(k) plan
  • Comprehensive medical, dental, and vision coverage, with 100% of employee-only benefit premiums covered for most medical plans
  • Other company perk programs
  • 16 weeks fully-paid Parental Leave for all new parents
  • Health & wellness stipend
  • Remote workspace, internet, and cellphone stipend
  • Commuter benefits for team members who report to the SF and NYC office
  • Family planning benefits
  • Fulltime

Data Modeler

Join us as a Data Modeler at Barclays where you will spearhead the evolution of ...
Location:
India, Pune
Salary:
Not provided
Barclays
Expiration Date
Until further notice
Requirements:
  • Design and implementation of high-quality data models that optimize data access, storage, and analysis
  • Experience in ER Studio
  • Creation of comprehensive and well-maintained documentation of the data models, including entity relationship diagrams, data dictionaries, and usage guidelines
  • Partnering with business stakeholders to understand their data needs and translate these requirements into clear data modelling specifications
  • Collaboration with data engineers to translate into physical data models and throughout the development lifecycle
  • Continuously monitor and optimize the performance of the models to ensure efficient data retrieval and processing
  • Investigation and analysis of data issues related to quality, lineage, controls, and authoritative source identification
  • Execution of data cleansing and transformation tasks to prepare data for analysis
  • Designing and building data pipelines to automate data movement and processing
  • Development and application of advanced analytical techniques, including machine learning and AI, to solve complex business problems
Job Responsibility:
  • Spearhead the evolution of our infrastructure and deployment pipelines, driving innovation and operational excellence
  • Harness cutting-edge technology to build and manage robust, scalable and secure infrastructure, ensuring seamless delivery of our digital solutions
  • Translate business requirements into logical and physical data models which serve as the basis for data engineers to build the data products
  • Work with Data Product Manager of the Value stream use case / BUK Data Domain to capture the requirements, translating these into data models while considering performance implications
  • Test the data models with data engineers and continuously monitoring and optimizing the performance of these data models
  • Implement data quality processes and procedures to ensure data is reliable and trustworthy, then extract actionable insights from it to help the organisation improve its operations and optimise resources
What we offer:
  • Structured approach to hybrid working with fixed ‘anchor’ days onsite
  • Supportive and inclusive culture and environment
  • Opportunity to explore flexible working arrangements
  • Chance to learn from a globally diverse mix of colleagues, including some of the very best minds in banking, finance, technology and business
  • Encouragement to embrace mobility, exploring every part of our operations as you build your career
  • Fulltime

Director Product Manager (Data Orchestration and Insights)

As Director Product Manager, you will define and execute the strategy to establi...
Location:
United States, Palo Alto
Salary:
240000.00 - 325000.00 USD / Year
Workato
Expiration Date
Until further notice
Requirements:
  • 12+ years of product management experience in SaaS or B2B environments, specializing in data management, data orchestration, or infrastructure products
  • Proven success in shipping and scaling complex data products with measurable business impact
  • Strong track record in leading cross-functional teams, influencing product strategy, and driving execution in fast-paced environments
  • Deep expertise in ETL, ELT, Reverse ETL, and data activation pipelines
  • Strong understanding of modern data architecture, including data lakes, data warehouses, structured and semi-structured data processing
  • Experience with data transformation tools (DBT, Coalesce) and orchestration frameworks (Airflow, Dagster) to build scalable pipelines
  • Knowledge of real-time data movement, databases (Oracle, SQL Server, PostgreSQL), and cloud analytics platforms (Snowflake, Databricks, BigQuery)
  • Familiarity with emerging data technologies like Open Table Format, Apache Iceberg, and their impact on enterprise data strategies
  • Hands-on experience with data virtualization and analytics platforms (Denodo, Domo) to enable seamless self-service data exploration and analytics
  • Strong background in cloud platforms (AWS, Azure, Google Cloud) and their data ecosystems
Job Responsibility:
  • Develop and execute the product strategy for a unified data orchestration platform supporting ETL, ELT, and Reverse ETL (data activation) across SaaS, data warehouses, data lakes, and custom sources
  • Define and prioritize built-in transformation capabilities and integrations with tools like DBT and Coalesce to scale ELT pipelines efficiently
  • Ensure seamless data ingestion, movement, and activation across structured, semi-structured, and unstructured data formats
  • Embed data quality, lineage, governance, and operational analytics as core platform features, ensuring enterprises have built-in compliance and data integrity controls
  • Develop native observability and automation tools to monitor pipeline performance, detect anomalies, and proactively enforce data governance policies
  • Ensure the platform meets enterprise security, compliance, and scalability requirements, making Workato the go-to orchestration solution for large-scale deployments
  • Leverage AI to enhance data classification, transformation recommendations, and self-healing pipelines that minimize operational overhead
  • Integrate predictive analytics and semantic enrichment to automate data mapping, improve pipeline efficiency, and surface actionable insights
  • Work with AI research teams to infuse machine learning into Workato’s data services, driving continuous optimization and smarter decision-making
  • Architect a self-service data virtualization platform, enabling users to explore and analyze data from third-party apps, data warehouses, data lakes, Workato usage data, and custom datasets in real time
What we offer:
  • Vibrant and dynamic work environment
  • A multitude of benefits to enjoy inside and outside of work
  • Perks
  • Equity

Senior Data Integration Engineer

Crusoe's mission is to accelerate the abundance of energy and intelligence. We’r...
Location:
United States, Sunnyvale / San Francisco
Salary:
147000.00 - 178000.00 USD / Year
Crusoe
Expiration Date
Until further notice
Requirements:
  • Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or Information Technology, or 5+ years of equivalent experience as a Data Integration Engineer or in a similar role (e.g., Data Engineer, ETL Developer)
  • Has 3+ years of experience designing and implementing highly reliable, high-volume ETL/ELT pipelines
  • Expertise in cloud-based data warehousing and data lake solutions, specifically using Google Cloud Storage (GCS) and Google Cloud Platform (GCP) services
  • Strong proficiency with data integration/ETL platforms like Fivetran and Workato. Ideally has achieved the Workato Integration Developer Certificate
  • Proven experience with DBT (Data Build Tool) for data transformation and modeling in a cloud data warehouse environment
  • Experience with BI tools, preferably Sigma, for data visualization and reporting
  • Strong knowledge of SQL, data modeling (Kimball, Inmon), schema design, and database management
  • Demonstrates strong knowledge of EAI/SOA best practices, solution designs, and methodology & standards related to data movement
  • Can demonstrate prior experience with Role Based Access Controls, Data Management, Environmental Controls, and audit logs
  • Good written, oral, and interpersonal communication skills
Job Responsibility:
  • Data Pipeline Development: Design, implement, and maintain scalable data pipelines (ETL/ELT) using primary tools like Fivetran, Workato, and DBT to move data between critical business systems, including PMIS, ERP, HCM, and cloud environments like GCS/GCP
  • Initial Project Focus: Lead the development of data integrations for our datacenter construction business, linking systems such as DCIS, PMIS, BIM, ERP, Cost Management, and Procurement
  • Data Lake Management: Build and manage data ingestion processes (ETL) to consolidate structured and unstructured data into a centralized Datalake built on GCS
  • Analytics Enablement: Ensure data quality and availability to support both business analytics & reporting, as well as complex forecasting and modeling initiatives
  • Reporting Tool Integration: Build the necessary data integrations to allow visualization and reporting using tools like Sigma and DBT
  • Continually meet with various business units to collect data requirements and propose and implement data pipeline enhancements and modernization
  • Prepare functional specifications (business requirements) and test data as needed for new integrations
  • Work with the Operations Team to create and maintain a roadmap of data integration projects
  • Maintain accurate documentation of code, designs, and integrations, including project tickets, knowledge bases, configuration documents, and as-built diagrams
What we offer:
  • Industry competitive pay
  • Restricted Stock Units in a fast growing, well-funded technology company
  • Health insurance package options that include HDHP and PPO, vision, and dental for you and your dependents
  • Employer contributions to HSA accounts
  • Paid Parental Leave
  • Paid life insurance, short-term and long-term disability
  • Teladoc
  • 401(k) with a 100% match up to 4% of salary
  • Generous paid time off and holiday schedule
  • Cell phone reimbursement
  • Fulltime

Data Modeller

Join us as a Data Modeller at Barclays, where you'll spearhead the evolution of ...
Location:
India, Pune
Salary:
Not provided
Barclays
Expiration Date
Until further notice
Requirements:
  • Partner with business stakeholders to understand their data needs and their desired functionality for the data product. Translate these requirements into clear data modelling specifications
  • Designs and implements high-quality data models that optimize data access, storage, and analysis as well as ensuring alignment to BCDM
  • Creates comprehensive and well-maintained documentation of the data models, including entity relationship diagrams, data dictionaries, and usage guidelines
  • Collaborates with data engineers to test and validate the data models
  • Obtains sign-off from the DPL, DPA and the Technical Product Lead on the logical and physical data models
  • Continuously monitor and optimise the performance of the models and data solutions to ensure efficient data retrieval & processing
  • Collaborates with data engineers to translate into physical data models and throughout the development lifecycle
  • Manages the business as usual ‘BAU’ data model and solution covering enhancements and changes
  • Helps DSA in defining the legacy estate migration and decommissioning roadmap for the assigned BUK data domain
  • You may be assessed on key critical skills relevant for success in role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills
Job Responsibility:
  • Investigation and analysis of data issues related to quality, lineage, controls, and authoritative source identification, documenting data sources, methodologies, and quality findings with recommendations for improvement
  • Designing and building data pipelines to automate data movement and processing
  • Apply advanced analytical techniques to large datasets to uncover trends and correlations, develop validated logical data models, and translate insights into actionable business recommendations that drive operational and process improvements, leveraging machine learning/AI
  • Through data-driven analysis, translate analytical findings into actionable business recommendations, identifying opportunities for operational and process improvements
  • Design and create interactive dashboards and visual reports using applicable tools and automate reporting processes for regular and ad-hoc stakeholder needs
  • Advise and influence decision making, contribute to policy development, and take responsibility for operational effectiveness; collaborate closely with other functions and business divisions
  • Lead a team performing complex tasks, using well developed professional knowledge and skills to deliver on work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraisal of performance relative to objectives and determination of reward outcomes
  • If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others
  • Or, for an individual contributor: lead collaborative assignments and guide team members through structured assignments, identifying when other areas of specialisation are needed to complete them; identify new directions for assignments and/or projects, combining cross-functional methodologies or practices to meet required outcomes
  • Consult on complex issues
What we offer:
  • Hybrid working
  • We’re committed to providing a supportive and inclusive culture and environment for you to work in
  • We celebrate the unique perspectives and experiences each individual brings, believing our differences make us stronger and drive success
  • Fulltime

Amazon S3 Engineer

Role: Amazon S3 Engineer. FTE only.
Location:
United States, Charlotte, NC / Plano, TX
Salary:
125000.00 USD / Year
Realign
Expiration Date
Until further notice
Requirements:
  • Amazon Data Engineer
  • AWS Data Engineer
  • Amazon S3
  • Shell Scripting
  • Autosys
  • Minimum 10 years experience
  • PySpark
  • SQL
  • Oracle
  • Banking knowledge
Job Responsibility:
  • Design, develop, and execute Data Pipelines and test cases to ensure data integrity and quality
  • Develop, implement, and optimize data pipelines that integrate Amazon S3 for scalable data storage, retrieval, and processing within ETL workflows
  • Leverage Amazon S3 for data storage, retrieval, and management within ETL workflows, including the ability to write scripts for data transfer between S3 and other systems
  • Utilize Amazon S3's advanced features such as versioning, lifecycle policies, access controls, and server-side encryption to ensure secure and efficient data management
  • Write, maintain, and troubleshoot scripts or code (using PySpark, Shell, or similar languages) to automate data movement between Amazon S3 and other platforms, ensuring high performance and reliability
  • Collaborate with cross-functional teams to troubleshoot and resolve data-related issues, utilizing Amazon S3 features such as versioning, lifecycle policies, and access management
  • Document ETL processes, maintain technical documentation, and ensure best practices are followed for data stored in Amazon S3 environments
  • Validate HiveQL, HDFS file structures, and data processing within the Hadoop cluster
  • Apply knowledge of metadata-dependent ETL processes and batch/job frameworks
  • Fulltime
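Several bullets above describe scripting automated data movement between Amazon S3 and other platforms. The core of such a script, planning an incremental transfer by comparing content digests, can be sketched in pure Python with plain dicts standing in for object listings; a real pipeline would use `boto3` and compare S3 ETags or checksums instead. All names here are illustrative.

```python
import hashlib

# Illustrative sketch only: planning an incremental S3-to-target transfer
# by comparing content digests. Plain dicts stand in for object listings;
# a real implementation would list objects via boto3 and compare ETags.

def plan_transfer(source: dict[str, bytes], target: dict[str, str]) -> list[str]:
    """Return keys whose content is missing or stale on the target.

    source maps key -> raw bytes; target maps key -> hex digest already stored.
    """
    to_copy = []
    for key, body in source.items():
        digest = hashlib.sha256(body).hexdigest()
        if target.get(key) != digest:
            to_copy.append(key)  # absent or changed: needs copying
    return to_copy

source = {"raw/a.csv": b"1,2,3", "raw/b.csv": b"4,5,6"}
target = {"raw/a.csv": hashlib.sha256(b"1,2,3").hexdigest()}  # a.csv up to date
print(plan_transfer(source, target))  # → ['raw/b.csv']
```

Separating the "what to copy" decision from the copy itself keeps the transfer idempotent: rerunning after a partial failure only moves the objects still out of date.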

Azure Data Engineer

Location:
Canada, Toronto
Salary:
140000.00 CAD / Year
Realign
Expiration Date
Until further notice
Requirements:
  • Strong understanding of data warehousing, data lakes, and best practices for data quality, security, and performance
  • Design, develop, deploy, and maintain scalable data pipelines and ETL/ELT processes using Azure Data Factory (ADF), Azure Databricks, and other Azure data services (e.g., Azure Data Lake Storage, Azure Synapse Analytics)
  • Implement efficient, reusable, and testable code for data transformation, cleansing, and analysis using Python and PySpark within Databricks notebooks and jobs
  • Use ADF pipelines to orchestrate data movement and trigger Databricks notebooks or jobs, ensuring seamless integration between various data sources and destinations
  • Manage data storage, ensure data integrity, optimize data processing, and troubleshoot performance issues related to Databricks and ADF solutions
  • Collaborate closely with data scientists, data analysts, and business stakeholders to gather requirements and translate business needs into technical specifications and data solutions
  • Implement monitoring, error handling, and security best practices across all data workflows and ensure compliance with data governance standards
  • Create and maintain comprehensive technical documentation for data architectures, pipelines, and processes
  • Proven experience in data engineering, data warehousing, or similar
  • Fulltime
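One bullet above calls for "reusable, and testable code for data transformation" that runs inside Databricks notebooks. The idea is easiest to see as a pure function that can be unit-tested outside any ADF/Databricks runtime; in a real job the same logic would be applied over a PySpark DataFrame. The record shape and field names below are hypothetical.

```python
# Illustrative sketch of a reusable, testable transformation: a pure
# function that cleanses rows, written so it can be unit-tested without
# any Databricks/ADF runtime. Field names are hypothetical.

def cleanse(rows: list[dict]) -> list[dict]:
    """Drop rows with no id and normalise string fields."""
    out = []
    for row in rows:
        if not row.get("id"):
            continue  # reject records that cannot be keyed downstream
        out.append({
            "id": row["id"],
            "name": (row.get("name") or "").strip().title(),
        })
    return out

raw = [{"id": 1, "name": "  ada LOVELACE "}, {"id": None, "name": "x"}]
print(cleanse(raw))  # → [{'id': 1, 'name': 'Ada Lovelace'}]
```

Keeping the transformation free of Spark and Azure dependencies is what makes the monitoring and error-handling bullets tractable: the logic is verified in plain unit tests, and the notebook only handles I/O and orchestration.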