Engineering Manager, Data Movement & Transformation

Airbnb

Location:
United States

Contract Type:
Not provided

Salary:

204000.00 - 255000.00 USD / Year

Job Description:

The Data Movement and Transformation team is part of Airbnb’s Data Infrastructure group, which builds and operates the foundational platforms that power data across Airbnb. Our team focuses on providing a declarative, scalable, and reliable platform for defining and operating data movement and transformation pipelines across multiple storage systems. This platform enables teams across Airbnb to safely move, transform, and materialize data that powers critical product experiences and operational workflows.

At the center of this effort is Airbus, Airbnb’s internal platform for managing large-scale data movement pipelines across databases and data infrastructure. Airbus enables teams to declaratively define pipelines while the platform automatically handles provisioning, orchestration, reliability, and schema evolution. With systems operating globally and supporting a wide range of use cases, scalability, reliability, efficiency, usability, and long-term platform sustainability are core to our mission.
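To make "declaratively define pipelines" concrete: the team's description suggests users state *what* should move where, while the platform derives provisioning, orchestration, and retries. Airbus is internal to Airbnb and its real API is not public, so every name and field in this sketch is an illustrative assumption, not the actual interface:

```python
from dataclasses import dataclass

# Hypothetical declarative pipeline spec in the spirit of the Airbus
# description above. All class, field, and URI names are invented for
# illustration only -- the real Airbus API is not publicly documented.
@dataclass
class PipelineSpec:
    name: str
    source: str                                    # where the pipeline reads from
    sink: str                                      # where it materializes data
    schema_evolution: str = "backward-compatible"  # policy the platform enforces
    retries: int = 3                               # reliability handled by the platform

spec = PipelineSpec(
    name="listings-to-warehouse",
    source="mysql://listings.bookings",
    sink="hive://warehouse.bookings_daily",
)

# The user supplies only the "what"; provisioning, orchestration, and
# schema-evolution handling would be derived from the spec by the platform.
print(spec.name, spec.retries)
```

The point of the sketch is the division of labor: the spec carries no scheduling or infrastructure detail, which matches the posting's claim that the platform "automatically handles provisioning, orchestration, reliability, and schema evolution."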

Job Responsibility:

  • Partner with the Tech Lead and team to define and execute the long-term vision and multi-year roadmap for the Airbus Control Plane
  • Guide architectural direction and stay engaged with key technical designs, serving as a thought partner and sounding board for engineers and tech leads
  • Synthesize complex technical topics and represent the team’s work, priorities, and tradeoffs to senior leadership and partner teams
  • Collaborate with Data Movement, Online Data, and Offline Data teams to ensure Airbus integrates seamlessly across Airbnb’s broader data ecosystem
  • Hire and develop exceptional engineers, providing mentorship, feedback, and career growth opportunities
  • Foster a culture of engineering excellence and operational rigor, balancing thoughtful design with the ability to move fast responsibly
  • Represent Airbnb in open source communities and external partnerships to help shape the ecosystem around the technologies the team builds upon

Requirements:

  • 3+ years of engineering management experience
  • 8+ years of relevant software development experience in a fast-paced technology environment
  • Experience building and operating distributed systems, databases, or large-scale infrastructure services
  • Experience designing and evolving systems intended to operate reliably over long time horizons
  • Experience scaling teams and contributing to organizational design in growing engineering organizations
  • Strong familiarity with at least one major public cloud platform (AWS, GCP, or Azure) and core infrastructure primitives such as compute, storage, networking, Kubernetes, and security systems
  • Excellent communication skills and the ability to collaborate effectively across engineering and product organizations
What we offer:
  • bonus
  • equity
  • benefits
  • Employee Travel Credits

Additional Information:

Job Posted:
March 25, 2026

Employment Type:
Full-time
Work Type:
Remote work

Similar Jobs for Engineering Manager, Data Movement & Transformation

GTM Salesforce Data Quality & Enrichment Lead

As the GTM Salesforce Data Quality & Enrichment Lead at Vanta, you will own the ...
Location:
United States
Salary:
127000.00 - 149000.00 USD / Year
Vanta
Expiration Date:
Until further notice
Requirements:
  • 4+ years of experience in data operations, CRM data management, or GTM systems with deep focus on data quality and enrichment
  • Strong technical proficiency with Salesforce data architecture including objects, fields, relationships, and data flows
  • Hands-on experience managing data enrichment platforms (ZoomInfo, Clearbit, Apollo, etc.) and vendor relationships
  • Proficiency with SQL/SOQL for data analysis, validation, and troubleshooting
  • Proven experience supporting end users (sales, marketing teams) with data-related questions and issues
  • Deep understanding of data quality frameworks including deduplication strategies, validation rules, and data governance
  • Experience managing data movement between Salesforce and external systems including data warehouses (Snowflake, etc.)
  • Strong analytical skills to monitor data health, identify quality issues, and implement observability frameworks
  • Experience with data acquisition strategy including budget management and vendor selection
  • Excellent cross-functional collaboration skills to work with Data Engineering, Operations, and GTM teams
Job Responsibility:
  • Own the strategy and operations for all GTM data in Salesforce, including contacts, accounts, opportunities, and product data
  • Develop and execute the enrichment strategy, determining what data to enrich, when, and through which vendors
  • Manage relationships with data enrichment and acquisition vendors (ZoomInfo, Clearbit, etc.), including contract negotiations, optimization, and budget management
  • Provide front-line data support to end users (SDRs, AEs, CSMs), troubleshooting data issues and ensuring teams understand how to effectively use data
  • Establish and maintain data quality standards including integrity checks, accuracy validation, and deduplication strategy
  • Own data movement architecture—managing how data flows into Salesforce, out to other GTM systems, and into the data warehouse
  • Implement observability and monitoring frameworks to track data health, quality metrics, and system performance
  • Define and execute net new data acquisition strategy, including sourcing decisions, timing, budget allocation, and vendor selection
  • Understand data usage patterns across teams, establish purging policies, and monitor signals that indicate data lifecycle needs
  • Use SQL/SOQL to analyze data quality issues, validate transformations, and coordinate cross-functionally with Data Engineering and GTM Systems teams
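Several of the responsibilities above center on deduplication strategy and SQL/SOQL-driven data quality analysis. A minimal, self-contained sketch of one common strategy — collapse contact records that share a normalized email, keeping the most recently modified record. Field names and records are illustrative, not Vanta's actual Salesforce schema:

```python
from datetime import date

# Illustrative CRM contact records; Id/Email/LastModifiedDate mirror common
# Salesforce field names but the data here is invented for the example.
contacts = [
    {"Id": "003A", "Email": "Ada@Example.com ", "LastModifiedDate": date(2024, 1, 5)},
    {"Id": "003B", "Email": "ada@example.com",  "LastModifiedDate": date(2024, 3, 9)},
    {"Id": "003C", "Email": "grace@example.com", "LastModifiedDate": date(2024, 2, 1)},
]

def dedupe_by_email(records):
    """Keep one record per normalized email: the most recently modified."""
    winners = {}
    for rec in records:
        key = rec["Email"].strip().lower()  # normalize before comparing
        if key not in winners or rec["LastModifiedDate"] > winners[key]["LastModifiedDate"]:
            winners[key] = rec
    return list(winners.values())

survivors = dedupe_by_email(contacts)
print([r["Id"] for r in survivors])  # → ['003B', '003C']
```

In production this "survivorship" rule (which record wins, on what key) is exactly the kind of policy the role would own as part of a data quality framework.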
What we offer:
  • Offers Equity
  • medical benefits
  • 401(k) plan
  • other company perk programs
  • Comprehensive medical, dental, and vision coverage, with 100% of employee-only benefit premiums covered for most medical plans
  • 16 weeks fully-paid Parental Leave for all new parents
  • Health & wellness stipend
  • Remote workspace, internet, and cellphone stipend
  • Commuter benefits for team members who report to the SF and NYC office
  • Family planning benefits

Data Modeler

Join us as a Data Modeler at Barclays where you will spearhead the evolution of ...
Location:
India, Pune
Salary:
Not provided
Barclays
Expiration Date:
Until further notice
Requirements:
  • Design and implementation of high-quality data models that optimize data access, storage, and analysis
  • Experience in ER Studio
  • Creation of comprehensive and well-maintained documentation of the data models, including entity relationship diagrams, data dictionaries, and usage guidelines
  • Partnering with business stakeholders to understand their data needs and translate these requirements into clear data modelling specifications
  • Collaboration with data engineers to translate into physical data models and throughout the development lifecycle
  • Continuously monitor and optimize the performance of the models to ensure efficient data retrieval and processing
  • Investigation and analysis of data issues related to quality, lineage, controls, and authoritative source identification
  • Execution of data cleansing and transformation tasks to prepare data for analysis
  • Designing and building data pipelines to automate data movement and processing
  • Development and application of advanced analytical techniques, including machine learning and AI, to solve complex business problems
Job Responsibility:
  • Spearhead the evolution of our infrastructure and deployment pipelines, driving innovation and operational excellence
  • Harness cutting-edge technology to build and manage robust, scalable and secure infrastructure, ensuring seamless delivery of our digital solutions
  • Translate business requirements into logical and physical data models which serve as the basis for data engineers to build the data products
  • Work with Data Product Manager of the Value stream use case / BUK Data Domain to capture the requirements, translating these into data models while considering performance implications
  • Test the data models with data engineers and continuously monitoring and optimizing the performance of these data models
  • Implement data quality processes and procedures, ensuring that data is reliable and trustworthy, then extract actionable insights from it to help the organisation improve its operations and optimise resources
What we offer:
  • Hybrid working, with a structured approach and fixed ‘anchor’ days onsite
  • Supportive and inclusive culture and environment
  • Opportunity to explore flexible working arrangements
  • Chance to learn from a globally diverse mix of colleagues, including some of the very best minds in banking, finance, technology and business
  • Encouragement to embrace mobility, exploring every part of our operations as you build your career

Senior Data Integration Engineer

Crusoe's mission is to accelerate the abundance of energy and intelligence. We’r...
Location:
United States, Sunnyvale; San Francisco
Salary:
147000.00 - 178000.00 USD / Year
Crusoe
Expiration Date:
Until further notice
Requirements:
  • Bachelor's or Master's Degree in Computer Science, Data Science, Engineering, or Information Technology, or 5+ years of equivalent working experience as a Data Integration Engineer or in a similar role (e.g., Data Engineer, ETL Developer)
  • Has 3+ years of experience designing and implementing highly reliable, high-volume ETL/ELT pipelines
  • Expertise in cloud-based data warehousing and data lake solutions, specifically using Google Cloud Storage (GCS) and Google Cloud Platform (GCP) services
  • Strong proficiency with data integration/ETL platforms like Fivetran and Workato. Ideally has achieved the Workato Integration Developer Certificate
  • Proven experience with DBT (Data Build Tool) for data transformation and modeling in a cloud data warehouse environment
  • Experience with BI tools, preferably Sigma, for data visualization and reporting
  • Strong knowledge of SQL, data modeling (Kimball, Inmon), schema design, and database management
  • Demonstrates strong knowledge of EAI/SOA best practices, solution designs, and methodology & standards related to data movement
  • Can demonstrate prior experience with Role Based Access Controls, Data Management, Environmental Controls, and audit logs
  • Good written, oral, and interpersonal communication skills
Job Responsibility:
  • Data Pipeline Development: Design, implement, and maintain scalable data pipelines (ETL/ELT) using primary tools like Fivetran, Workato, and DBT to move data between critical business systems, including PMIS, ERP, HCM, and cloud environments like GCS/GCP
  • Initial Project Focus: Lead the development of data integrations for our datacenter construction business, linking systems such as DCIS, PMIS, BIM, ERP, Cost Management, and Procurement
  • Data Lake Management: Build and manage data ingestion processes (ETL) to consolidate structured and unstructured data into a centralized Datalake built on GCS
  • Analytics Enablement: Ensure data quality and availability to support both business analytics & reporting, as well as complex forecasting and modeling initiatives
  • Reporting Tool Integration: Build the necessary data integrations to allow visualization and reporting using tools like Sigma and DBT
  • Continually meet with various business units to collect data requirements and propose and implement data pipeline enhancements and modernization
  • Prepare functional specifications (business requirements) and test data as needed for new integrations
  • Work with the Operations Team to create and maintain a roadmap of data integration projects
  • Maintain accurate documentation of code, designs, and integrations, including project tickets, knowledge bases, configuration documents, and as-built diagrams
What we offer:
  • Industry competitive pay
  • Restricted Stock Units in a fast growing, well-funded technology company
  • Health insurance package options that include HDHP and PPO, vision, and dental for you and your dependents
  • Employer contributions to HSA accounts
  • Paid Parental Leave
  • Paid life insurance, short-term and long-term disability
  • Teladoc
  • 401(k) with a 100% match up to 4% of salary
  • Generous paid time off and holiday schedule
  • Cell phone reimbursement

Data Modeller

Join us as a Data Modeller at Barclays, where you'll spearhead the evolution of ...
Location:
India, Pune
Salary:
Not provided
Barclays
Expiration Date:
Until further notice
Requirements:
  • Partner with business stakeholders to understand their data needs and their desired functionality for the data product. Translate these requirements into clear data modelling specifications
  • Designs and implements high-quality data models that optimize data access, storage, and analysis as well as ensuring alignment to BCDM
  • Creates comprehensive and well-maintained documentation of the data models, including entity relationship diagrams, data dictionaries, and usage guidelines
  • Collaborates with data engineers to test and validate the data models
  • Obtains sign-off from the DPL, DPA and the Technical Product Lead on the logical and physical data models
  • Continuously monitor and optimise the performance of the models and data solutions to ensure efficient data retrieval & processing
  • Collaborates with data engineers to translate into physical data models and throughout the development lifecycle
  • Manages the business as usual ‘BAU’ data model and solution covering enhancements and changes
  • Helps DSA in defining the legacy estate migration and decommissioning roadmap for the assigned BUK data domain
  • You may be assessed on key critical skills relevant for success in role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills
Job Responsibility:
  • Investigation and analysis of data issues related to quality, lineage, controls, and authoritative source identification, documenting data sources, methodologies, and quality findings with recommendations for improvement
  • Designing and building data pipelines to automate data movement and processing
  • Apply advanced analytical techniques to large datasets to uncover trends and correlations, develop validated logical data models, and translate insights into actionable business recommendations that drive operational and process improvements, leveraging machine learning/AI
  • Through data-driven analysis, translate analytical findings into actionable business recommendations, identifying opportunities for operational and process improvements
  • Design and create interactive dashboards and visual reports using applicable tools and automate reporting processes for regular and ad-hoc stakeholder needs
  • To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. Collaborate closely with other functions/ business divisions
  • Lead a team performing complex tasks, using well developed professional knowledge and skills to deliver on work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraisal of performance relative to objectives and determination of reward outcomes
  • If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others
  • OR for an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identify the need for the inclusion of other areas of specialisation to complete assignments. They will identify new directions for assignments and/ or projects, identifying a combination of cross functional methodologies or practices to meet required outcomes
  • Consult on complex issues
What we offer:
  • Hybrid working
  • We’re committed to providing a supportive and inclusive culture and environment for you to work in
  • We celebrate the unique perspectives and experiences each individual brings, believing our differences make us stronger and drive success

Amazon S3 Engineer

Role: Amazon S3 Engineer Location: Charlotte, NC / Plano, TX FTE only Job Des...
Location:
United States, Plano; Charlotte
Salary:
125000.00 USD / Year
Realign
Expiration Date:
Until further notice
Requirements:
  • Amazon Data Engineer
  • AWS Data Engineer
  • Amazon S3
  • Shell Scripting
  • Autosys
  • Minimum 10 years experience
  • Amazon S3: Data storage, retrieval, and management
  • Scripting for ETL data transfer
  • Advanced features including versioning, lifecycle policies, access controls, and server-side encryption
  • Automation of data movement using Python, Shell, or similar languages
Job Responsibility:
  • Design, develop, and execute Data Pipelines and test cases to ensure data integrity and quality
  • Develop, implement, and optimize data pipelines that integrate Amazon S3 for scalable data storage, retrieval, and processing within ETL workflows
  • Leverage Amazon S3 for data storage, retrieval, and management within ETL workflows, including the ability to write scripts for data transfer between S3 and other systems
  • Utilize Amazon S3's advanced features such as versioning, lifecycle policies, access controls, and server-side encryption to ensure secure and efficient data management
  • Write, maintain, and troubleshoot scripts or code (using PySpark, Shell, or similar languages) to automate data movement between Amazon S3 and other platforms, ensuring high performance and reliability
  • Collaborate with cross-functional teams to troubleshoot and resolve data-related issues, utilizing Amazon S3 features such as versioning, lifecycle policies, and access management
  • Document ETL processes, maintain technical documentation, and ensure best practices are followed for data stored in Amazon S3 environments
  • Familiarity with Hadoop or Spark is often preferred
  • Validate HiveQL, HDFS file structures, and data processing within the Hadoop cluster
  • Strong analytical and troubleshooting skills
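The responsibilities above lean heavily on S3 lifecycle policies for secure, cost-efficient data management. A hedged sketch of what such a configuration might look like — the bucket prefix, rule ID, and day counts are illustrative assumptions. With boto3, this dict would be passed to `put_bucket_lifecycle_configuration`; here it is only built and round-trip validated so the example stays self-contained:

```python
import json

# Illustrative S3 lifecycle configuration: transition ETL landing data to
# cheaper storage classes over time, then expire it after a year.
# Prefix, rule ID, and day thresholds are invented for the example.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "etl/landing/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access after 30 days
                {"Days": 90, "StorageClass": "GLACIER"},      # cold archive after 90 days
            ],
            "Expiration": {"Days": 365},                      # delete after one year
        }
    ]
}

# Lifecycle configs are plain JSON documents, so they serialize cleanly
# for review, version control, or an API call.
assert json.loads(json.dumps(lifecycle)) == lifecycle
print(lifecycle["Rules"][0]["ID"])
```

Pairing a rule like this with versioning and server-side encryption (both named in the requirements) is the standard way to meet retention and security goals without hand-managed cleanup jobs.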

Senior Data Engineer

Data Engineering at Blackbaud is responsible for ingestion, transformation and p...
Location:
India, Hyderabad
Salary:
Not provided
Blackbaud
Expiration Date:
Until further notice
Requirements:
  • 6+ years of software and/or data engineering experience
  • Experience with big data technologies such as Spark, Databricks, Delta lakes, Hive, etc.
  • 4+ years of experience in core languages such as Python, Scala or Java (Preferably Python)
  • Hands-on experience leveraging PaaS offerings in a public cloud environment (Azure preferred)
  • Experience with Big Data design patterns and distributed computing tools/frameworks
  • Proven expertise building big data pipelines (batch processing, real-time streaming)
  • Working knowledge of TDD, CI/CD concepts and tools (Preferably Azure DevOps)
  • Advanced understanding of unit testing/integration testing/QA/Validation
  • Knowledge of data security and authentication protocols
  • Experience in areas of data governance, privacy and regulation and professional experience with architectural approaches to data security
Job Responsibility:
  • Design, develop and operate high performance, large volume data structures for data-powered products and data science
  • Implement efficient, distributed and scalable pipelines and integrate data from multiple sources to create data products
  • Implement design patterns that support data ingestion, data movement, transformation, aggregation, and much more
  • Collaborate with Product managers, Software engineers, and Data Scientists, and work towards achieving key results
  • Build high quality production level code, test and deploy data pipelines
  • Undertake complex tasks that are large and diverse in scope and/or critical in nature
  • Design and develop breakthrough products, services or technological advancements that expand our business
  • Participate in code reviews, brainstorm design approaches with peers and mentor junior engineers
  • Work in fast paced agile environment, participate in scrum ceremonies and help ensure that the team meets sprint commitments

Data Engineer

Join the largest, fastest-growing specialist family law firm in the country. We ...
Location:
United Kingdom, Leeds; Manchester; Harrogate
Salary:
45000.00 - 50000.00 GBP / Year
Stowe Family Law
Expiration Date:
Until further notice
Requirements:
  • Strong SQL skills and understanding of relational databases
  • Experience in a data analyst or reporting role
  • Understanding of data warehousing and modelling concepts
  • A genuine interest in progressing into a data engineering role
  • A sharp, detail-oriented problem solver
Job Responsibility:
  • Support the development and maintenance of data pipelines and workflows
  • Use Azure Data Factory to automate and orchestrate data movement
  • Write and optimise SQL queries to transform and manage data
  • Assist with data modelling and transformations (dbt)
  • Collaborate with our BI team to deliver clean, reliable datasets for Power BI
  • Ensure data quality, consistency, and governance across systems
  • Integrate data from internal and external sources, including APIs
  • Work closely with stakeholders to understand reporting and data needs
  • Helping acquired firms build data capability from scratch
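The core loop described above — write and optimise SQL to transform raw records into clean datasets for BI — can be sketched end to end with an in-memory SQLite table. Table and column names are invented for illustration; the posting itself names SQL, Azure Data Factory, and dbt as the actual tools:

```python
import sqlite3

# Self-contained sketch of a reporting transform: raw rows in, an
# aggregated dataset out, ready for a BI tool such as Power BI.
# The "matters" table and its columns are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE matters (id INTEGER, office TEXT, fee REAL)")
con.executemany(
    "INSERT INTO matters VALUES (?, ?, ?)",
    [(1, "Leeds", 1200.0), (2, "Leeds", 800.0), (3, "Manchester", 950.0)],
)

rows = con.execute(
    "SELECT office, COUNT(*) AS matters, SUM(fee) AS total_fees "
    "FROM matters GROUP BY office ORDER BY office"
).fetchall()
print(rows)  # → [('Leeds', 2, 2000.0), ('Manchester', 1, 950.0)]
```

In the role as described, the same shape of query would live in a dbt model or an Azure Data Factory pipeline rather than inline Python, but the transform logic is identical.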
What we offer:
  • Bonus
  • A wellbeing culture, including Mental Wellbeing days, and access to counselling sessions
  • Volunteering leave
  • Diversity public holidays
  • 26 days holiday
  • Enhanced adoption, maternity and paternity pay
  • Paid leave for fertility treatment
  • Emergency dependants leave
  • Bereavement leave
  • Medicash health insurance - 24/7 GP’s, dental, counselling, gym discounts

Amazon S3 Engineer

Role: Amazon S3 Engineer. Location: Charlotte, NC / Plano, TX. FTE only.
Location:
United States, Charlotte, NC / Plano, TX
Salary:
125000.00 USD / Year
Realign
Expiration Date:
Until further notice
Requirements:
  • Amazon Data Engineer
  • AWS Data Engineer
  • Amazon S3
  • Shell Scripting
  • Autosys
  • Minimum 10 years experience
  • PySpark
  • SQL
  • Oracle
  • Banking knowledge
Job Responsibility:
  • Design, develop, and execute Data Pipelines and test cases to ensure data integrity and quality
  • Develop, implement, and optimize data pipelines that integrate Amazon S3 for scalable data storage, retrieval, and processing within ETL workflows
  • Leverage Amazon S3 for data storage, retrieval, and management within ETL workflows, including the ability to write scripts for data transfer between S3 and other systems
  • Utilize Amazon S3's advanced features such as versioning, lifecycle policies, access controls, and server-side encryption to ensure secure and efficient data management
  • Write, maintain, and troubleshoot scripts or code (using PySpark, Shell, or similar languages) to automate data movement between Amazon S3 and other platforms, ensuring high performance and reliability
  • Collaborate with cross-functional teams to troubleshoot and resolve data-related issues, utilizing Amazon S3 features such as versioning, lifecycle policies, and access management
  • Document ETL processes, maintain technical documentation, and ensure best practices are followed for data stored in Amazon S3 environments
  • Validate HiveQL, HDFS file structures, and data processing within the Hadoop cluster
  • Knowledge in Metadata dependent ETL process and batch/job framework