The Payments team at Airbnb is crucial to the company's operations, handling critical data related to compliance with Tax, Payments, and Legal regulations. We also manage application data for tools such as CRM, Jira, and Workday, which are essential to Airbnb's business. Joining this team means working with cross-functional stakeholders, designing scalable solutions, and contributing to a world-class data engineering environment with an emphasis on quality, scalability, and robust engineering practices. Our team's charter is to enable Airbnb to comply with Tax, Payments, and Legal regulations so that our Hosts can continue to operate in regulated geographies. Our products ingest, process, validate, and deliver large datasets to government authorities (often paired with tax remittance), so our data must be of the highest accuracy and quality.
Job Responsibilities:
Design, build, and maintain robust and efficient data pipelines that collect, process, and store data from various sources, including user interactions, listing details, and external data feeds
Develop data models that enable the efficient analysis and manipulation of data for merchandising optimization. Ensure data quality, consistency, and accuracy
Collaborate with cross-functional teams, including Data Scientists, Product Managers, and Software Engineers, to define data requirements and deliver data solutions that drive merchandising and sales improvements
Contribute to the broader Data Engineering community at Airbnb, influencing tooling and standards to improve culture and productivity
Improve code and data quality by leveraging and contributing to internal tools to automatically detect and mitigate issues
Requirements:
6+ years of relevant industry experience
BE/B.Tech in Computer Science or a relevant technical degree
Extensive experience designing, building, and operating robust distributed data platforms (e.g., Spark, Kafka, Flink, HBase) and handling data at the petabyte scale
Strong knowledge of Java, Scala, or Python, and expertise with data processing technologies and query authoring (SQL)
Demonstrated ability to analyze large data sets to identify gaps and inconsistencies, provide data insights, and advance effective product solutions
Expertise with ETL schedulers such as Apache Airflow, Luigi, Oozie, AWS Glue, or similar frameworks
Solid understanding of data warehousing concepts and hands-on experience with relational databases (e.g., PostgreSQL, MySQL) and columnar databases (e.g., Redshift, BigQuery, HBase, ClickHouse)