Staff Data Platform Engineer

Airwallex

Location:
Singapore, Singapore

Contract Type:
Not provided

Salary:
Not provided

Job Description:

As a high-level architect (staff engineer), you will oversee the strategy, architecture, development, and operation of Airwallex's data and AI platforms. You will play a pivotal role in influencing data-driven and AI-powered decision-making processes. A key aspect of your role will also include building a high-performing team that excels in a fast-paced environment. Your leadership will be critical in mentoring the team, promoting a culture of innovation, and driving technical excellence.

Job Responsibility:

  • Spearhead the identification and resolution of Airwallex-wide challenges using cutting-edge data platform solutions
  • Provide visionary technical direction, fostering a community within Airwallex's data realm, and actively leading in solving complex problems hands-on
  • Advocate for best practices across the data platform, instilling a culture of craftsmanship and innovation
  • Mentor the data platform team, nurturing both technical and professional development.

Requirements:

  • A minimum of 8 years of experience in Data Platform or an equivalent combination of work and academic exposure in a quantitative field
  • Proven experience leading company-wide initiatives across multiple teams or influencing tech roadmap planning
  • Effective collaboration with diverse teams and stakeholders to drive tangible business outcomes
  • Demonstrated ability to balance execution and velocity with in-depth research, statistical understanding, and scalable design
  • Track record of mentoring and investing in the development of scientists, engineers, and peers
  • Experience providing technical leadership on significant projects, covering ETL frameworks, metrics stores, infrastructure management, and data security
  • Proficiency in building, deploying, and maintaining reliable multi-geographical data pipelines at scale
  • Familiarity with workflow or orchestration frameworks such as Airflow, DBT, etc.

Nice to have:

  • Hands-on design experience in crafting data processing patterns for a modern Lakehouse architecture
  • Experience contributing to the design and development of standard framework modules, high-performance services, and client libraries for big data using tools like GCP, Databricks, BigQuery, DataProc, Kafka, Kubernetes, Spark, DataFlow, Google Cloud Storage, and Airflow
  • Excellent written and verbal communication skills tailored for diverse audiences (leadership, users, company-wide)
  • Ability to rapidly evaluate various technologies and conduct proof of concepts to drive architecture design
  • Experience thriving in a complex environment.

What we offer:
  • Competitive salary plus valuable equity within Airwallex
  • Collaborative open office space with a fully stocked kitchen
  • Regular team-building events
  • Freedom to be creative.

Additional Information:

Job Posted:
February 21, 2026

Employment Type:
Fulltime
Work Type:
On-site work

Similar Jobs for Staff Data Platform Engineer

Staff Data Engineer

Checkr is hiring an experienced Staff Data Engineer to join their Data Platform ...
Location
United States, San Francisco; Denver
Salary:
166000.00 - 230000.00 USD / Year
Checkr
Expiration Date
Until further notice
Requirements
  • 10+ years of designing, implementing, and delivering highly scalable and performant data platforms
  • experience building large-scale (100s of Terabytes and Petabytes) data processing pipelines - batch and stream
  • experience with ETL/ELT, stream and batch processing of data at scale
  • expert level proficiency in PySpark, Python, and SQL
  • expertise in data modeling, relational databases, NoSQL (such as MongoDB) data stores
  • experience with big data technologies such as Kafka, Spark, Iceberg, Datalake, and AWS stack (EKS, EMR, Serverless, Glue, Athena, S3, etc.)
  • an understanding of Graph and Vector data stores (preferred)
  • knowledge of security best practices and data privacy concerns
  • strong problem-solving skills and attention to detail
  • experience/knowledge of data processing platforms such as Databricks or Snowflake.
Job Responsibility
  • Architect, design, lead and build end-to-end performant, reliable, scalable data platform
  • monitor, investigate, triage, and resolve production issues as they arise for services owned by the team
  • mentor, guide and work with junior engineers to deliver complex and next-generation features
  • partner with engineering, product, design, and other stakeholders in designing and architecting new features
  • create and maintain data pipelines and foundational datasets to support product/business needs
  • experiment with rapid MVPs and encourage validation of customer needs
  • design and build database architectures with massive and complex data
  • develop audits for data quality at scale
  • create scalable dashboards and reports to support business objectives and enable data-driven decision-making
  • troubleshoot and resolve complex issues in production environments.
What we offer
  • A fast-paced and collaborative environment
  • learning and development allowance
  • competitive cash and equity compensation and opportunity for advancement
  • 100% medical, dental, and vision coverage
  • up to $25K reimbursement for fertility, adoption, and parental planning services
  • flexible PTO policy
  • monthly wellness stipend
  • home office stipend
  • in-office perks such as lunch four times a week, a commuter stipend, and an abundance of snacks and beverages.

Staff Data Engineer

We are looking for a Staff Data Engineer to take ownership of Blinq’s data engin...
Location
Australia, Sydney; Melbourne
Salary:
Not provided
Blinq Technologies
Expiration Date
Until further notice
Requirements
  • Extensive experience with SQL, data modeling tools like Dataform, programming languages such as Python or R, and data system architecture design
  • Hands-on expertise with a modern event data stack and familiarity with tools such as Segment, Amplitude, BigQuery, Looker Studio, Google Analytics, SSGTM, CDP and AWS/GCP
  • A strong vision for building a scalable, reliable, and cutting-edge analytics platform, with the ability to develop a roadmap to achieve this
  • A solid grounding in mathematics, statistics, and data visualisation
  • Experience in MarTech and working with data systems that support machine learning workflows is desirable
  • High proficiency in A/B testing and event system design is also highly desirable
Job Responsibility
  • Building and optimising data pipelines which are scalable to support the collection, transformation, and loading (ETL) of data into databases, warehouses or lakes
  • Ensuring data quality and accuracy by implementing robust validation and observability processes
  • Collaborating closely with Product, Marketing, Sales and Engineering teams to align data systems with business goals and measure success effectively
  • Driving cross-functional efforts to address data integrity, reporting, and insights, while remediating inconsistencies and gaps
  • Documenting data processes, architecture, and workflows to ensure clarity and continuity across teams
What we offer
  • Equity & ownership
  • Competitive salary & growth path
  • Generous paid time off: At least 20 days to fully disconnect each year, with a flexible policy beyond that
  • Parental leave that grows with you: 12 to 26 weeks full pay, based on tenure
  • Free food: Enjoy daily breakfast and lunch at some of our offices, plus an always-stocked snack bar

Staff Data Engineer

We’re looking for a Staff Data Engineer to own the design, scalability, and reli...
Location
United States, San Jose
Salary:
150000.00 - 250000.00 USD / Year
Figure
Expiration Date
Until further notice
Requirements
  • Experience owning or architecting large-scale data platforms — ideally in EV, autonomous driving, or robotics fleet environments, where telemetry, sensor data, and system metrics are core to product decisions
  • Deep expertise in data engineering and architecture (data modeling, ETL orchestration, schema design, transformation frameworks)
  • Strong foundation in Python, SQL, and modern data stacks (dbt, Airflow, Kafka, Spark, BigQuery, ClickHouse, or Snowflake)
  • Experience building data quality, validation, and observability systems to detect regressions, schema drift, and missing data
  • Excellent communication skills — able to understand technical needs from domain experts (controls, perception, operations) and translate complex data patterns into clear, actionable insights for engineers and leadership
  • First-principles understanding of electrical and mechanical systems, including motors, actuators, encoders, and control loops
Job Responsibility
  • Architect and evolve Figure’s end-to-end platform data pipeline — from robot telemetry ingestion to warehouse transformation and visualization
  • Improve and maintain existing ETL/ELT pipelines for scalability, reliability, and observability
  • Detect and mitigate data regressions, schema drift, and missing data via validation and anomaly-detection frameworks
  • Identify and close gaps in data coverage, ensuring high-fidelity metrics coverage across releases and subsystems
  • Define the tech stack and architecture for the next generation of our data warehouse, transformation framework, and monitoring layer
  • Collaborate with robotics domain experts (controls, perception, Guardian, fall-prevention) to turn raw telemetry into structured metrics that drive engineering/business decisions
  • Partner with fleet management, operators, and leadership to design and communicate fleet-level KPIs, trends, and regressions in clear, actionable ways
  • Enable self-service access to clean, documented datasets for engineers
  • Develop tools and interfaces that make fleet data accessible and explorable for engineers without deep data backgrounds

Staff Data Engineer

A VC-backed retail AI scale-up is expanding its engineering team and is looking ...
Location
United States
Salary:
Not provided
Orbis Consultants
Expiration Date
Until further notice
Requirements
  • 5+ years in software development and data engineering with ownership of production-grade systems
  • Proven expertise in Spark (Databricks, EMR, or similar) and scaling it in production
  • Strong knowledge of distributed computing and modern data modeling approaches
  • Solid programming skills in Python, with an emphasis on clean, maintainable code
  • Hands-on experience with SQL and NoSQL databases (e.g., PostgreSQL, DynamoDB, Cassandra)
  • Excellent communicator who can influence and partner across teams
Job Responsibility
  • Design and evolve distributed, cloud-based data infrastructure that supports both real-time and batch processing at scale
  • Build high-performance data pipelines that power analytics, AI/ML workloads, and integrations with third-party platforms
  • Champion data reliability, quality, and observability, introducing automation and monitoring across pipelines
  • Collaborate closely with engineering, product, and AI teams to deliver data solutions for business-critical initiatives
What we offer
  • Fully remote
  • Great equity

Staff Data Engineer

We are seeking a Staff Data Engineer to architect and lead our entire data infra...
Location
United States, New York; San Francisco
Salary:
170000.00 - 210000.00 USD / Year
Taskrabbit
Expiration Date
Until further notice
Requirements
  • 7-10 years of experience in Data Engineering
  • Expertise in building and maintaining ELT data pipelines using modern tools such as dbt, Airflow, and Fivetran
  • Deep experience with cloud data warehouses such as Snowflake, BigQuery, or Redshift
  • Strong data modeling skills (e.g., dimensional modeling, star/snowflake schemas) to support both operational and analytical workloads
  • Proficient in SQL and at least one general-purpose programming language (e.g., Python, Java, or Scala)
  • Experience with streaming data platforms (e.g., Kafka, Kinesis, or equivalent) and real-time data processing patterns
  • Familiarity with infrastructure-as-code tools like Terraform and DevOps practices for managing data platform components
  • Hands-on experience with BI and semantic layer tools such as Looker, Mode, Tableau, or equivalent
Job Responsibility
  • Design, build, and maintain scalable, reliable data pipelines and infrastructure to support analytics, operations, and product use cases
  • Develop and evolve dbt models, semantic layers, and data marts that enable trustworthy, self-serve analytics across the business
  • Collaborate with non-technical stakeholders to deeply understand their business needs and translate them into well-defined metrics and analytical tools
  • Lead architectural decisions for our data platform, ensuring it is performant, maintainable, and aligned with future growth
  • Build and maintain data orchestration and transformation workflows using tools like Airflow, dbt, and Snowflake (or equivalent)
  • Champion data quality, documentation, and observability to ensure high trust in data across the organization
  • Mentor and guide other engineers and analysts, promoting best practices in both data engineering and analytics engineering disciplines
What we offer
  • Employer-paid health insurance
  • 401k match with immediate vesting
  • Generous and flexible time off with 2 company-wide closure weeks
  • Taskrabbit product stipends
  • Wellness + productivity + education stipends
  • IKEA discounts
  • Reproductive health support

Staff Product Manager, Data Platform

In this role as a Staff Product Manager, Data Platform, you will be responsible ...
Location
United States, San Francisco
Salary:
207000.00 - 244000.00 USD / Year
Checkr
Expiration Date
Until further notice
Requirements
  • 7+ years as a PM, with recent experience working on a data or platform product
  • Strong understanding of platform engineering principles and technologies
  • Experience in understanding the data, legal, and technical nuances of supporting complex products
  • Exposure to machine learning / AI to solve complex data challenges, such as transformation, deduplication, and enrichment
  • Exposure to working in the identity space
  • Attention to detail is a must. You have an instinct for good platform design
  • An ability to manage project ambiguity, complexity, and interdependencies in an organized and structured way
  • Ability to be fly-high and fly-low in a high-performance environment
  • Excellent verbal and written communication skills
Job Responsibility
  • Collaborate with engineering and cross-functional teams to define the platform product vision and strategy
  • Conduct research and gather feedback from stakeholders to inform platform product decisions
  • Develop and prioritize the platform product roadmap, ensuring alignment with company goals and customer needs
  • Work closely with engineering, design, and data science teams to deliver high-quality platform features and improvements on time
  • Define and analyze key metrics to measure platform product success and drive continuous improvement
  • Communicate platform product updates and insights to stakeholders across the organization
  • Lead platform product launches and ensure successful adoption by internal teams and customers
What we offer
  • A fast-paced and collaborative environment
  • Learning and development allowance
  • Competitive cash and equity compensation and opportunity for advancement
  • 100% medical, dental, and vision coverage
  • Up to $25K reimbursement for fertility, adoption, and parental planning services
  • Flexible PTO policy
  • Monthly wellness stipend, home office stipend
  • In-office perks such as lunch four times a week, a commuter stipend, and an abundance of snacks and beverages

Software Engineer Staff - Data Scientist

Designs, develops, troubleshoots and debugs software programs for software enhan...
Location
India, Bangalore
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements
  • Master's or PhD in Computer Science, Electrical Engineering, Statistics, Applied Math, or equivalent fields with a strong mathematical background
  • Proficiency in Python, R, SQL, or other programming languages for data analysis
  • Experience with data wrangling, manipulation, and visualization tools and libraries such as pandas, numpy, scikit-learn, matplotlib, seaborn, etc.
  • Knowledge of machine learning concepts and techniques such as supervised and unsupervised learning, regression, classification, clustering, dimensionality reduction, etc.
  • Familiarity with cloud computing platforms and services such as AWS, Azure, or Google Cloud
  • Strong analytical and problem-solving skills
Job Responsibility
  • Collect, clean, and transform data from various sources and formats for model training
  • Perform exploratory data analysis and visualization to understand patterns and trends
  • Build, test, and deploy predictive models and algorithms using appropriate tools and frameworks to a production cloud environment
  • Communicate findings and recommendations to stakeholders and clients using clear and compelling reports and presentations
  • Collaborate with other data scientists, engineers, and domain experts on cross-functional projects
What we offer
  • Comprehensive suite of benefits that supports physical, financial and emotional wellbeing
  • Programs for personal and professional development
  • Inclusive environment that celebrates individual uniqueness