Staff Data Engineer

Tonal

Location:
United States, San Francisco

Contract Type:
Not provided

Salary:

$200,000 – $235,000 USD per year

Job Description:

As a Staff Data Engineer, you will shape the backbone of Tonal’s data platform. You’ll design and scale systems that bring together massive volumes of workout, sensor, and health-related data while ensuring security, reliability, and trust. This role requires a deep understanding of compliance and security standards and the ability to build infrastructure that protects sensitive information while fueling product innovation, AI, and analytics.

Job Responsibility:

  • Architect secure and scalable data systems that support Tonal’s growth and meet regulatory standards
  • Build and optimize data models and pipelines across diverse sources: sensors, workouts, health integrations, CRM, payments, and content
  • Establish controls for access, encryption, anonymization, monitoring, and auditability
  • Define and enforce best practices for managing sensitive data, including PHI and PII
  • Collaborate with teams across Product, Engineering, Sports Science, and Healthcare to translate needs into compliant solutions
  • Conduct risk assessments and implement safeguards guided by NIST frameworks
  • Support SOC 2 audits by documenting and demonstrating effective security controls
  • Mentor engineers and scientists, setting high standards for secure data engineering
  • Continuously evolve the platform, introducing new tools and frameworks to balance innovation with strong regulatory posture

Requirements:

  • 8+ years of experience in data engineering, or 6+ years with a Master’s degree (or equivalent)
  • Strong skills in SQL, Python, and distributed data processing (Spark, Databricks, or similar)
  • Experience building pipelines with DBT, Airflow, Fivetran, or related tools
  • Background in data modeling and warehousing with systems like Snowflake, Databricks, or Redshift
  • Hands-on experience working with regulated environments and sensitive data
  • Familiarity with frameworks such as HIPAA, SOC 2, and NIST for security and compliance
  • Skilled in access control design, audit logging, encryption, and governance
  • Excellent communicator who can explain complex tradeoffs to both technical and non-technical audiences
  • Known for technical leadership and mentoring, raising the bar for engineering quality

Nice to have:

  • Experience with fitness, healthcare, IoT, or sensor data
  • Knowledge of privacy-preserving techniques (k-anonymity, l-diversity, differential privacy)
  • Exposure to production ML/AI pipelines involving sensitive data
  • Background in connected fitness, digital health, or regulated healthcare products

What we offer:
  • Equity
  • Health insurance
  • Retirement savings benefits
  • Life insurance and disability benefits
  • Flexible paid time off
  • Parental leave
  • Other additional benefits (location dependent)

Additional Information:

Job Posted:
February 18, 2026

Employment Type:
Full-time
Work Type:
Hybrid work
Similar Jobs for Staff Data Engineer

Staff Data Engineer

Checkr is hiring an experienced Staff Data Engineer to join their Data Platform ...
Location:
United States, San Francisco; Denver
Salary:
$166,000 – $230,000 USD per year
Checkr
Expiration Date
Until further notice
Requirements:
  • 10+ years of designing, implementing, and delivering a highly scalable and performant data platform
  • Experience building large-scale (hundreds of terabytes to petabytes) data processing pipelines, both batch and stream
  • Experience with ETL/ELT and stream and batch processing of data at scale
  • Expert-level proficiency in PySpark, Python, and SQL
  • Expertise in data modeling, relational databases, and NoSQL data stores (such as MongoDB)
  • Experience with big data technologies such as Kafka, Spark, Iceberg, data lakes, and the AWS stack (EKS, EMR, Serverless, Glue, Athena, S3, etc.)
  • An understanding of graph and vector data stores (preferred)
  • Knowledge of security best practices and data privacy concerns
  • Strong problem-solving skills and attention to detail
  • Experience with data processing platforms such as Databricks or Snowflake
Job Responsibility:
  • Architect, design, lead, and build an end-to-end performant, reliable, scalable data platform
  • Monitor, investigate, triage, and resolve production issues as they arise for services owned by the team
  • Mentor, guide, and work with junior engineers to deliver complex and next-generation features
  • Partner with engineering, product, design, and other stakeholders in designing and architecting new features
  • Create and maintain data pipelines and foundational datasets to support product/business needs
  • Experiment with rapid MVPs and encourage validation of customer needs
  • Design and build database architectures with massive and complex data
  • Develop audits for data quality at scale
  • Create scalable dashboards and reports to support business objectives and enable data-driven decision-making
  • Troubleshoot and resolve complex issues in production environments
What we offer:
  • A fast-paced and collaborative environment
  • Learning and development allowance
  • Competitive cash and equity compensation and opportunity for advancement
  • 100% medical, dental, and vision coverage
  • Up to $25K reimbursement for fertility, adoption, and parental planning services
  • Flexible PTO policy
  • Monthly wellness stipend
  • Home office stipend
  • In-office perks such as lunch four times a week, a commuter stipend, and an abundance of snacks and beverages
  • Full-time

Staff Data Engineer

We are looking for a Staff Data Engineer to take ownership of Blinq’s data engin...
Location:
Australia, Sydney; Melbourne
Salary:
Not provided
Blinq Technologies
Expiration Date
Until further notice
Requirements:
  • Extensive experience with SQL, data modeling tools like Dataform, programming languages such as Python or R, and data system architecture design
  • Hands-on expertise with a modern event data stack and familiarity with tools such as Segment, Amplitude, BigQuery, Looker Studio, Google Analytics, SSGTM, CDP and AWS/GCP
  • A strong vision for building a scalable, reliable, and cutting-edge analytics platform, with the ability to develop a roadmap to achieve this
  • A solid grounding in mathematics, statistics, and data visualisation
  • Experience in MarTech and working with data systems that support machine learning workflows is desirable
  • High proficiency in A/B testing and event system design is also highly desirable
Job Responsibility:
  • Building and optimising data pipelines which are scalable to support the collection, transformation, and loading (ETL) of data into databases, warehouses or lakes
  • Ensuring data quality and accuracy by implementing robust validation and observability processes
  • Collaborating closely with Product, Marketing, Sales and Engineering teams to align data systems with business goals and measure success effectively
  • Driving cross-functional efforts to address data integrity, reporting, and insights, while remediating inconsistencies and gaps
  • Documenting data processes, architecture, and workflows to ensure clarity and continuity across teams
What we offer:
  • Equity & ownership
  • Competitive salary & growth path
  • Generous paid time off: At least 20 days to fully disconnect each year, with a flexible policy beyond that
  • Parental leave that grows with you: 12 to 26 weeks full pay, based on tenure
  • Free food: Enjoy daily breakfast and lunch at some of our offices, plus an always-stocked snack bar
  • Full-time

Staff Data Engineer

We’re looking for a Staff Data Engineer to own the design, scalability, and reli...
Location:
United States, San Jose
Salary:
$150,000 – $250,000 USD per year
Figure
Expiration Date
Until further notice
Requirements:
  • Experience owning or architecting large-scale data platforms — ideally in EV, autonomous driving, or robotics fleet environments, where telemetry, sensor data, and system metrics are core to product decisions
  • Deep expertise in data engineering and architecture (data modeling, ETL orchestration, schema design, transformation frameworks)
  • Strong foundation in Python, SQL, and modern data stacks (dbt, Airflow, Kafka, Spark, BigQuery, ClickHouse, or Snowflake)
  • Experience building data quality, validation, and observability systems to detect regressions, schema drift, and missing data
  • Excellent communication skills — able to understand technical needs from domain experts (controls, perception, operations) and translate complex data patterns into clear, actionable insights for engineers and leadership
  • First-principles understanding of electrical and mechanical systems, including motors, actuators, encoders, and control loops
Job Responsibility:
  • Architect and evolve Figure’s end-to-end platform data pipeline — from robot telemetry ingestion to warehouse transformation and visualization
  • Improve and maintain existing ETL/ELT pipelines for scalability, reliability, and observability
  • Detect and mitigate data regressions, schema drift, and missing data via validation and anomaly-detection frameworks
  • Identify and close gaps in data coverage, ensuring high-fidelity metrics coverage across releases and subsystems
  • Define the tech stack and architecture for the next generation of our data warehouse, transformation framework, and monitoring layer
  • Collaborate with robotics domain experts (controls, perception, Guardian, fall-prevention) to turn raw telemetry into structured metrics that drive engineering/business decisions
  • Partner with fleet management, operators, and leadership to design and communicate fleet-level KPIs, trends, and regressions in clear, actionable ways
  • Enable self-service access to clean, documented datasets for engineers
  • Develop tools and interfaces that make fleet data accessible and explorable for engineers without deep data backgrounds
  • Full-time

Staff Data Engineer

A VC-backed retail AI scale-up is expanding its engineering team and is looking ...
Location:
United States
Salary:
Not provided
Orbis Consultants
Expiration Date
Until further notice
Requirements:
  • 5+ years in software development and data engineering with ownership of production-grade systems
  • Proven expertise in Spark (Databricks, EMR, or similar) and scaling it in production
  • Strong knowledge of distributed computing and modern data modeling approaches
  • Solid programming skills in Python, with an emphasis on clean, maintainable code
  • Hands-on experience with SQL and NoSQL databases (e.g., PostgreSQL, DynamoDB, Cassandra)
  • Excellent communicator who can influence and partner across teams
Job Responsibility:
  • Design and evolve distributed, cloud-based data infrastructure that supports both real-time and batch processing at scale
  • Build high-performance data pipelines that power analytics, AI/ML workloads, and integrations with third-party platforms
  • Champion data reliability, quality, and observability, introducing automation and monitoring across pipelines
  • Collaborate closely with engineering, product, and AI teams to deliver data solutions for business-critical initiatives
What we offer:
  • Fully remote
  • Great equity


Staff Data Engineer

We are seeking a Staff Data Engineer to architect and lead our entire data infra...
Location:
United States, New York; San Francisco
Salary:
$170,000 – $210,000 USD per year
Taskrabbit
Expiration Date
Until further notice
Requirements:
  • 7-10 years of experience in Data Engineering
  • Expertise in building and maintaining ELT data pipelines using modern tools such as dbt, Airflow, and Fivetran
  • Deep experience with cloud data warehouses such as Snowflake, BigQuery, or Redshift
  • Strong data modeling skills (e.g., dimensional modeling, star/snowflake schemas) to support both operational and analytical workloads
  • Proficient in SQL and at least one general-purpose programming language (e.g., Python, Java, or Scala)
  • Experience with streaming data platforms (e.g., Kafka, Kinesis, or equivalent) and real-time data processing patterns
  • Familiarity with infrastructure-as-code tools like Terraform and DevOps practices for managing data platform components
  • Hands-on experience with BI and semantic layer tools such as Looker, Mode, Tableau, or equivalent
Job Responsibility:
  • Design, build, and maintain scalable, reliable data pipelines and infrastructure to support analytics, operations, and product use cases
  • Develop and evolve dbt models, semantic layers, and data marts that enable trustworthy, self-serve analytics across the business
  • Collaborate with non-technical stakeholders to deeply understand their business needs and translate them into well-defined metrics and analytical tools
  • Lead architectural decisions for our data platform, ensuring it is performant, maintainable, and aligned with future growth
  • Build and maintain data orchestration and transformation workflows using tools like Airflow, dbt, and Snowflake (or equivalent)
  • Champion data quality, documentation, and observability to ensure high trust in data across the organization
  • Mentor and guide other engineers and analysts, promoting best practices in both data engineering and analytics engineering disciplines
What we offer:
  • Employer-paid health insurance
  • 401k match with immediate vesting
  • Generous and flexible time off with 2 company-wide closure weeks
  • Taskrabbit product stipends
  • Wellness + productivity + education stipends
  • IKEA discounts
  • Reproductive health support
  • Full-time

Software Engineer Staff - Data Scientist

Designs, develops, troubleshoots and debugs software programs for software enhan...
Location:
India, Bangalore
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements:
  • Master's or PhD in Computer Science, Electrical Engineering, Statistics, Applied Math, or an equivalent field with a strong mathematical background
  • Proficiency in Python, R, SQL, or other programming languages for data analysis
  • Experience with data wrangling, manipulation, and visualization tools and libraries such as pandas, numpy, scikit-learn, matplotlib, seaborn, etc.
  • Knowledge of machine learning concepts and techniques such as supervised and unsupervised learning, regression, classification, clustering, dimensionality reduction, etc.
  • Familiarity with cloud computing platforms and services such as AWS, Azure, or Google Cloud
  • Strong analytical and problem-solving skills
Job Responsibility:
  • Collect, clean, and transform data from various sources and formats for model training
  • Perform exploratory data analysis and visualization to understand patterns and trends
  • Build, test, and deploy predictive models and algorithms using appropriate tools and frameworks to a production cloud environment
  • Communicate findings and recommendations to stakeholders and clients using clear and compelling reports and presentations
  • Collaborate with other data scientists, engineers, and domain experts on cross-functional projects
What we offer:
  • Comprehensive suite of benefits that supports physical, financial and emotional wellbeing
  • Programs for personal and professional development
  • Inclusive environment that celebrates individual uniqueness
  • Full-time

Staff Software Engineer - Cloud Data Storage

Cloud Data Store (CDS) owns the storage, retrieval, and lifecycle of all workflo...
Location:
United States
Salary:
$190,000 – $265,000 USD per year
Temporal
Expiration Date
Until further notice
Requirements:
  • 5 or more years of experience as an 'Arranger' and/or 'Builder/Enhancer' of highly scalable distributed systems
  • Solid computer science fundamentals in distributed systems concepts including multi-threading and concurrency
  • Experience writing concurrent code in production with languages like Go, Java, or other applicable languages, at an advanced or expert skill level (or the high end of intermediate)
  • Experience building and running services on AWS
Job Responsibility:
  • Design & build distributed data systems – craft APIs, schemas, and replication paths that keep petabytes of workflow history durable and queryable; clearly document design choices and operational knowledge needed to deploy and run the service with those features
  • Drive reliability & performance – own SLOs, create chaos-test plans, profile hot paths, and lead incident reviews
  • Technical leadership – break down roadmap epics, mentor mid-level engineers, steward design docs through RFC
  • Cross-team collaboration – partner with the Server, Cloud, and DX teams to land features end-to-end
What we offer:
  • Unlimited PTO, 12 Holidays + 2 Floating Holidays
  • 100% Premiums Coverage for Medical, Dental, and Vision
  • AD&D, LT & ST Disability, and Life Insurance (Standard & Supplemental Available)
  • Empower 401K Plan
  • Additional Perks for Learning & Development, Lifestyle Spending, In-Home Office Setup, Professional Memberships, WFH Meals, Internet Stipend and more
  • $3,600 / Year Work from Home Meals
  • $1,500 / Year Career Development & Learning
  • $1,200 / Year Lifestyle Spending Account
  • $1,000 / Year In-Home Office Setup (In addition to Temporal issued equipment)
  • $500 / Year Professional Memberships
  • Full-time