We are seeking a senior software engineer to help grow our data platform as Strava scales. Data is a critical driver of decisions that benefit both our athletes and the business. The Strava Data Platform serves as the foundation for this decision-making, supporting every part of the company with infrastructure that enables rich data analysis. We strive to build a platform that enables self-service for a variety of use cases while maintaining strong governance and reliability.
Job Responsibilities:
Collaborate with people across teams and functions who hold a deep curiosity for data
Work with large data systems at Strava's global scale, supporting functions including analytics, AI/ML, engineering, and finance
Help strengthen our infrastructure as we grow
Deliver more value through software by leaning into tooling and automation rather than repetitive toil
Grow your expertise in the steadily evolving data technology ecosystem
Build scalable software solutions to existing data problems using modern data technologies
Write high-quality, reliable code that supports our end-user experience
Treat data security and privacy as matters of the utmost importance
Hold empathy for the users of our platform to truly understand the challenges we address for them
Foster an inclusive and motivating team culture that helps everyone achieve their best
Requirements:
3+ years of experience developing data-intensive software using languages like Python, Scala, Java, Go, or Ruby
Ability to evaluate and adopt new technologies as business needs evolve
Comfortable reading and reasoning about SQL queries in data pipeline contexts (e.g., dbt models)
Understanding of how transformations impact downstream systems
Hands-on experience working with distributed data processing tools (e.g., Spark, Flink, Kafka) on production datasets; a brief illustrative sketch follows this list
Understanding of tradeoffs and appropriate use cases for data processing tools
Experience building or maintaining data pipelines using cloud data warehouses (e.g., Snowflake, BigQuery, Redshift), data lakes (e.g., Iceberg, Hudi), or similar solutions
Understanding of performance optimization and cost considerations
Understanding of the underlying infrastructure needed to serve production data platforms (e.g., Kubernetes, AWS, GCP, Azure)
Experience deploying and managing data infrastructure components like clusters, storage systems, and compute resources
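To give a flavor of the pipeline work described above, here is a minimal, hypothetical sketch in Python using PySpark. It is not Strava's actual stack or schema: the paths, table names, and columns (activity_events, daily_activity_counts, sport_type, started_at) are invented for illustration only.

from pyspark.sql import SparkSession

# Illustrative sketch only: all paths, table names, and columns below
# are hypothetical and not part of Strava's actual platform.
spark = SparkSession.builder.appName("daily-activity-rollup").getOrCreate()

# Load raw event data (placeholder path) and expose it to Spark SQL.
events = spark.read.parquet("s3://example-bucket/activity_events/")
events.createOrReplaceTempView("activity_events")

# A SQL transformation of the kind a dbt model might express:
# aggregate raw events into a daily per-sport rollup.
daily_counts = spark.sql("""
    SELECT
        sport_type,
        DATE(started_at) AS activity_date,
        COUNT(*) AS activity_count
    FROM activity_events
    GROUP BY sport_type, DATE(started_at)
""")

# Write the rollup for downstream consumers (dashboards, ML features).
# Renaming or dropping a column here would break those consumers, which
# is why reasoning about downstream impact matters.
(daily_counts.write
    .mode("overwrite")
    .partitionBy("activity_date")
    .parquet("s3://example-bucket/daily_activity_counts/"))

The same rollup could equally be expressed as a dbt model over a warehouse table; the point is being comfortable reading SQL transformations and reasoning about their downstream effects.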