Patreon is looking for a Staff Software Engineer to support our mission. The Data Engineering team at Patreon builds pipelines, models, and tooling that power both customer-facing and internal data products. As a Staff Software Engineer on the team, you’ll architect and scale the data foundation that underpins our creator analytics product, discovery and safety ML systems, internal product analytics, executive reporting, experimentation, and company-wide decision-making.
Job Responsibilities:
Design, build, and maintain the pipelines that power all data use cases
Develop intuitive, performant, and scalable data models (facts, dimensions, aggregations) that support product features, internal analytics, experimentation, and machine learning workloads
Implement robust batch and streaming pipelines using Spark, Python, and Airflow
Define and enforce standards for accuracy, completeness, lineage, and dependency management
Work with Product, Data Science, Infrastructure, Finance, Marketing, and Sales to turn ambiguous questions into well-scoped, high-impact data solutions
Pay down technical debt, improve automation, and drive best practices in data modeling, testing, and reliability
Mentor peers and help shape the future of Patreon’s data
Requirements:
6+ years of experience in software development
2+ years of experience building scalable, production-grade data pipelines
Expert-level proficiency in SQL and distributed data processing tools like Spark, Flink, Kafka Streams, or similar
Strong programming foundations in Python or a similar language, with sound software engineering practices (testing, CI/CD, monitoring)
Expert in modern data lake table formats (e.g., Delta Lake, Iceberg)
Familiar with data warehouses (e.g., Snowflake, Redshift, BigQuery) and production data stores, including relational databases (e.g., MySQL, PostgreSQL), object storage (e.g., S3), key-value stores (e.g., DynamoDB), and message queues (e.g., Kinesis, Kafka)
Excellent collaboration and communication skills
Understanding of data modeling and metric design principles
Passionate about data quality, system reliability, and empowering others through well-crafted data assets
Highly motivated self-starter who thrives in a collaborative, fast-paced environment and takes pride in high-craft, high-impact work
Bachelor’s degree in Computer Science, Computer Engineering, or a related field, or equivalent experience