As a Data Engineer, your job is to enable data analytics and measurement at scale at Replit. You'll work with product and business teams to build data pipelines and transformations that help us understand and measure product usage. You'll also make our data scientists and analysts, and the business decisions that depend on them, more effective and efficient.
Job Responsibilities:
Design, build, and maintain scalable data pipelines that power analytics and data-driven decision-making across Replit
Develop ETL/ELT workflows using modern data stack tools and transform raw data into clean, reliable datasets that enable self-service analytics
Partner with teams across the company to understand data needs, deliver robust solutions, and implement data quality monitoring to ensure accuracy and reliability
Build unified data models combining product usage, billing, and customer data to enable cohort analysis and retention tracking
Design real-time pipelines that surface key metrics, along with automated data quality checks that catch inconsistencies before they impact downstream users
Create dimensional models that enable flexible analysis of user behavior, feature adoption, and conversion funnels
Requirements:
5+ years of experience building production data pipelines
Strong SQL skills and experience designing data models
Experience with modern data transformation tools (dbt preferred)
Proficiency in Python
Hands-on experience with cloud data warehouses (BigQuery, Snowflake, Redshift)
Understanding of data warehouse design principles
Ability to communicate effectively with both technical and non-technical stakeholders
Nice to have:
Experience with modern data stack tools (dbt, Fivetran, Segment, HEX, Databricks, Amplitude)
Background in high-growth SaaS or PLG companies
Familiarity with event-based analytics platforms, data visualization tools, and software engineering best practices
Experience with real-time data processing, reverse ETL tools, or developer tools and collaborative coding environments
Knowledge of data governance frameworks or machine learning pipelines and feature engineering