Join us in building the future of finance. Our mission is to democratize finance for all. An estimated $124 trillion of assets will be inherited by younger generations over the next two decades, the largest transfer of wealth in human history. If you're ready to be at the epicenter of this historic cultural and financial shift, keep reading.

About the team + role
With a strong and growing engineering hub in Toronto, our teams in Canada are essential to building exceptional financial products and supporting our mission to democratize finance for all. Robinhood is a metrics-driven company where data is foundational to every decision, from long-term strategy to daily operations.

As a Software Developer on our Data Engineering team, you'll help build and maintain the core datasets that power analytics, experimentation, and machine learning across Robinhood. These foundational data assets include application events, database snapshots, and derived metrics that track the performance of our products. You'll work closely with engineers, data scientists, and business teams to design scalable, intuitive pipelines and tooling. This is a unique opportunity to influence how data supports our decision-making for years to come.
Job Responsibilities:
Help define and build key datasets across all Robinhood product areas. Lead the evolution of these datasets as use cases grow
Build scalable data pipelines using Python, Spark, and Airflow to move data from different applications into our data lake (a minimal illustrative sketch follows this list)
Partner with upstream engineering teams to enhance data generation patterns
Partner with data consumers across Robinhood to understand consumption patterns and design intuitive data models
Ideate and contribute to shared data engineering tooling and standards
Define and promote data engineering best practices across the company
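For context on the pipeline work described above, here is a minimal sketch of what a Python/Spark/Airflow pipeline feeding a data lake can look like. It is illustrative only: the DAG id, file paths, and connection id are hypothetical placeholders, not Robinhood's actual systems or code.

```python
# Illustrative sketch only: a daily Airflow DAG that submits a Spark job to copy
# raw application events into a data-lake path. All names below (DAG id, script
# path, connection id) are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="app_events_to_data_lake",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load_events = SparkSubmitOperator(
        task_id="load_app_events",
        application="jobs/load_app_events.py",   # hypothetical PySpark script
        conn_id="spark_default",
        application_args=["--ds", "{{ ds }}"],   # pass the run date for partitioning
    )
```

In a setup like this, the referenced PySpark script would typically read the day's raw application events and write them as date-partitioned files (e.g., Parquet) into the data lake, where downstream analytics and derived-metric jobs can consume them.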
Requirements:
3+ years of professional experience building end-to-end data pipelines
Hands-on software engineering experience, with the ability to write production-level code in Python for user-facing applications, services, or systems (not just data scripting or automation)
Expert at building and maintaining large-scale data pipelines using open-source frameworks (Spark, Flink, etc.)
Strong SQL skills (Presto, Spark SQL, etc.)
Experience solving problems across the data stack (data infrastructure, analytics, and visualization platforms)
Expert collaborator with the ability to democratize data through actionable insights and solutions