Join us in building the future of finance. Our mission is to democratize finance for all. An estimated $124 trillion of assets will be inherited by younger generations in the next two decades, the largest transfer of wealth in human history. If you're ready to be at the epicenter of this historic cultural and financial shift, keep reading.

About the team + role:

With a strong and growing engineering hub in Toronto, our teams in Canada are essential to building exceptional financial products and supporting our mission to democratize finance for all. Robinhood is a metrics-driven company, and data is foundational to all key decisions, from growth strategy to product optimization to day-to-day operations.

We are looking for a Software Engineer, Data Engineering to build and maintain the foundational datasets that allow us to reliably and efficiently power decision making at Robinhood. These datasets include application events, database snapshots, and the derived datasets that describe and track Robinhood's key metrics across all products. You'll partner closely with engineers, data scientists, and business teams to power analytics, experimentation, and machine learning use cases. We are a fast-paced team in a fast-growing company, and this is a unique opportunity to help lay the foundation for reliable, impactful, data-driven decisions across the company for years to come.
Job Responsibilities:
Help define and build key datasets across all Robinhood product areas. Lead the evolution of these datasets as use cases grow
Build scalable data pipelines using Python, Spark, and Airflow to move data from different applications into our data lake (see the sketch after this list)
Partner with upstream engineering teams to enhance data generation patterns
Partner with data consumers across Robinhood to understand consumption patterns and design intuitive data models
Ideate and contribute to shared data engineering tooling and standards
Define and promote data engineering best practices across the company
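To make the stack named in the responsibilities concrete, here is a minimal, purely illustrative sketch of an Airflow DAG that submits a PySpark job loading application events into a data lake. Every name in it (DAG id, connection id, script path, Spark settings) is a hypothetical placeholder, not Robinhood's actual pipeline; it assumes Airflow 2.x with the apache-spark provider installed.

# Hypothetical example only: all identifiers below are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="app_events_to_data_lake",   # hypothetical pipeline name
    schedule="@daily",                  # run once per day
    start_date=datetime(2024, 1, 1),
    catchup=False,
    tags=["data-engineering", "example"],
) as dag:
    # Submit a PySpark script that reads raw application events and writes
    # partitioned Parquet to the lake; the script path and conf are placeholders.
    load_events = SparkSubmitOperator(
        task_id="load_app_events",
        application="jobs/load_app_events.py",  # hypothetical PySpark script
        conn_id="spark_default",                # Airflow's default Spark connection
        conf={"spark.sql.shuffle.partitions": "200"},
    )

In a setup like this, Airflow handles scheduling and retries while the Spark job does the heavy lifting of moving and transforming event data, which is one common way the Python/Spark/Airflow combination mentioned above fits together.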
Requirements:
3+ years of professional experience building end-to-end data pipelines
Hands-on software engineering experience, with the ability to write production-level code in Python for user-facing applications, services, or systems (not just data scripting or automation)
Expert at building and maintaining large-scale data pipelines using open-source frameworks (e.g., Spark, Flink)
Strong SQL skills (e.g., Presto, Spark SQL)
Experience solving problems across the data stack (data infrastructure, analytics, and visualization platforms)
Expert collaborator with the ability to democratize data through actionable insights and solutions
Nice to have:
Passion for working and learning in a fast-growing company