Software Engineer, Data Engineering

Robinhood

Location:
Toronto, Canada

Contract Type:
Not provided

Salary:

124000.00 - 145000.00 CAD / Year

Job Description:

Join us in building the future of finance. Our mission is to democratize finance for all. An estimated $124 trillion of assets will be inherited by younger generations in the next two decades, the largest transfer of wealth in human history. If you’re ready to be at the epicenter of this historic cultural and financial shift, keep reading.

About the team + role

With a strong and growing engineering hub in Toronto, our teams in Canada are essential to building exceptional financial products and supporting our mission to democratize finance for all. Robinhood is a metrics-driven company, and data is foundational to all key decisions, from growth strategy to product optimization to day-to-day operations.

We are looking for a Software Engineer, Data Engineering to build and maintain the foundational datasets that allow us to reliably and efficiently power decision making at Robinhood. These datasets include application events, database snapshots, and the derived datasets that describe and track Robinhood's key metrics across all products. You’ll partner closely with engineers, data scientists, and business teams to power analytics, experimentation, and machine learning use cases.

We are a fast-paced team in a fast-growing company, and this is a unique opportunity to help lay the foundation for reliable, impactful, data-driven decisions across the company for years to come.

Job Responsibility:

  • Help define and build key datasets across all Robinhood product areas. Lead the evolution of these datasets as use cases grow
  • Build scalable data pipelines using Python, Spark and Airflow to move data from different applications into our data lake
  • Partner with upstream engineering teams to enhance data generation patterns
  • Partner with data consumers across Robinhood to understand consumption patterns and design intuitive data models
  • Ideate and contribute to shared data engineering tooling and standards
  • Define and promote data engineering best practices across the company
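
The pipeline work described in the responsibilities above generally follows an extract-transform-load shape: read raw application events for a daily partition, aggregate them into a derived dataset, and write the result to the data lake. As a minimal sketch only (pure Python, no Spark or Airflow; the event data, field names, and daily partitioning scheme are hypothetical):

```python
from datetime import date

# Hypothetical raw application events, standing in for what a pipeline
# would read from an upstream source for one daily partition.
RAW_EVENTS = [
    {"user_id": 1, "event": "trade_placed", "ts": "2025-12-11T09:30:00"},
    {"user_id": 1, "event": "trade_placed", "ts": "2025-12-11T10:05:00"},
    {"user_id": 2, "event": "app_open", "ts": "2025-12-11T11:00:00"},
]


def extract(partition: date) -> list[dict]:
    """Read one day's worth of raw events (stubbed with in-memory data)."""
    return [e for e in RAW_EVENTS if e["ts"].startswith(partition.isoformat())]


def transform(events: list[dict]) -> dict:
    """Aggregate raw events into a derived per-event-type count."""
    counts: dict = {}
    for e in events:
        counts[e["event"]] = counts.get(e["event"], 0) + 1
    return counts


def load(metrics: dict, partition: date) -> str:
    """Publish the derived dataset; here we just format a partition key."""
    return f"metrics/dt={partition.isoformat()}: {metrics}"


if __name__ == "__main__":
    day = date(2025, 12, 11)
    print(load(transform(extract(day)), day))
```

In a real deployment each of these steps would typically be a Spark job scheduled as an Airflow task, with the partition date supplied by the scheduler rather than hard-coded.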

Requirements:

  • 3+ years of professional experience building end-to-end data pipelines
  • Hands-on software engineering experience, with the ability to write production-level code in Python for user-facing applications, services, or systems (not just data scripting or automation)
  • Expert at building and maintaining large-scale data pipelines using open-source frameworks (Spark, Flink, etc.)
  • Strong SQL skills (Presto, Spark SQL, etc.)
  • Experience solving problems across the data stack (Data Infrastructure, Analytics and Visualization platforms)
  • Expert collaborator with the ability to democratize data through actionable insights and solutions

Nice to have:

  • Passion for working and learning in a fast-growing company

What we offer:
  • bonus opportunities
  • equity
  • benefits

Additional Information:

Job Posted:
December 11, 2025

Employment Type:
Full-time
Work Type:
On-site work

Similar Jobs for Software Engineer, Data Engineering

Software Engineer - Data Engineering

Akuna Capital is a leading proprietary trading firm specializing in options mark...
Location:
Chicago, United States
Salary:
130000.00 USD / Year
AKUNA CAPITAL
Expiration Date:
Until further notice
Requirements:
  • BS/MS/PhD in Computer Science, Engineering, Physics, Math, or equivalent technical field
  • 5+ years of professional experience developing software applications
  • Java/Scala experience required
  • Highly motivated and willing to take ownership of high-impact projects upon arrival
  • Prior hands-on experience with data platforms and technologies such as Delta Lake, Spark, Kubernetes, Kafka, ClickHouse, and/or Presto/Trino
  • Experience building large-scale batch and streaming pipelines with strict SLA and data quality requirements
  • Must possess excellent communication, analytical, and problem-solving skills
  • Recent hands-on experience with AWS Cloud development, deployment and monitoring necessary
  • Demonstrated experience working on an Agile team employing software engineering best practices, such as GitOps and CI/CD, to deliver complex software projects
  • The ability to react quickly and accurately to rapidly changing market conditions, including the ability to quickly and accurately respond and/or solve math and coding problems are essential functions of the role
Job Responsibility:
  • Work within a growing Data Engineering division supporting the strategic role of data at Akuna
  • Drive the ongoing design and expansion of our data platform across a wide variety of data sources, supporting an array of streaming, operational and research workflows
  • Work closely with Trading, Quant, Technology & Business Operations teams throughout the firm to identify how data is produced and consumed, helping to define and deliver high impact projects
  • Build and deploy batch and streaming pipelines to collect and transform our rapidly growing Big Data set within our hybrid cloud architecture utilizing Kubernetes/EKS, Kafka/MSK and Databricks/Spark
  • Mentor junior engineers in software and data engineering best practices
  • Produce clean, well-tested, and documented code with a clear design to support mission critical applications
  • Build automated data validation test suites that ensure that data is processed and published in accordance with well-defined Service Level Agreements (SLA’s) pertaining to data quality, data availability and data correctness
  • Challenge the status quo and help push our organization forward, as we grow beyond the limits of our current tech stack
What we offer:
  • Discretionary performance bonus
  • Comprehensive benefits package that may encompass employer-paid medical, dental, vision, retirement contributions, paid time off, and other benefits
  • Full-time

Senior Software Engineer - Data Integration & JVM Ecosystem

The Connectors team is the bridge between ClickHouse and the entire data ecosyst...
Location:
Germany
Salary:
Not provided
ClickHouse
Expiration Date:
Until further notice
Requirements:
  • 6+ years of software development experience focusing on building and delivering high-quality, data-intensive solutions
  • Proven experience with the internals of at least one of the following technologies: Apache Spark, Apache Flink, Kafka Connect, or Apache Beam
  • Experience developing or extending connectors, sinks, or sources for at least one big data processing framework such as Apache Spark, Flink, Beam, or Kafka Connect
  • Strong understanding of database fundamentals: SQL, data modeling, query optimization, and familiarity with OLAP/analytical databases
  • A track record of building scalable data integration systems (beyond simple ETL jobs)
  • Strong proficiency in Java and the JVM ecosystem, including deep knowledge of memory management, garbage collection tuning, and performance profiling
  • Solid experience with concurrent programming in Java, including threads, executors, and reactive or asynchronous patterns
  • Outstanding written and verbal communication skills to collaborate effectively within the team and across engineering functions
  • Understanding of JDBC, network protocols (TCP/IP, HTTP), and techniques for optimizing data throughput over the wire
  • Passion for open-source development
Job Responsibility:
  • Own and maintain critical parts of ClickHouse's Data engineering ecosystem
  • Own the full lifecycle of data framework integrations - from the core database driver to SDKs and connectors
  • Build the foundation that thousands of Data engineers rely on for their most critical data workloads
  • Collaborate closely with the open-source community, internal teams, and enterprise users to ensure our JVM integrations set the standard for performance, reliability, and developer experience
What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – opportunities to engage with colleagues at company-wide offsites

Senior Software Engineer - Data Integration & JVM Ecosystem

The Connectors team is the bridge between ClickHouse and the entire data ecosyst...
Location:
United Kingdom
Salary:
Not provided
ClickHouse
Expiration Date:
Until further notice
Requirements:
  • 6+ years of software development experience focusing on building and delivering high-quality, data-intensive solutions
  • Proven experience with the internals of at least one of the following technologies: Apache Spark, Apache Flink, Kafka Connect, or Apache Beam
  • Experience developing or extending connectors, sinks, or sources for at least one big data processing framework such as Apache Spark, Flink, Beam, or Kafka Connect
  • Strong understanding of database fundamentals: SQL, data modeling, query optimization, and familiarity with OLAP/analytical databases
  • A track record of building scalable data integration systems (beyond simple ETL jobs)
  • Strong proficiency in Java and the JVM ecosystem, including deep knowledge of memory management, garbage collection tuning, and performance profiling
  • Solid experience with concurrent programming in Java, including threads, executors, and reactive or asynchronous patterns
  • Outstanding written and verbal communication skills to collaborate effectively within the team and across engineering functions
  • Understanding of JDBC, network protocols (TCP/IP, HTTP), and techniques for optimizing data throughput over the wire
  • Passion for open-source development
Job Responsibility:
  • Serve as a core contributor, owning and maintaining critical parts of ClickHouse's Data engineering ecosystem
  • Own the full lifecycle of data framework integrations - from the core database driver that handles billions of records per second, to SDKs and connectors that make ClickHouse feel native in JVM-based applications
  • Build the foundation that thousands of Data engineers rely on for their most critical data workloads
  • Collaborate closely with the open-source community, internal teams, and enterprise users to ensure our JVM integrations set the standard for performance, reliability, and developer experience
What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly. We currently operate in 20 countries
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – We believe in the power of in-person connection and offer opportunities to engage with colleagues at company-wide offsites
  • Full-time

Software Engineer (Data Exchange)

We are looking for passionate, curious, and resourceful Software Engineers to jo...
Location:
Bangkok, Thailand
Salary:
Not provided
EarnIn
Expiration Date:
Until further notice
Requirements:
  • 3+ years of software development experience in a fast-paced environment
  • Bachelor's, Master’s, or PhD degree in computer science, computer engineering, or a related technical discipline, or equivalent industry experience
  • Proficient in at least one modern programming language, such as C#, Kotlin, JavaScript, or Python
  • Experience working with relational or NoSQL databases (e.g., PostgreSQL, DynamoDB, MySQL)
  • Familiarity with continuous integration and delivery tools
  • Experience writing and executing functional or integration tests
  • Strong communication skills and a collaborative mindset
  • Ability to learn quickly and thrive in a dynamic environment with a bias toward action and results
Job Responsibility:
  • Contribute to the design and implementation of backend features that support EarnIn’s growth
  • Break down well-defined problems into clear, actionable tasks and deliver high-quality, maintainable code
  • Build and maintain APIs that support our client applications and backend systems
  • Write and improve automated tests to support continuous integration and development velocity
  • Collaborate closely with senior engineers, participating in code reviews and learning best practices in design and architecture
  • Help debug issues across services with guidance from more experienced engineers
  • Continuously learn new technologies and contribute to improving our backend systems
  • Care about writing reliable, production-quality code and learning how to build distributed systems and services
What we offer:
  • healthcare
  • internet/cell phone reimbursement
  • a learning and development stipend
  • opportunities to travel to our Mountain View HQ

Software Engineer, Data Infrastructure

The Data Infrastructure team at Figma builds and operates the foundational platf...
Location:
San Francisco or New York, United States
Salary:
149000.00 - 350000.00 USD / Year
Figma
Expiration Date:
Until further notice
Requirements:
  • 5+ years of Software Engineering experience, specifically in backend or infrastructure engineering
  • Experience designing and building distributed data infrastructure at scale
  • Strong expertise in batch and streaming data processing technologies such as Spark, Flink, Kafka, or Airflow/Dagster
  • A proven track record of impact-driven problem-solving in a fast-paced environment
  • A strong sense of engineering excellence, with a focus on high-quality, reliable, and performant systems
  • Excellent technical communication skills, with experience working across both technical and non-technical counterparts
  • Experience mentoring and supporting engineers, fostering a culture of learning and technical excellence
Job Responsibility:
  • Design and build large-scale distributed data systems that power analytics, AI/ML, and business intelligence
  • Develop batch and streaming solutions to ensure data is reliable, efficient, and scalable across the company
  • Manage data ingestion, movement, and processing through core platforms like Snowflake, our ML Datalake, and real-time streaming systems
  • Improve data reliability, consistency, and performance, ensuring high-quality data for engineering, research, and business stakeholders
  • Collaborate with AI researchers, data scientists, product engineers, and business teams to understand data needs and build scalable solutions
  • Drive technical decisions and best practices for data ingestion, orchestration, processing, and storage
What we offer:
  • equity
  • health, dental & vision
  • retirement with company contribution
  • parental leave & reproductive or family planning support
  • mental health & wellness benefits
  • generous PTO
  • company recharge days
  • a learning & development stipend
  • a work from home stipend
  • cell phone reimbursement
  • Full-time

Senior Software Engineer, Data Products

As a Senior Software Engineer, you will play a pivotal role in the development o...
Location:
Los Angeles, United States
Salary:
143000.00 - 180000.00 USD / Year
Fox Corporation
Expiration Date:
Until further notice
Requirements:
  • Experience working in Software Engineering, Data Science, ML Engineering
  • Strong background in live media streaming and handling VOD content
  • Experience working with vector databases
  • Strong understanding of generative AI technologies and their underlying mechanisms
  • Good grasp of distributed system design
  • Experience with frameworks such as TensorFlow and PyTorch
  • REST or GraphQL API design experience
  • Proficient with building batch and streaming data pipelines on cloud platforms
Job Responsibility:
  • Design and implement novel and scalable AI solutions for real business problems
  • Design and implement workflows to generate and manage assets for live streaming and VOD
  • Build workflow orchestrations that can be readily extended to perform new analyses
  • Prototype new approaches and productionize solutions at scale for hundreds of millions of active users
  • Maintain high-level craftsmanship while delivering meaningful results
  • Mentor junior engineers on the team
  • Collaborate with peers, engineering leadership, and product management
What we offer:
  • Annual discretionary bonus
  • Medical/dental/vision insurance
  • 401(k) plan
  • Paid time off
  • Full-time

Senior Software Engineer, Data Products

As a Senior Software Engineer, you will play a pivotal role in the development o...
Location:
Los Angeles, United States
Salary:
143000.00 - 180000.00 USD / Year
Fox News Media
Expiration Date:
Until further notice
Requirements:
  • Experience working in Software Engineering, Data Science, ML Engineering
  • Strong background in live media streaming and handling VOD content
  • Experience working with vector databases
  • Strong understanding of generative AI technologies and their underlying mechanisms
  • Good grasp of distributed system design
  • Experience with frameworks such as TensorFlow and PyTorch
  • REST or GraphQL API design experience
  • Proficient with building batch and streaming data pipelines on cloud platforms
Job Responsibility:
  • Design and implement novel and scalable AI solutions for real business problems
  • Design and implement workflows to generate and manage assets for live streaming and VOD
  • Build workflow orchestrations that can be readily extended to perform new analyses
  • Prototype new approaches and productionize solutions at scale for hundreds of millions of active users
  • Maintain high-level craftsmanship while delivering meaningful results
  • Mentor junior engineers on the team
  • Collaborate with peers, engineering leadership, and product management
What we offer:
  • Annual discretionary bonus
  • Medical/dental/vision insurance
  • 401(k) plan
  • Paid time off
  • Full-time

Software Engineer 2 / Senior Software Engineer

We are looking for experienced Software Engineers for our Bangalore location ...
Location:
Bengaluru, India
Salary:
Not provided
Komprise, Inc.
Expiration Date:
Until further notice
Requirements:
  • Solid grasp of computer science fundamentals and especially data structures, algorithms, multi-threading
  • Ability to solve difficult problems with a simple elegant solution
  • Solid object-oriented programming background with impeccable design skills
  • Experience in developing management applications and performance management applications is ideal
  • Experience with object-based file systems and REST interfaces is a plus (e.g. Amazon S3, Azure, Google Cloud Service)
  • BE or higher in CS, EE, Math, or a related engineering or science field
  • 5+ years of experience in software deployment
  • Tech stack: Java, Maven, Virtualisation, SaaS, GitHub, Jira, Slack, cloud solutions and hypervisors
Job Responsibility:
  • Design and develop features that power the Komprise data management platform to manage billions of files and petabytes of data
  • Design major components and systems of our product architecture, ensuring that the Komprise data management platform is highly available and scalable
  • Write performant code, evaluate feasibility, develop for quality, and optimize for maintainability
  • Work in an agile, customer-focused, fast-paced team with direct interaction with customers
  • Analyse customer-escalated issues and provide resolutions in a timely manner
  • Design and implement highly performant, scalable distributed systems