Software Engineer, Data Governance

Robinhood

Location:
United States, Bellevue

Contract Type:
Not provided

Salary:

166000.00 - 195000.00 USD / Year

Job Description:

Join us in building the future of finance. Our mission is to democratize finance for all. The Data Governance team’s mission is to protect and responsibly manage customer data across Robinhood, ensuring trust, transparency, and compliance in every data flow. We enable safe and compliant use of data across AI/ML systems and core infrastructure, embedding privacy and governance into how products are built.

As a Software Engineer on this team, you’ll develop backend systems that monitor and govern how data is used across our platform. You’ll work on high-impact initiatives like GDPR readiness, AI/ML governance, and cross-functional enforcement of data compliance. This is a critical role, enabling safe AI innovation at Robinhood!

Job Responsibility:

  • Design and build backend services and automation frameworks that instrument, govern, and enforce data usage, retention, and access policies across Robinhood’s online and analytical systems
  • Partner with AI/ML, Risk, and Privacy teams to operationalize governance and compliance in emerging AI systems, including ML Models and Agentic AI workflows
  • Enable governance of Robinhood’s offline analytical systems and new data infrastructure workflows
  • Build internal tools and automation that strengthen our enterprise data governance posture by enabling auditability, data integrity, and privacy-respecting design across our infrastructure
  • Own end-to-end delivery of governance solutions, from design and prototyping to production deployment, driving measurable impact in data reliability, compliance readiness, and trust

Requirements:

  • Strong coding and problem-solving skills with proficiency in Python or Go (or similar languages)
  • Experience with server-side frameworks such as Django, or equivalent backend frameworks in Go
  • Familiarity with Kubernetes, AWS, and cloud-native development
  • Excellent communication skills with a proven ability to work cross-functionally
  • Curiosity and drive to navigate complex systems, regulatory requirements, and fast-changing technology

What we offer:
  • Performance-driven compensation with multipliers for outsized impact, bonus programs, equity ownership, and 401(k) matching
  • 100% paid health insurance for employees with 90% coverage for dependents
  • Lifestyle wallet — a highly flexible benefits spending account for wellness, learning, and more
  • Employer-paid life & disability insurance, fertility benefits, and mental health benefits
  • Time off to recharge including company holidays, paid time off, sick time, parental leave, and more
  • Exceptional office experience with catered meals, events, and comfortable workspaces

Additional Information:

Job Posted:
February 18, 2026

Employment Type:
Fulltime

Work Type:
Hybrid work

Similar Jobs for Software Engineer, Data Governance

Data Governance Engineer

The role focuses on deploying and managing enterprise-scale Data Governance prac...
Location
India, Bangalore
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements
  • 7+ years of Data Governance and Data Engineering experience, with significant exposure to enabling data availability, discovery, quality, and reliability, with appropriate security and access controls in an enterprise-scale ecosystem
  • First-level university degree
  • Experience working with Data governance & metadata management tools (Collibra, Databricks Unity Catalog, Atlan, etc.)
  • Subject matter expertise of consent management concepts and tools
  • Demonstrated knowledge of research methodology and the ability to manage complex data requests
  • Excellent analytical thinking, technical analysis, and data manipulation skills
  • Proven track record of development of SQL SSIS packages with ETL flow
  • Experience with AI application deployment governance a plus
  • Technologies such as MS SQL Server, Databricks, Hadoop, SAP S4/HANA
  • Experience with SQL databases and building SSIS packages
Job Responsibility
  • Drive the design and development of Data Dictionary, Lineage, Data Quality, Security & Access Control for Business-relevant data subjects & reports across business domains
  • Engage with the business users community to enable ease of Data Discovery and build trust in the data through Data Quality & Reliability monitoring with key metrics & SLAs defined
  • Support the development and sustaining of data subjects in the database layer to enable BI dashboards and AI solutions
  • Drive the engagement and alignment with the HPE IT/CDO team on Governance initiatives, including partnering with functional teams across the business
  • Test, validate and assure the quality of complex AI-powered product features
  • Partner with a highly motivated and talented set of colleagues
  • Be a motivated, self-starter who can operate with minimal handholding
  • Collaborate across teams and time zones, demonstrating flexibility and accountability
What we offer
  • Comprehensive suite of benefits supporting physical, financial and emotional wellbeing
  • Specific programs to help achieve career goals
  • Comprehensive inclusion and flexibility to manage work and personal needs
  • Fulltime

Senior Software Engineer - Data Protection

LufCo is seeking a Senior Software Engineer with a focus on Data Protection. Thi...
Location
United States, Annapolis Junction
Salary:
170000.00 - 245000.00 USD / Year
LufCo
Expiration Date
Until further notice
Requirements
  • Bachelor of Science degree in Software Engineering, Computer Science, Information Systems, or other related field
  • 4 years of relevant work experience may be substituted for a B.S. degree
  • Fourteen (14) or more years of experience as a Software Engineer on programs and contracts of similar scope
  • Languages: Java (both front-end (Swing) and back-end (servlets)), JavaScript (vanilla/jQuery), Shell Scripting (BASH), PL/SQL (Oracle)
  • Frameworks: React and Spring/Spring Boot
  • OS: Linux and Windows
  • COTS: AEM (Adobe)
  • Servers: JBoss 7.x and Tomcat
  • Active TS/SCI with Polygraph clearance
Job Responsibility
  • Drive next generation Data Protection forward utilizing commercial and government best practices for ensuring secure encryption solutions
  • Planning, implementation, and evolution of Data Protection sets for evaluation and analysis as part of existing system modernization efforts
  • Ability to see impacts of system changes at scale, minimizing technical debt and critical thinking related to strategic moves regarding Identity, Credentialing, and Access Management Solutions
  • Provide fundamental knowledge on applying technologies like containerization to legacy physical workloads, the ability to identify automation improvements, and the ability to communicate pros/cons as part of the technical decision making process
  • Demonstrate a high level of familiarity with software patterns and modern design methodology
  • Software development on Linux based platforms
  • Software planning to include development planning, build planning, and sprint planning
  • Develop software to meet cybersecurity related software requirements and constraints
  • Advocate for automation in all aspects of the system (build, deployment, test, updating, and monitoring)
  • Perform requirements analysis, refinement, testing, troubleshooting, deployment, and push secure access solutions forward to support the customer
What we offer
  • Competitive salary
  • generous PTO
  • health/dental/vision insurance
  • 401K matching
  • tuition reimbursement
  • Paid Time Off
  • 401K Contribution and Employer Match Contributions
  • Medical, Dental, and Vision Coverage
  • Impactful Work
  • Cutting-Edge Technology
  • Fulltime

Senior Software Engineer, Data Platform

We are looking for a foundational member of the Data Team to enable Skydio to ma...
Location
United States, San Mateo
Salary:
180000.00 - 240000.00 USD / Year
Skydio
Expiration Date
Until further notice
Requirements
  • 5+ years of professional experience
  • 2+ years in software engineering
  • 2+ years in data engineering with a bias towards getting your hands dirty
  • Deep experience with Databricks building pipelines, managing datasets, and developing dashboards or analytical applications
  • Proven track record of operating scalable data platforms, defining company-wide patterns that ensure reliability, performance, and cost effectiveness
  • Proficiency in SQL and at least one modern programming language (we use Python)
  • Comfort working across the full data stack — from ingestion and transformation to orchestration and visualization
  • Strong communication skills, with the ability to collaborate effectively across all levels and functions
  • Demonstrated ability to lead technical direction, mentor teammates, and promote engineering excellence and best practices across the organization
  • Familiarity with AI-assisted data workflows, including tools that accelerate data transformations or enable natural-language interfaces for analytics
Job Responsibility
  • Design and scale the data infrastructure that ingests live telemetry from tens of thousands of autonomous drones
  • Build and evolve our Databricks and Palantir Foundry environments to empower every Skydian to query data, define jobs, and build dashboards
  • Develop data systems that make our products truly data-driven — from predictive analytics that anticipate hardware failures, to 3D connectivity mapping, to in-depth flight telemetry analysis
  • Create and integrate AI-powered tools for data analysis, transformation, and pipeline generation
  • Champion a data-driven culture by defining and enforcing best practices for data quality, lineage, and governance
  • Collaborate with autonomy, manufacturing, and operations teams to unify how data flows across the company
  • Lead and mentor data engineers, analysts, and stakeholders across Skydio
  • Ensure platform reliability by implementing robust monitoring, observability, and contributing to the on-call rotation for critical data systems
What we offer
  • Equity in the form of stock options
  • Comprehensive benefits packages
  • Relocation assistance may also be provided for eligible roles
  • Paid vacation time
  • Sick leave
  • Holiday pay
  • 401K savings plan
  • Fulltime

Principal Data Engineer

PointClickCare is searching for a Principal Data Engineer who will contribute to...
Location
United States
Salary:
183200.00 - 203500.00 USD / Year
PointClickCare
Expiration Date
Until further notice
Requirements
  • Principal Data Engineer with at least 10 years of professional experience in software or data engineering, including a minimum of 4 years focused on streaming and real-time data systems
  • Proven experience driving technical direction and mentoring engineers while delivering complex, high-scale solutions as a hands-on contributor
  • Deep expertise in streaming and real-time data technologies, including frameworks such as Apache Kafka, Flink, and Spark Streaming
  • Strong understanding of event-driven architectures and distributed systems, with hands-on experience implementing resilient, low-latency pipelines
  • Practical experience with cloud platforms (AWS, Azure, or GCP) and containerized deployments for data workloads
  • Fluency in data quality practices and CI/CD integration, including schema management, automated testing, and validation frameworks (e.g., dbt, Great Expectations)
  • Operational excellence in observability, with experience implementing metrics, logging, tracing, and alerting for data pipelines using modern tools
  • Solid foundation in data governance and performance optimization, ensuring reliability and scalability across batch and streaming environments
  • Experience with Lakehouse architectures and related technologies, including Databricks, Azure ADLS Gen2, and Apache Hudi
  • Strong collaboration and communication skills, with the ability to influence stakeholders and evangelize modern data practices within your team and across the organization
Job Responsibility
  • Lead and guide the design and implementation of scalable streaming data pipelines
  • Engineer and optimize real-time data solutions using frameworks like Apache Kafka, Flink, Spark Streaming
  • Collaborate cross-functionally with product, analytics, and AI teams to ensure data is a strategic asset
  • Advance ongoing modernization efforts, deepening adoption of event-driven architectures and cloud-native technologies
  • Drive adoption of best practices in data governance, observability, and performance tuning for streaming workloads
  • Embed data quality in processing pipelines by defining schema contracts, implementing transformation tests and data assertions, enforcing backward-compatible schema evolution, and automating checks for freshness, completeness, and accuracy across batch and streaming paths before production deployment
  • Establish robust observability for data pipelines by implementing metrics, logging, and distributed tracing for streaming jobs, defining SLAs and SLOs for latency and throughput, and integrating alerting and dashboards to enable proactive monitoring and rapid incident response
  • Foster a culture of quality through peer reviews, providing constructive feedback and seeking input on your own work
What we offer
  • Benefits starting from Day 1!
  • Retirement Plan Matching
  • Flexible Paid Time Off
  • Wellness Support Programs and Resources
  • Parental & Caregiver Leaves
  • Fertility & Adoption Support
  • Continuous Development Support Program
  • Employee Assistance Program
  • Allyship and Inclusion Communities
  • Employee Recognition … and more!
  • Fulltime

Data Governance Program Manager

Digital Transformation PMO - Data Governance Program Manager. This role is accou...
Location
Singapore, Singapore
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements
  • Bachelor’s degree in IT, Computer Science, Software Engineering, Data, Business Analytics or equivalent
  • Minimum 10 years of experience in data or corporate governance setup and operationalization
  • Experience in SAP MDG, S4 HANA roll out, Data Harmonization and Data Cleansing
  • Experience in Material, Vendor, BOM master Data
  • Certification in industry standard data architecture discipline or similar (e.g. DCAM, DAMA)
  • Experience in operating under Data Office organization
  • Expert in consulting and helping business to develop data quality business rules, data catalogue, business glossary
  • Ability to develop, implement, and optimize complex data governance solutions and resolve issues
  • Experience in data privacy and protection implementations and operationalization
  • Experience in Big Data and associated platform / technology knowledge
Job Responsibility
  • Establish, develop and optimize Data Governance Framework, Policy, Process & associated business program / solutions implementations for Global Operation team
  • Define data governance, data management frameworks and solutions together with Chief Data Officer team, IT and Global Operation functional groups
  • Support business units in digital transformation journey with data governance
  • Enable Data Governance framework including managing the objectives, approach, processes, policies and procedures around data governance
  • Build robust and scalable data governance ecosystem to support business needs
  • Define data governance operational processes (e.g., data quality measurement, metadata management) in accordance with policies and standards
  • Provide expert consultation to business units to establish and maintain data policies and standards that enable use-cases
  • Provide expert consultation to assist business units in identifying and setting up critical data elements, including the setup of data lineage, data catalogue, and data quality
  • Work collaboratively & consultatively with chief data officer, business units, IT to deliver enterprise objectives around data governance
  • Identify, design, and implement internal process & framework improvements: automating manual operational processes and control for data governance implementation
What we offer
  • Health & Wellbeing comprehensive suite of benefits
  • Personal & Professional Development programs
  • Unconditional Inclusion in an inclusive environment
  • Fulltime

Senior Data Engineer

Kiddom is redefining how technology powers learning. We combine world-class curr...
Location
United States, San Francisco
Salary:
150000.00 - 220000.00 USD / Year
Kiddom
Expiration Date
Until further notice
Requirements
  • 3+ years of experience as a data engineer
  • 8+ years of software engineering experience (including data engineering)
  • Proven experience as a Data Engineer or in a similar role with strong data modeling, architecture, and design skills
  • Strong understanding of data engineering principles including infrastructure deployment, governance and security
  • Experience with MySQL, Snowflake, and Cassandra, and familiarity with graph databases (Neptune or Neo4j)
  • Proficiency in SQL and Python; Golang a plus
  • Proficient with AWS offerings such as AWS Glue, EKS, ECS and Lambda
  • Excellent communication skills, with the ability to articulate complex technical concepts to non-technical stakeholders
  • Strong understanding of PII compliance and best practices in data handling and storage
  • Strong problem-solving skills, with a knack for optimizing performance and ensuring data integrity and accuracy
Job Responsibility
  • Design, implement, and maintain the organization’s data infrastructure, ensuring it meets business requirements and technical standards
  • Deploy data pipelines to AWS infrastructure such as EKS, ECS, Lambdas and AWS Glue
  • Develop and deploy data pipelines to clean and transform data to support other engineering teams, analytics and AI applications
  • Extract and deploy reusable features to Feature stores such as Feast or equivalent
  • Evaluate and select appropriate database technologies, tools, and platforms, both on-premises and in the cloud
  • Monitor data systems and troubleshoot issues related to data quality, performance, and integrity
  • Work closely with other departments, including Product, Engineering, and Analytics, to understand and cater to their data needs
  • Define and document data workflows, pipelines, and transformation processes for clear understanding and knowledge sharing
What we offer
  • Meaningful equity
  • Health insurance benefits: medical (various PPO/HMO/HSA plans), dental, vision, disability and life insurance
  • One Medical membership (in participating locations)
  • Flexible vacation time policy (subject to internal approval); average use is 4 weeks off per year
  • 10 paid sick days per year (pro rated depending on start date)
  • Paid holidays
  • Paid bereavement leave
  • Paid family leave after birth/adoption. Minimum of 16 paid weeks for birthing parents, 10 weeks for caretaker parents. Meant to supplement benefits offered by State
  • Commuter and FSA plans
  • Fulltime

Data Engineer III

As a Data Engineer, you will play a key role in designing, developing, and maint...
Location
India, Chennai
Salary:
Not provided
Arcadia
Expiration Date
Until further notice
Requirements
  • 3+ years as a Data Engineer, data-adjacent Software Engineer, or a did-everything small data team member with a focus on building and maintaining data pipelines
  • Strong Python skills, especially in the context of data orchestration
  • Strong understanding of database management and design, including experience with Snowflake or an equivalent platform
  • Proficiency in SQL
  • Familiarity with data integration patterns, ETL/ELT processes, and data warehousing concepts
  • Experience with Argo, Prefect, Airflow, or similar data orchestration tools
  • Excellent problem-solving and analytical skills with a strong attention to detail
  • Ability to bring a customer-oriented and empathetic approach to understanding how data is used to drive the business
  • Strong communication skills
Job Responsibility
  • Design, develop, and maintain scalable and efficient data pipelines in an AWS environment, centered on our Snowflake instance and using Fivetran, Prefect, Argo, and dbt
  • Collaborate with business analysts, analytics engineers, and software engineers to understand data requirements and deliver reliable solutions
  • Design, build and maintain tooling that enables users and services to interact with our data platform, including CI/CD pipelines for our data lakehouse, unit/integration/validation testing frameworks for our data pipelines, and command-line tools for ad-hoc data evaluation
  • Identify and implement best practices for data ingestion, transformation, and storage to ensure data integrity and accuracy
  • Optimize and tune data pipelines for improved performance, scalability, and reliability
  • Monitor data pipelines and proactively address any issues or bottlenecks to ensure uninterrupted data flow
  • Develop and maintain documentation for data pipelines, ensuring knowledge sharing and smooth onboarding of new team members
  • Implement data governance and security measures to ensure compliance with industry standards and regulations
  • Keep up to date with emerging technologies and trends in data engineering and recommend their adoption as appropriate
What we offer
  • Competitive compensation based on market standards
  • Flexible Leave Policy
  • Office is in the heart of the city in case you need to step in for any purpose
  • Medical Insurance (1+5 Family Members)
  • We provide comprehensive coverage including accident policy and life Insurance
  • Annual performance cycle
  • Quarterly team engagement activities and rewards & recognitions
  • L&D programs to foster professional growth
  • A supportive engineering culture that values diversity, empathy, teamwork, trust, and efficiency
  • Fulltime

Data Engineer

We are seeking our first Data Engineer, someone who can refine our data infrastr...
Location
United States, New York City; San Francisco
Salary:
190000.00 - 250000.00 USD / Year
Hebbia
Expiration Date
Until further notice
Requirements
  • Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field
  • 5+ years software development experience at a venture-backed startup or top technology firm, with a focus on data engineering
  • Significant hands-on experience in data engineering (ETL development, data warehousing, data lake management, etc.)
  • Adept at identifying and owning data projects end to end, with the ability to work independently and exercise sound judgment
  • Proficient in Python and SQL; comfortable working with cloud-based data stack tools
  • Familiar with big data processing frameworks (e.g., Spark, Hadoop) and data integration technologies (e.g., Airflow, DBT, or similar)
  • Experience implementing data governance, security, and compliance measures
  • Strong collaboration and communication skills, with the ability to translate business requirements into technical solutions
  • You are comfortable working in-person 5 days a week
Job Responsibility
  • Architect, build, and maintain ETL pipelines and workflows that ensure high data quality and reliability
  • Design and manage a central data lake to consolidate data from various sources, enabling advanced analytics and reporting
  • Collaborate with cross-functional stakeholders (product, engineering, and business) to identify data gaps and develop effective solutions
  • Implement best practices in data security and governance to ensure compliance and trustworthiness
  • Evaluate and integrate new technologies, tools, and approaches to optimize data processes and architectures
  • Continuously monitor, troubleshoot, and improve data pipelines and infrastructure for performance, scalability, and cost-efficiency
What we offer
  • PTO: Unlimited
  • Insurance: Medical + Dental + Vision + 401K + Wellness Benefits
  • Eats: Catered lunch daily + doordash dinner credit if you ever need to stay late
  • Parental leave policy: 3 months non-birthing parent, 4 months for birthing parent
  • Fertility benefits: $15k lifetime benefit
  • New hire equity grant: competitive equity package with unmatched upside potential
  • Fulltime