Staff Software Engineer I - Stream Governance

Confluent

Location:
Canada

Contract Type:
Not provided

Salary:

225100.00 - 264500.00 CAD / Year

Job Description:

As a Staff Software Engineer, you will be the technical lead for several key initiatives within the Stream Governance product at Confluent. The Stream Governance portfolio is central to Confluent's mission of putting event streaming at the heart of every organization and is a core component of the Data Streaming Platform. As part of the Stream Governance offering, you will be responsible for delivering critical capabilities such as Confluent Stream Catalog, Stream Sharing, Stream Lineage, and Data Portal. These products enable customers to search, organize, understand, share, and access data in a self-service way. In this role, you will collaborate closely with the team and key stakeholders to design, architect, and develop cloud-native, multi-tenant services for Stream Governance. You will guide the vision, provide technical leadership and mentoring, and enable a high-performing engineering team to tackle complex distributed data challenges at scale.

Job Responsibility:

  • Develop a cloud-native Stream Governance platform for Kafka and real-time data as a multi-tenant, highly available, and scalable service
  • Architect a complex engineering system from end to end
  • Partner across engineering and other key stakeholders to create and execute the overall roadmap for delivering a top-notch Data Streaming Platform for our customers
  • Evaluate and enhance the efficiency of our platform's technology stack, keeping pace with industry trends and adopting state-of-the-art solutions
  • Deliver complicated technical projects with high quality and provide technical guidance to the team in specialized areas
  • As a vital member of our team, take responsibility for developing, managing, and maintaining a mission-critical service with a 99.99% availability SLA running in 90+ AWS, GCP, and Azure regions
  • Enhance the stability, performance, scalability, and operational excellence across multiple critical systems

Requirements:

  • 10+ years of relevant software development experience
  • Technical expertise in large-scale systems engineering or distributed systems
  • 5+ years of experience with designing, building, and scaling distributed systems
  • Experience running production services in the cloud and being part of an on-call rotation
  • Expertise in cloud-native technology, including networking & security
  • Prior experience working on AWS, GCP, or Azure and a deep understanding of cloud best practices
  • Ability to influence the team, peers, and management using effective communication and collaborative techniques
  • Proven experience in leading and mentoring technical teams
  • Ability to drive results by removing friction and improving the development stack, with strong urgency and prioritization skills
  • BS degree in Computer Science, Engineering, or equivalent experience; an advanced degree in Computer Science is preferred

What we offer:
  • Remote-First Work
  • Robust Insurance Benefits
  • Flexible Time Away
  • The Best Teammates
  • Experience Ambassadors
  • Open and Honest Culture
  • Well-Being and Growth
  • Offers Equity

Additional Information:

Job Posted:
January 01, 2026

Employment Type:
Full-time
Work Type:
Remote work

Similar Jobs for Staff Software Engineer I - Stream Governance

Member of Technical Staff - Data Engineer

As Microsoft continues to push the boundaries of AI, we are on the lookout for i...
Location:
United States, New York
Salary:
139900.00 - 274800.00 USD / Year
Microsoft Corporation
Expiration Date
Until further notice
Requirements:
  • Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work
  • OR Master's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work
  • OR equivalent experience
  • 4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL
  • Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.
  • 3+ years experience with data governance, data compliance and/or data security
  • 2+ years' experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP, with extensive use of datastores like RDBMS, key-value stores, etc.
  • 2+ years' experience building distributed systems at scale and extensive systems knowledge that spans bare-metal hosts to containers to networking
  • Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience
  • Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security
Job Responsibility:
  • Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases
  • Work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services
  • Ship high-quality, well-tested, secure, and maintainable code
  • Find a path to get things done despite roadblocks, delivering your work into the hands of users quickly and iteratively
  • Enjoy working in a fast-paced, design-driven, product development cycle
  • Embody our Culture and Values
Employment Type: Full-time

Staff Software Engineer, Backend Platform

At Harvey, we’re transforming how legal and professional services operate — not ...
Location:
United States, San Francisco
Salary:
238000.00 - 290000.00 USD / Year
Harvey
Expiration Date
Until further notice
Requirements:
  • 7+ years of software engineering experience (post-BS/MS), including building scalable backend systems or internal developer platforms
  • Proficiency in Python (or similar languages) and deep knowledge of backend development fundamentals (APIs, data stores, concurrency, distributed systems)
  • Hands-on experience with web frameworks and service architectures (e.g. Flask/FastAPI, Bounded context services, microservices) and an understanding of designing clean, versioned APIs
  • Familiarity with caching, messaging, and database technologies (Redis, Kafka, SQL/NoSQL databases, Vector databases, etc.) and how to use them effectively for high performance and reliability
  • A track record of writing high-quality, well-tested code and using tools (unit/integration testing, static typing, CI) to catch issues early and ensure reliability
  • Strong problem-solving skills and a passion for improving developer experience — you enjoy creating tools or frameworks that make other engineers more productive
  • Excellent collaboration and communication skills, with the ability to work across teams and incorporate feedback
Job Responsibility:
  • Develop and maintain Harvey’s internal backend frameworks and libraries that provide common capabilities (API routing, service lifecycle management, caching and messaging primitives, error handling interfaces, etc.), so product teams don’t have to reinvent them
  • Create and improve APIs, service templates, and versioned interfaces that establish consistent patterns for building new services and features
  • Introduce and champion modern backend architecture patterns like asynchronous I/O (asyncio) and streaming data processing, continually evolving our platform for better performance and scalability
  • Design Harvey-specific abstractions and domain-specific frameworks—covering cross-cutting concerns (e.g., authorization, streaming) and areas like data governance and event processing—to provide product engineers with these capabilities out of the box
  • Embed reliability and observability into the platform by building in tracing, metrics, and automated tests (shift-left), ensuring services built on our foundation are robust and easy to monitor
  • Collaborate with Model Infrastructure team to tackle challenges unique to GenAI-native applications — such as supporting high-throughput model inference, managing streaming and long-running API interactions, and designing abstractions for retrieval, context handling, and prompt lifecycle
  • Collaborate with the Developer Experience and Infrastructure teams (who own CI/CD pipelines, build tools, and release infrastructure) to integrate our platform components seamlessly into the deployment and monitoring ecosystem
  • Work closely with product engineering teams to gather feedback, evangelize best practices, and make the “paved road” approach a reality — providing strong defaults and clear documentation so teams can move fast with confidence
What we offer:
  • Offers Equity
  • Offers Bonus
  • Comprehensive health, dental, and vision coverage
  • Retirement benefits (401k match up to 4%)
  • Flexible PTO
Employment Type: Full-time

Member of Technical Staff - Data Platform

If you are excited by the challenge of designing distributed systems that proces...
Location:
United States, Mountain View; Redmond
Salary:
119800.00 - 234700.00 USD / Year
Microsoft Corporation
Expiration Date
Until further notice
Requirements:
  • Master's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 3+ years experience in business analytics, data science, software development, data modeling, or data engineering OR Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, data modeling, or data engineering OR equivalent experience
  • Proficiency in Python, Scala, Java, or Go
  • Deep Distributed Systems Knowledge: Demonstrated technical understanding of massive-scale compute engines (e.g., Apache Spark, Flink, Ray, Trino, or Snowflake)
  • Experience architecting Lakehouse environments at scale (using Delta Lake, Iceberg, or Hudi)
  • Experience building internal developer platforms or "Data-as-a-Service" APIs
  • Strong background in streaming technologies (Kafka, Azure EventHubs, Pulsar) and stateful stream processing
  • Experience with container orchestration (Kubernetes) for deploying data applications
  • Experience enabling AI/ML workloads (Feature Stores, Vector Databases)
Job Responsibility:
  • Core Platform Engineering: Design and build the underlying frameworks (based on Spark/Databricks) that allow internal teams to process massive datasets efficiently
  • Distributed Systems Architecture: Modernize our data stack by moving from batch-heavy patterns to event-driven architectures
  • Unstructured AI Data Pipelines: Architect high-throughput pipelines capable of processing complex, non-tabular data (documents, code repositories, chat logs) for LLM pre-training, fine-tuning and evaluations datasets
  • AI Feedback Loops: Engineer the high-throughput telemetry systems that capture user interactions with Copilot
  • Infrastructure as Code: Treat the data platform as software. Define and deploy all storage, compute, and networking resources using IaC (Bicep/Terraform)
  • Data Reliability Engineering: Move beyond simple "validation checks" to build automated governance and observability systems
  • Compute Optimization: Deep-dive into query execution plans and cluster performance. Optimize shuffle operations, partition strategies, and resource allocation
Employment Type: Full-time

Staff Software Engineer, Data Infrastructure

At Docker, we make app development easier so developers can focus on what matter...
Location:
United States, Seattle
Salary:
195400.00 - 275550.00 USD / Year
Docker
Expiration Date
Until further notice
Requirements:
  • 8+ years of software engineering experience with 3+ years focused on data engineering and analytics systems
  • Expert-level experience with Snowflake including advanced SQL, performance optimization, and cost management
  • Deep proficiency in DBT for data modeling, transformation, and testing with experience in large-scale implementations
  • Strong expertise with Apache Airflow for complex workflow orchestration and pipeline management
  • Hands-on experience with Sigma or similar modern BI platforms for self-service analytics
  • Extensive AWS experience including data services (S3, Redshift, EMR, Glue, Lambda, Kinesis) and infrastructure management
  • Proficiency in Python, SQL, and other programming languages commonly used in data engineering
  • Experience with infrastructure-as-code, CI/CD practices, and modern DevOps tools
  • Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience
  • Proven track record designing and implementing large-scale distributed data systems
Job Responsibility:
  • Define and drive the technical strategy for Docker's data platform architecture, establishing long-term vision for scalable data systems
  • Lead design and implementation of highly scalable data infrastructure leveraging Snowflake, AWS, Airflow, DBT, and Sigma
  • Architect end-to-end data pipelines supporting real-time and batch analytics across Docker's product ecosystem
  • Drive technical decision-making around data platform technologies, architectural patterns, and engineering best practices
  • Establish technical standards for data quality, testing, monitoring, and operational excellence
  • Design and build robust, scalable data systems that process petabytes of data and support millions of user interactions
  • Implement complex data transformations and modeling using DBT for analytics and business intelligence use cases
  • Develop and maintain sophisticated data orchestration workflows using Apache Airflow
  • Optimize Snowflake performance and cost efficiency while ensuring reliability and scalability
  • Build data APIs and services that enable self-service analytics and integration with downstream systems
What we offer:
  • Freedom & flexibility – fit your work around your life
  • Designated quarterly Whaleness Days plus end of year Whaleness break
  • Home office setup – we want you comfortable while you work
  • 16 weeks of paid Parental leave
  • Technology stipend equivalent to $100 net/month
  • PTO plan that encourages you to take time to do the things you enjoy
  • Training stipend for conferences, courses and classes
  • Equity
Employment Type: Full-time

Principal Software Engineer (Backend)

Palo Alto Networks' ADEM (Autonomous Digital Experience Management) group is see...
Location:
United States, Santa Clara
Salary:
Not provided
Palo Alto Networks
Expiration Date
Until further notice
Requirements:
  • 12+ years of software engineering experience, with a significant portion dedicated to designing and operating large-scale distributed systems in a cloud-native environment
  • Advanced AI-Augmented Development: expert in leveraging AI-powered development tools—including Claude Code, Cursor, Windsurf, and GitHub Copilot—to radically accelerate the SDLC and automate complex refactoring and testing workflows
  • Distributed Systems Mastery: Proven track record of architecting systems that handle billions of events per day with strict sub-second latency requirements using Rust, Go (Golang), Java, or Python
  • GCP Principal-Level Expertise: Deep authoritative knowledge of the GCP ecosystem (GKE, Spanner, BigQuery, Pub/Sub, Dataflow) and the ability to optimize cloud spend through sophisticated architectural choices (FinOps)
  • Data Plane Innovation: Experience building high-throughput, low-latency data pipelines using technologies like Kafka, Pulsar, or Flink
  • Security & Networking Visionary: Deep understanding of Zero Trust architecture, L4-L7 networking, and advanced encryption standards
  • Open Source & Community: A history of contributing to open-source projects (e.g., Kubernetes, Prometheus, Istio) or speaking at industry conferences is highly desirable
  • Education: BS/MS/PhD in Computer Science or a related technical field, or equivalent high-level professional experience
Job Responsibility:
  • Technical Strategy & Roadmap: Define the long-term architectural vision for ADEM backend services, ensuring scalability to support hundreds of millions of global endpoints and multi-petabyte telemetry streams
  • Architectural Governance: Lead the "Design Review Board" for the ADEM org, ensuring that all new services adhere to Secure AI by Design, high-availability patterns, and cost-efficient GCP utilization
  • AI/ML Integration at Scale: Drive the transition from traditional analytics to Agentic AI workflows, overseeing the backend orchestration required to power LLM-driven autonomous remediation
  • Cross-Functional Leadership: Partner with Product Management, Data Science, and DevOps to translate ambiguous business requirements into robust, high-performance technical specifications
  • Engineering Excellence & Mentorship: Act as a force multiplier by mentoring Staff and Senior engineers, fostering a culture of rigorous testing, high code quality, and proactive technical debt management
  • Crisis Leadership: Serve as the ultimate technical escalation point for complex, systemic production issues, leading post-mortems that drive permanent architectural improvements
What we offer:
  • Compensation may also include restricted stock units and a bonus
Employment Type: Full-time

Senior/Staff Software Engineer - Data Platform

Perplexity is looking for experienced Data Platform Engineers to design, build, ...
Location:
United States, San Francisco, Seattle, New York City
Salary:
250000.00 - 385000.00 USD / Year
Perplexity
Expiration Date
Until further notice
Requirements:
  • 5+ years (Senior) or 8+ years (Staff) of software engineering experience
  • Strong experience building production data infrastructure systems
  • Hands-on experience with batch and/or streaming data processing at scale
  • Deep familiarity with data orchestration systems (Airflow, Dagster, or similar)
  • Proficiency in Python and at least one additional backend language (Go, TypeScript, etc.)
  • Strong systems thinking: you understand tradeoffs across reliability, latency, cost, and complexity
  • Experience supporting ML/AI workflows, training pipelines, or evaluation systems
  • Familiarity with data quality, lineage, observability, and governance tooling
  • Prior ownership of internal platforms used by many teams
Job Responsibility:
  • Design and operate large-scale batch and streaming data pipelines supporting product features, AI training/evaluation, analytics, and experimentation
  • Build and evolve event-driven and streaming systems (e.g., Kafka/Kinesis/PubSub-style architectures) for real-time ingestion, transformation, and delivery
  • Own batch processing frameworks for backfills, aggregations, and offline computation
  • Lead the design and operation of data orchestration systems (e.g., Airflow, Dagster, or equivalent), including scheduling, dependency management, retries, SLAs, and observability
  • Establish strong guarantees around data correctness, freshness, lineage, and recoverability
  • Design systems that handle scale, partial failure, and evolving schemas
  • Build self-serve data platforms that empower engineers, data scientists, and analysts to safely create and operate pipelines
  • Improve developer experience for data work through better abstractions, tooling, documentation, and paved paths
  • Set standards for data modeling, testing, validation, and deployment
  • Drive architectural decisions across data infrastructure for storage, compute, orchestration, and APIs
What we offer:
  • Equity
  • Health
  • Dental
  • Vision
  • Retirement
  • Fitness
  • Commuter and dependent care accounts
Employment Type: Full-time

Lead Software Engineer - AI Engineering

Join RTB House and lead our AI Engineering Lab, a team dedicated to pioneering i...
Location:
Poland
Salary:
Not provided
RTB House
Expiration Date
Until further notice
Requirements:
  • Minimum of 6 years of professional experience in Software Engineering, with a strong background in building and deploying complex, large-scale systems
  • Distributed Systems Expertise: Proven, hands-on experience designing, developing, and operating distributed systems at scale (e.g., microservices, event-driven architectures, stream processing)
  • Programming Languages: Proficiency in at least two programming languages, with Python being mandatory
  • AI/ML Engineering: Basic understanding of the Machine Learning lifecycle, MLOps practices, and experience in integrating ML models (especially LLMs) into production applications
  • Technical Leadership: Demonstrated experience in technical leadership, including defining technical roadmaps, mentoring junior engineers, leading code reviews, and driving architectural decisions
  • Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
Job Responsibility:
  • Lead, mentor, and grow a team of talented Frontend/Full Stack and Backend engineers, fostering a culture of technical excellence and high code quality
  • Serve as a Full Stack tech-leader (often hands-on), contributing to the design and development of key architectures and full stack solutions that support various platforms (Web, Mobile, CTV)
  • Define and execute the team's charter, focusing on end-to-end customer interactions and the reliable display of ads globally
  • Develop and oversee state-of-the-art observability systems for the Ad Display platform, tracking crucial metrics like reliability, viewability, latency, and providing deep debugging insights for ad creation teams
  • Provide governance for cross-team ad rollout, including defining best practices and tooling for rigorous testing and deployment strategies (A/B testing, Canary deployments)
  • Lead complex technical projects at massive scale, ensuring our solutions can handle millions of requests and maintain high performance worldwide
  • Collaborate intensely with a Staff Frontend Engineer, stakeholders from Ads layouts creation teams (designers, graphic specialists), and the core Bidding Platform backend teams.
What we offer:
  • Projects focused on high code quality – solid code reviews are our standard
  • Collaboration within an interdisciplinary, self-sufficient team including: DevOps (ensuring a great Developer Experience), database experts, backend developers, product designers, and QA engineers
  • Hardware and software tailored to your preferences – e.g. MacBook, AI tool licenses
  • Access to modern technologies and the opportunity to apply them in large-scale, high-impact projects
  • Flexible working conditions – no core hours, fully remote cooperation.
