Staff Product Manager, Real Time Data Analytics Platform

Confluent

Location:
United States


Contract Type:
Not provided

Salary:

231500.00 - 272000.00 USD / Year

Job Description:

We're not just building better tech. We're rewriting how data moves and what the world can do with it. With Confluent, data doesn't sit still. Our platform puts information in motion, streaming in near real-time so companies can react faster, build smarter, and deliver experiences as dynamic as the world around them. It takes a certain kind of person to join this team: people who ask hard questions, give honest feedback, and show up for each other. No egos, no solo acts. Just smart, curious humans pushing toward something bigger, together. One Confluent. One Team. One Data Streaming Platform.

About the Role:

Today, enterprises stitch together separate systems for streaming, analytics, and serving. Each of these systems comes with its own latency, cost, and operational overhead. The next evolution of Confluent's platform collapses that complexity by continuously materializing streaming data into fast, queryable state that serves real-time analytics, operational applications, and downstream intelligent systems from a single foundation.

As a Staff Product Manager, you'll own the vision, strategy, and roadmap for the core streaming analytics engine underlying Confluent's real-time data platform. This system sits at the intersection of stream processing, real-time analytical databases, and modern serving infrastructure, continuously transforming data in motion into low-latency, queryable materialized views that power AI agents, dashboards, operational queries, and event-driven applications. You'll own the engine (the storage, materialization, query layer, and performance) while partnering with peer product managers who own the downstream product experiences built on top of it.

This is a foundational, high-leverage role. You'll define a new product category that brings together concepts from streaming analytics, real-time OLAP, materialized views, and modern database systems into a cloud-native, streaming-first architecture. You'll partner directly with engineering, design, and senior leadership to bring this to market and position Confluent as a compelling alternative to standalone real-time analytical databases.

Job Responsibility:

  • Own the product strategy for Confluent's streaming analytics engine
  • Develop competitive strategy against established real-time analytical databases
  • Drive key architectural and product tradeoffs
  • Partner deeply with engineering
  • Collaborate with peer product managers
  • Work with customers and design partners
  • Collaborate with Confluent's broader product and platform teams
  • Define success metrics, cost models, and pricing strategy
  • Influence Confluent's company-wide platform strategy

Requirements:

  • 8+ years of experience in product management or related technical roles
  • 5+ years of experience taking technical infrastructure or data products from conception to launch
  • Deep familiarity with real-time analytics, database internals, or streaming data platform architecture
  • Experience defining and launching products in the data infrastructure, analytics database, or stream processing space
  • Strong technical judgment
  • Track record of leading cross-functional initiatives
  • Bachelor's degree or equivalent practical experience

Nice to have:

  • Engineering background in analytical databases, stream processing, or distributed storage systems
  • Experience with real-time OLAP engines (e.g., ClickHouse, Apache Druid, StarRocks, Apache Pinot)
  • Familiarity with Kafka, Flink, or similar streaming infrastructure
  • Experience building products that serve AI/ML workloads
  • Prior experience with cloud-native infrastructure products and consumption-based pricing models
  • Experience working on open source data infrastructure projects

What we offer:
  • Remote-First Work
  • Robust Insurance Benefits
  • Flexible Time Away
  • The Best Teammates
  • Experience Ambassadors
  • Open and Honest Culture
  • Well-Being and Growth
  • Equity

Additional Information:

Job Posted:
May 15, 2026

Employment Type:
Full-time
Work Type:
Remote work

Similar Jobs for Staff Product Manager, Real Time Data Analytics Platform

Staff Data Engineer

We are seeking a Staff Data Engineer to architect and lead our entire data infra...
Location:
United States, New York; San Francisco
Salary:
170000.00 - 210000.00 USD / Year
Taskrabbit
Expiration Date
Until further notice
Requirements:
  • 7-10 years of experience in Data Engineering
  • Expertise in building and maintaining ELT data pipelines using modern tools such as dbt, Airflow, and Fivetran
  • Deep experience with cloud data warehouses such as Snowflake, BigQuery, or Redshift
  • Strong data modeling skills (e.g., dimensional modeling, star/snowflake schemas) to support both operational and analytical workloads
  • Proficient in SQL and at least one general-purpose programming language (e.g., Python, Java, or Scala)
  • Experience with streaming data platforms (e.g., Kafka, Kinesis, or equivalent) and real-time data processing patterns
  • Familiarity with infrastructure-as-code tools like Terraform and DevOps practices for managing data platform components
  • Hands-on experience with BI and semantic layer tools such as Looker, Mode, Tableau, or equivalent
Job Responsibility:
  • Design, build, and maintain scalable, reliable data pipelines and infrastructure to support analytics, operations, and product use cases
  • Develop and evolve dbt models, semantic layers, and data marts that enable trustworthy, self-serve analytics across the business
  • Collaborate with non-technical stakeholders to deeply understand their business needs and translate them into well-defined metrics and analytical tools
  • Lead architectural decisions for our data platform, ensuring it is performant, maintainable, and aligned with future growth
  • Build and maintain data orchestration and transformation workflows using tools like Airflow, dbt, and Snowflake (or equivalent)
  • Champion data quality, documentation, and observability to ensure high trust in data across the organization
  • Mentor and guide other engineers and analysts, promoting best practices in both data engineering and analytics engineering disciplines
What we offer:
  • Employer-paid health insurance
  • 401k match with immediate vesting
  • Generous and flexible time off with 2 company-wide closure weeks
  • Taskrabbit product stipends
  • Wellness + productivity + education stipends
  • IKEA discounts
  • Reproductive health support
Employment Type: Full-time

Staff Product Manager — Teradata Operational & Real Time Engine

The Staff Product Manager for the Operational & Real-Time Engine owns the end-to...
Location:
India, Bengaluru; Hyderabad; Gurugram
Salary:
Not provided
Teradata
Expiration Date
Until further notice
Requirements:
  • 8+ years of product management experience, with demonstrated track record of shipping complex technical products and building successful businesses
  • Strong understanding of the OLTP and real-time serving market. Experience with low-latency database architectures, operational data stores, or real-time analytics systems
  • Deep technical ability — you can read an architecture diagram, review a query plan, reason about caching layers, and hold your own with database engineers
  • Demonstrated drive to ship in close collaboration across engineering, design, sales, and marketing
  • Experience with zero-to-one or early-stage product development within a larger company
  • Excellent communication skills — you can present to a CTO and write a PRD with equal clarity
Job Responsibility:
  • Own the product strategy for Teradata’s operational engine, including market positioning, customer segmentation, competitive differentiation, and strategic choices
  • Define the architecture: how the real-time engine integrates with VantageCloud — the serving layer, caching tier, CDC pipelines, shared governance, and unified metadata
  • Develop a clear point of view on where the OLTP + analytics boundary is heading and how Teradata wins in a world where AI agents need sub-10ms data access and applications demand operational reads on analytical data
  • Discover and synthesize unmet customer needs across Teradata’s Fortune 500 customer base — particularly around real-time serving, GenAI retrieval, feature stores, and application development on analytics data
  • Translate those needs into clear product requirements and specifications
  • Validate with customers early and often. Run pilots. Kill bad ideas fast
  • Develop and own the product roadmap. Execute in close collaboration with Engineering, working across Teradata’s core platform team and the real-time engine team
  • Engage with senior engineers on deep technical topics — low-latency query paths, connection pooling, vector search integration, MCP support, caching architecture, CDC and change propagation — and guide junior engineers on product context
  • Ship with appropriate enterprise safeguards: role-based access, audit logging, encryption, and compliance controls that Teradata customers expect
  • Develop GTM strategy in close collaboration with Sales and Marketing
What we offer:
  • We prioritize a people-first culture because we know our people are at the very heart of our success
  • We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work
  • We focus on well-being because we care about our people and their ability to thrive both personally and professionally
  • We are committed to actively working to foster an inclusive environment that celebrates people for all of who they are
Employment Type: Full-time

Software Engineer Sr Staff - Platforms Developer

Designs, develops, troubleshoots and debugs software programs for software enhan...
Location:
India, Bangalore
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements:
  • Bachelor’s or master’s degree in computer science, electronics, telecommunication engineering, or a related discipline
  • 14 to 19 years of experience in networking and system software development
  • Proficiency in C and C++ programming
  • Familiarity with data structures and system debugging techniques
  • Expertise in host complex, system peripherals & drivers: CPU complex (x86); PCIe, SPI, I2C, MDIO; FPGA, CPLD, flash drivers
  • Expertise in Ethernet Interfaces (ranging from 1Gig to 400G+, including 800G, 1.6T), MacSec, Timing, Optics (SFP, QSFP, QDD, OSFP)
  • Expertise in High-speed packet forwarding with network processors, PHYs, and SerDes
  • Cloud Architectures
Job Responsibility:
  • Collaborate with product managers, architects, and other engineers to define software requirements and specifications
  • Design, implement, and maintain networking and system software components using C and C++ programming languages
  • Conduct object-oriented analysis and design to ensure robust and scalable solutions
  • Debug complex system-level issues, leveraging your deep understanding of fundamental OS concepts (especially in Linux or similar operating systems)
  • Participate in hardware and system-level design discussions, ensuring carrier-class software development
  • Work with Linux device drivers, system bring-up, and the Linux kernel
  • Navigate large codebases effectively
  • Apply strong technical, analytical, and problem-solving skills to enhance software performance and resilience
  • Utilize scripting technologies and modern DevOps practices
  • Collaborate with cross-functional teams, including networking, embedded platform software, and hardware experts
What we offer:
  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion
Employment Type: Full-time

Staff Software Engineer, Data Infrastructure

At Docker, we make app development easier so developers can focus on what matter...
Location:
United States, Seattle
Salary:
195400.00 - 275550.00 USD / Year
Docker
Expiration Date
Until further notice
Requirements:
  • 8+ years of software engineering experience with 3+ years focused on data engineering and analytics systems
  • Expert-level experience with Snowflake including advanced SQL, performance optimization, and cost management
  • Deep proficiency in DBT for data modeling, transformation, and testing with experience in large-scale implementations
  • Strong expertise with Apache Airflow for complex workflow orchestration and pipeline management
  • Hands-on experience with Sigma or similar modern BI platforms for self-service analytics
  • Extensive AWS experience including data services (S3, Redshift, EMR, Glue, Lambda, Kinesis) and infrastructure management
  • Proficiency in Python, SQL, and other programming languages commonly used in data engineering
  • Experience with infrastructure-as-code, CI/CD practices, and modern DevOps tools
  • Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience
  • Proven track record designing and implementing large-scale distributed data systems
Job Responsibility:
  • Define and drive the technical strategy for Docker's data platform architecture, establishing long-term vision for scalable data systems
  • Lead design and implementation of highly scalable data infrastructure leveraging Snowflake, AWS, Airflow, DBT, and Sigma
  • Architect end-to-end data pipelines supporting real-time and batch analytics across Docker's product ecosystem
  • Drive technical decision-making around data platform technologies, architectural patterns, and engineering best practices
  • Establish technical standards for data quality, testing, monitoring, and operational excellence
  • Design and build robust, scalable data systems that process petabytes of data and support millions of user interactions
  • Implement complex data transformations and modeling using DBT for analytics and business intelligence use cases
  • Develop and maintain sophisticated data orchestration workflows using Apache Airflow
  • Optimize Snowflake performance and cost efficiency while ensuring reliability and scalability
  • Build data APIs and services that enable self-service analytics and integration with downstream systems
What we offer:
  • Freedom & flexibility: fit your work around your life
  • Designated quarterly Whaleness Days plus end of year Whaleness break
  • Home office setup: we want you comfortable while you work
  • 16 weeks of paid Parental leave
  • Technology stipend equivalent to $100 net/month
  • PTO plan that encourages you to take time to do the things you enjoy
  • Training stipend for conferences, courses and classes
  • Equity
Employment Type: Full-time

Staff Software Engineer - Backend

As the Staff Software Engineer for our SaaS platform team, you will be crucial i...
Location:
United States, Mountain View
Salary:
198000.00 - 225000.00 USD / Year
Cyngn
Expiration Date
Until further notice
Requirements:
  • 10+ years of software development experience, with a strong focus on backend systems and distributed architectures
  • Extensive experience in building and scaling cloud-native SaaS platforms, preferably in the IoT or robotics domains
  • Expert-level proficiency in at least one of Python, Go, Java, or C++, with working knowledge of others
  • Deep understanding of cloud technologies and services (AWS, Azure, or GCP)
  • Proven experience with event-driven architectures and message queuing systems (e.g., Kafka, RabbitMQ, Apache Pulsar)
  • Strong background in database design and optimization, including both SQL and NoSQL solutions
  • Proficiency in developing scalable WebSocket-based real-time communication systems
  • Expertise in developing real-time data processing pipelines and analytics systems
  • Proficiency with containerization and orchestration technologies (Docker, Kubernetes)
  • Experience with infrastructure-as-code and CI/CD practices (e.g., Terraform, GitOps)
Job Responsibility:
  • Architect and lead the development of a sophisticated, cloud-native fleet management system capable of real-time control and monitoring of numerous autonomous vehicles
  • Design and implement scalable, distributed systems that can handle high-volume, real-time data processing and decision-making
  • Develop robust APIs and microservices to support integration with various autonomous vehicle platforms and customer systems
  • Create efficient algorithms for route optimization, task scheduling, and resource allocation across vehicle fleets
  • Implement advanced data analytics and machine learning capabilities to provide predictive maintenance, performance optimization, and business intelligence features
  • Ensure system reliability, security, and compliance with industry standards and regulations
  • Lead a team of skilled engineers, fostering a culture of innovation, code quality, and continuous improvement
  • Collaborate with product managers, UX designers, and customers to translate business requirements into technical solutions
  • Mentor junior developers and contribute to the technical growth of the engineering team
  • Participate in the entire software development lifecycle, from concept and design to testing, deployment, and maintenance
What we offer:
  • Health benefits (Medical, Dental, Vision, HSA and FSA (Health & Dependent Daycare), Employee Assistance Program, 1:1 Health Concierge)
  • Life, Short-term, and long-term disability insurance (Cyngn funds 100% of premiums)
  • Company 401(k)
  • Commuter Benefits
  • Flexible vacation policy
  • Remote or hybrid work opportunities
  • Sabbatical leave opportunity after five years with the company
  • Paid Parental Leave
  • Daily lunches for in-office employees
  • Monthly meal and tech allowances for remote employees
Employment Type: Full-time

Senior Staff Engineer - Availability and Incident Management

GEICO is seeking an experienced Engineer with a passion for building high-perfor...
Location:
United States, Chevy Chase; Austin; Palo Alto; Richardson; Chicago
Salary:
110000.00 - 260000.00 USD / Year
Geico
Expiration Date
Until further notice
Requirements:
  • Experience building automation platforms and self-service tools for workflow management, analytics, or engineering productivity
  • Fluency in at least two modern languages such as Python, Go, Java, C++, or C# including object-oriented design
  • Experience building microservices architectures, REST APIs, and distributed systems
  • Experience with data pipelines, analytics platforms, and visualization tools for operational metrics and KPIs
  • Experience with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra, CosmosDB) for data storage and analytics
  • Experience with observability platforms (Prometheus, Grafana, Datadog, Splunk, ELK) and distributed systems monitoring, logging, and tracing
  • Experience with cloud providers (Azure, AWS, or GCP) and cloud-native architectures
  • Experience with CI/CD pipelines, infrastructure as code, and container orchestration (Kubernetes, Docker)
  • Experience writing workflow automation code (YAML pipelines, GitHub Actions, Azure DevOps pipelines)
  • Strong understanding of distributed systems architecture, design patterns, reliability, and scaling
Job Responsibility:
  • Lead the strategy and execution for incident retrospective and correction of error (COE) processes across the engineering organization
  • Help conduct deep technical root cause analysis and incident forensics across distributed systems using observability data, logs, metrics, and traces
  • Establish continuous improvement loops through automated trend analysis, pattern recognition algorithms, and predictive analytics
  • Design, code, and deploy automation platforms and self-service tools using Python, Go, Java, or C# that scale incident retrospective workflows and eliminate manual tracking
  • Build production-grade data pipelines, analytics systems, and real-time dashboards to measure incident trends, COE effectiveness, and action item completion rates
  • Write code for workflow automation, integrations with observability platforms, and APIs that connect incident management tools across the engineering ecosystem
  • Leverage SQL and NoSQL databases to store, query, and analyze incident data at scale using Azure tools and cloud-native services
  • Develop and maintain systems that ensure rigorous follow-through on action items, remediation plans, and preventive measures with automated tracking
  • Partner with service engineering teams to implement preventive measures and architectural improvements based on incident patterns
  • Present data-driven insights and incident trend analysis to leadership and engineering teams to drive preventive action
What we offer:
  • Comprehensive Total Rewards program that offers personalized coverage tailor-made for you and your family’s overall well-being
  • Financial benefits, including market-competitive compensation, a 401K savings plan vested from day one that offers a 6% match, performance and recognition-based incentives, and tuition assistance
  • Access to additional benefits like mental healthcare as well as fertility and adoption assistance
  • Supports flexibility: we provide workplace flexibility as well as our GEICO Flex program, which offers the ability to work from anywhere in the US for up to four weeks per year
Employment Type: Full-time

Senior Principal Software Engineer, Infrastructure

At Docker, we make app development easier so developers can focus on what matter...
Location:
United States, Seattle
Salary:
251000.00 - 352000.00 USD / Year
Docker
Expiration Date
Until further notice
Requirements:
  • 12+ years of software engineering experience with demonstrated expertise across multiple platform domains (identity, billing, data, infrastructure)
  • Proven track record architecting and delivering large-scale distributed systems serving millions of users and thousands of enterprise customers
  • Deep expertise in at least two of: identity/access management systems, billing/monetization platforms, data platforms, or cloud infrastructure
  • Broad working knowledge across all platform domains with ability to make sound architectural decisions spanning multiple areas
  • Expert-level understanding of API design, service architecture, and system integration patterns at scale
  • Experience with cloud platforms (AWS, GCP, or Azure) and modern infrastructure patterns (Kubernetes, service mesh, infrastructure-as-code)
  • Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience
  • Track record of establishing strategic technical plans that directly enabled business outcomes (revenue growth, cost reduction, market expansion)
  • Experience translating business strategy into technical architecture and roadmaps
  • Demonstrated ability to identify and prioritize investments that provide maximum platform leverage
Job Responsibility:
  • Define and own the multi-year technical vision for Docker's foundational platform, encompassing accounts, billing, data, enterprise governance, and infrastructure
  • Establish strategic plans and objectives for major platform initiatives, making architectural decisions that ensure effective achievement of Docker's business objectives
  • Contribute to and drive the strategic vision in collaboration with the VP of Engineering, translating organizational strategy into technical roadmaps that span multiple teams and years
  • Identify and prioritize platform investments that provide maximum leverage—capabilities built once that enable rapid iteration across all Docker products
  • Develop architectural principles and standards that guide technical decisions across the Bridge organization and influence product engineering teams
  • Anticipate future business needs and ensure platform architecture provides the flexibility to support Docker's evolving commercial models
  • Lead large cross-company programs that require coordination across Desktop, Hub, AI, Security, Cloud, and Platform teams
  • Architect the unified platform interfaces ("Control Planes") that enable product teams to answer canonical questions like "Can this user access this feature?" or "How much has this organization consumed?" without understanding underlying complexity
  • Drive convergence of fragmented systems across Docker—replacing product-specific implementations with shared platform capabilities for authentication, authorization, billing, and observability
  • Establish technical contracts between platform and product teams that enable independent velocity while ensuring consistency and reliability
What we offer:
  • Freedom & flexibility: fit your work around your life
  • Designated quarterly Whaleness Days plus end of year Whaleness break
  • Home office setup: we want you comfortable while you work
  • 16 weeks of paid Parental leave
  • Technology stipend equivalent to $100 net/month
  • PTO plan that encourages you to take time to do the things you enjoy
  • Training stipend for conferences, courses and classes
  • Equity
Employment Type: Full-time

Systems Analyst 3 - Data Engineer

Sammons Financial Group is seeking a Systems Analyst – Data Engineer to design, ...
Location:
United States, Sioux Falls; West Des Moines; Chicago
Salary:
82654.00 - 172197.00 USD / Year
Sammons Financial Group
Expiration Date
Until further notice
Requirements:
  • College degree in computer science, information science, or management information systems (preferred)
  • Minimum 8 years' IT development experience or equivalent (preferred)
  • Effective verbal and written communications skills and the ability to communicate with business partners and other IT staff
  • Problem solving skills sufficient to perform research and recommend a proposed solution to problems
  • Able to work on multiple tasks and meet established deadlines
  • Able to effectively direct and coordinate the work of other team members on a project without having HR management responsibility for them
  • Knowledge of computer programming languages as required for the system
  • Criminal background check required
Job Responsibility:
  • Design, develop, and implement scalable data ingestion, integration, and processing pipelines across cloud platforms (Azure, AWS, Snowflake, and similar EDW/lakehouse platforms)
  • Develop and manage data orchestration workflows using tools such as Azure Data Factory (ADF), Azure Data Lake (ADLS), dbt, and comparable technologies
  • Ingest and process large volumes of structured, semi-structured, and unstructured data, including compressed formats (e.g., .tar), and automate extraction, transformation, and loading processes
  • Design and implement modern data lakehouse architectures, including Iceberg (or similar table formats), to support scalable and high-performance analytics
  • Develop and maintain data models that accurately represent complex relationships within life insurance and policy administration domains
  • Integrate enterprise data platforms with internal and external systems (e.g., APIs, Kafka, MuleSoft) to enable real-time and batch data exchange
  • Collaborate with product owners, architects, analysts, and developers to translate business, functional, and non-functional requirements into scalable technical solutions
  • Establish and enforce data engineering standards, best practices, and governance controls across ingestion, transformation, and storage layers
  • Implement data quality validation, reconciliation processes, and error handling to ensure accuracy, consistency, and reliability of data pipelines
  • Monitor pipeline performance, reliability, scalability, and cost efficiency
What we offer:
  • Comprehensive health coverage for you and your family, including Medical, Dental, Vision, HSA & FSA options, and term life insurance
  • Competitive compensation with a performance-based incentive program tied to clear goals and individual and/or company success
  • Invest in your future with our 100% company-funded Employee Stock Ownership Plan (ESOP), plus automatic enrollment in our 401(k)
  • Work–life balance that means something. Friday afternoons off year-round, generous paid time off, and paid holidays
  • Commit to your growth with paid development time, tuition reimbursement, and professional development opportunities across industry, individual, and leadership programs
  • Make an impact beyond the workplace through volunteer time off, and our company nonprofit matching gift program, supporting the causes that matter most to you
  • An ownership culture that inspires: join a connected, values-driven workplace where employees take accountability, support one another, and are empowered to do their best work, together shaping our future shared success
Employment Type: Full-time