We’re not just building better tech. We’re rewriting how data moves and what the world can do with it. With Confluent, data doesn’t sit still. Our platform puts information in motion, streaming in near real-time so companies can react faster, build smarter, and deliver experiences as dynamic as the world around them.

Cluster Linking is how Kafka clusters are replicated, and it sits at the very core of Confluent. When a global bank architects zero-data-loss disaster recovery, when an enterprise migrates petabytes off open-source Kafka, or when a multinational replicates event streams across three continents, they're running on Cluster Linking.

This role will define the future of Cluster Linking: how we deliver seamless disaster recovery for Kafka workloads, how we build self-service migration paths into Confluent Cloud, and how we enable cross-organization data sharing to power real-time, operational data exchanges.
Job Responsibilities:
Launch our one-click DR initiative and drive its adoption
Define the future of real-time stream sharing across organizations
Power capabilities that let customers migrate to Confluent in a few clicks or API calls
Segment and re-package Cluster Linking
Partner closely with your engineering manager counterparts
Work directly with customers
Drive results
Influence and energize the people around you
Requirements:
5+ years of product management experience, ideally in infrastructure, data platforms, or cloud products
Genuine interest in distributed systems and cloud infrastructure
A results-driven mindset
Proven ability to influence
Strong ownership mentality
Business thinking beyond the backlog
Clear communication
LLM or GenAI knowledge is a bonus, but not required