The Senior Staff Product Manager for Apache Iceberg & Open Table Formats will own the strategy, community growth, and technical execution of Teradata’s Iceberg capabilities. This is a dual-charter role spanning outbound community and ecosystem leadership alongside inbound product and performance engineering. You will serve as Teradata’s primary voice in the Apache Iceberg open-source ecosystem while simultaneously driving deep integration of Iceberg into the Teradata platform to deliver world-class lakehouse performance. This PM owns the end-to-end roadmap for Iceberg-related capabilities and leads cross-functional prioritization across engineering, developer relations, and go-to-market teams.
Job Responsibilities:
Serve as Teradata’s primary representative and advocate within the Apache Iceberg open-source community, building trust and credibility with contributors, committers, and the Apache Software Foundation (ASF)
Develop and execute a developer community strategy that grows Teradata’s mindshare among data engineers, lakehouse architects, and open-source contributors working with Apache Iceberg
Build and nurture relationships with the Apache Software Foundation, including participation in Iceberg PMC discussions, contributing to project governance, and representing Teradata’s interests in community roadmap decisions
Drive external thought leadership through conference talks (e.g., ApacheCon, Data + AI Summit, Subsurface), blog posts, technical papers, and social media engagement on Iceberg-related topics
Collaborate with partners and ecosystem vendors (e.g., cloud providers, compute engine vendors, catalog providers) to ensure Teradata’s Iceberg implementation is interoperable and well-positioned in the broader lakehouse ecosystem
Create and maintain developer-facing content including tutorials, reference architectures, and best-practice guides for using Iceberg with Teradata
Own the product roadmap for Iceberg integration within the Teradata platform, covering Iceberg read/write operations, catalog interoperability, metadata management, schema evolution, and partition optimization
Partner closely with database engineering teams to identify and drive performance improvements for Iceberg workloads, including query planning, predicate pushdown, data skipping, compaction, and table maintenance operations
Define product requirements for Iceberg-native capabilities such as time travel, snapshot isolation, branching and tagging, and hidden partitioning within the Teradata ecosystem
Conduct competitive analysis of Iceberg implementations across the industry (e.g., Snowflake, Databricks, Dremio, Cloudera) and translate insights into prioritized product investments
Work with engineering to ensure Teradata’s Iceberg support delivers strong ACID guarantees, multi-engine interoperability, and seamless integration with both batch and streaming workloads
Define and track performance benchmarks and success metrics for Iceberg-based operations, including query latency, throughput, table maintenance efficiency, and storage optimization
Lead collaboration across engineering, developer relations, product marketing, and field teams to align Iceberg strategy with Teradata’s broader lakehouse and open-data platform vision
Drive prioritization and sequencing of work to balance community credibility, technical feasibility, competitive positioning, and speed of delivery
Operate effectively within a matrixed organization, influencing outcomes through clarity, data, and alignment
Apply foundational AI skills to explore and implement ways AI can enhance productivity, innovation, and impact across our workforce
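For context on one of the Iceberg-native capabilities named above: hidden partitioning means the table spec, not the writer, derives partition values by applying a declared transform (such as `day`) to a data column. The sketch below is a simplified, illustrative model of the spec's `day` transform (whole days since the Unix epoch); it is not Teradata's or Iceberg's actual implementation.

```python
from datetime import date, datetime

UNIX_EPOCH = date(1970, 1, 1)

def day_transform(ts: datetime) -> int:
    """Toy model of Iceberg's `day` partition transform: maps a
    timestamp to whole days since the Unix epoch, so every row with
    a timestamp in the same calendar day lands in the same partition
    without the writer ever naming a partition column."""
    return (ts.date() - UNIX_EPOCH).days

# Two writes at different times of the same day share a partition value
morning = day_transform(datetime(2024, 1, 15, 9, 30))
evening = day_transform(datetime(2024, 1, 15, 23, 59))
```

Because the transform lives in table metadata, queries filtering on the raw timestamp column can be pruned to matching partitions automatically, which is why hidden partitioning is a recurring theme in lakehouse query-optimization work.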
Requirements:
12+ years of product management experience in data infrastructure, databases, data platforms, or analytics, with a track record of shipping platform-level capabilities
Deep familiarity with Apache Iceberg or comparable open table formats (Apache Hudi, Delta Lake), including understanding of metadata design, catalog architecture, and query optimization
Demonstrated experience in open-source community engagement—contributing to or leading initiatives within Apache Software Foundation projects or similar open-source ecosystems
Strong technical fluency in lakehouse architecture, including concepts like ACID transactions on object storage, schema evolution, partition evolution, snapshot isolation, and compute-storage separation
Proven ability to lead cross-functional teams across engineering, developer relations, and go-to-market functions
Hands-on experience developing agentic AI systems and successfully bringing agent-driven solutions from concept to market
Foundational AI skills and the ability to understand how AI can be applied to improve outcomes in your area of expertise
Nice to have:
Experience building and growing developer communities around data infrastructure or open-source technologies
Background in database performance engineering, query optimization, or storage engine internals
Familiarity with Iceberg ecosystem tooling including catalogs (e.g., Nessie, Hive Metastore, Unity Catalog, Polaris), compute engines (e.g., Spark, Trino, Flink), and table maintenance services
Track record of public thought leadership—conference speaking, technical blogging, or published contributions to data engineering discourse
Experience working with enterprise customers on lakehouse adoption, migration strategies, or open data architecture
Understanding of competitive landscape across Snowflake, Databricks, Dremio, Cloudera, and cloud-native analytics services
A passion for how AI can unlock potential to help our teams, our customers, and our communities achieve great things
What we offer:
We prioritize a people-first culture
We embrace a flexible work model
We focus on well-being
We are an anti-racist company
We foster an equitable environment that celebrates people for all of who they are