Join Notion’s Data Platform team as we scale our infrastructure for enterprise customers. You’ll help design and build the core data platform that powers Notion’s AI, analytics, and search while meeting stringent security, privacy, and compliance requirements. This role focuses on the data platform layer (storage, compute, pipelines, governance) and partners closely with Security, Search Platform, AI, and Data Engineering.
Job Responsibilities:
Design and evolve the data lakehouse
Build and operate core lakehouse components (e.g., Iceberg/Hudi/Delta tables, catalogs, schema management) that serve as the source of truth for analytics, AI, and search
Own critical data pipelines and services
Design, implement, and harden batch and streaming pipelines (Spark, Kafka, EMR, etc.) that move and transform data reliably across regions and cells
Advance EKM and encryption-by-design
Work with Security and platform teams to integrate Enterprise Key Management (EKM) into data workflows, including file- and record-level encryption and safe key handling in Spark and storage systems
Improve data access, auditability, and residency
Build primitives for fine-grained access control, auditing, and data residency so customers can see who accessed what, where, and under which guarantees
Drive reliability and observability
Raise the operational bar for our data stack: improve on-call experience, debugging, and alerting for data jobs and services
Optimize large-scale performance and cost
Tackle performance and cost challenges across Kafka, Spark, and storage for very large workspaces (20k+ users, multi-cell deployments), including cluster migrations and workload tuning
Enable ML and search workflows
Build infrastructure to support training and inference pipelines, ranking workflows, and embedding infrastructure on top of the shared data platform
Shape the platform roadmap
Contribute to design docs and evaluations that influence our long-term platform direction and vendor choices
Requirements:
5+ years building and operating data platforms or large-scale data infrastructure for SaaS or similar environments
Strong skills in at least one of Python, Java, or Scala, and comfort working with SQL for analytics and data modeling
Hands-on experience with Spark or similar distributed processing systems, including debugging and performance tuning
Experience with Kafka or equivalent streaming systems, and familiarity with CDC/ingestion patterns (e.g., Debezium, Fivetran, custom connectors)
Experience with data lakes and table formats (Iceberg, Hudi, or Delta) and/or data catalogs and schema evolution
Practical understanding of access control, encryption at rest/in transit, and auditing as they apply to data platforms
Experience with at least one major cloud provider (AWS, GCP, or Azure) and managed data/compute services (e.g., EMR, Dataproc, Kubernetes-based compute)
Comfortable owning services and pipelines in production, including on-call, incident response, and reliability improvements
Curious and willing to adopt AI tools to work smarter and deliver better results
Nice to have:
Experience working directly with enterprise customers or on features like data residency, EKM, or compliance-driven auditing
Prior work on Databricks, Unity Catalog, Lake Formation, or similar catalog/governance systems
Background implementing multi-region / multi-cell data architectures
Experience building ML training/eval workflows or model/feature stores on top of a shared data platform
Familiarity with vector databases or search infrastructure, and how they integrate with upstream data systems
Experience designing or improving observability for data platforms (e.g., Honeycomb, OpenTelemetry, metrics/trace-heavy debugging)
What we offer:
Highly competitive cash compensation, equity, and benefits