Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.

This Sr. TPM role owns the site and data center operations programs supporting Cerebras’ AI Cloud and customer deployments. The role is based at our Sunnyvale HQ and works closely with Hardware Engineering, Inference Engineering, and Operations leadership to ensure Cerebras systems are reliably deployed, operated, and scaled. It is a highly technical, execution-focused TPM role with a strong emphasis on operational readiness, cross-functional coordination, and metrics/KPIs.
Job Responsibilities:
Own end-to-end technical programs for data center and site operations
Act as single-threaded owner across:
  Hardware & Systems Engineering
  AI Cloud Infrastructure & Operations
  Network & Storage Engineering
  Facilities, power, cooling, and colo partners
Drive site readiness for Cerebras Wafer-Scale Engine systems
Partner on installation, commissioning, change management, and break/fix workflows
Lead incident reviews and postmortems, ensuring corrective actions are closed
Define and own operational metrics and KPIs (a worked example follows this list), including:
  Availability and reliability
  Incident rate, severity, MTTR / MTTD
  Deployment readiness and time-to-service
  Capacity and operational risk
Build executive-level dashboards and reporting
Establish program governance, risk tracking, and RACI clarity
Present program status, metrics, and operational risks to senior leadership
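For context on the metrics named above, here is a minimal, hypothetical sketch of how availability and MTTR might be derived from incident records. The data, field layout, and reporting window are illustrative assumptions only, not a description of Cerebras’ actual tooling or dashboards.

```python
from datetime import datetime, timedelta

# Hypothetical incident log: (detected_at, resolved_at) pairs for one
# reporting window. Values are illustrative, not real operational data.
incidents = [
    (datetime(2024, 1, 3, 2, 15), datetime(2024, 1, 3, 4, 45)),
    (datetime(2024, 1, 12, 11, 0), datetime(2024, 1, 12, 11, 50)),
    (datetime(2024, 1, 27, 19, 30), datetime(2024, 1, 28, 1, 0)),
]

window = timedelta(days=31)  # length of the reporting window (January)

# Total downtime is the sum of each incident's detect-to-resolve span.
downtime = sum((resolved - detected for detected, resolved in incidents), timedelta())

# Availability: fraction of the window not spent in an incident.
availability = 1 - downtime / window

# MTTR: mean time to repair, averaged over incidents in the window.
mttr = downtime / len(incidents)

print(f"Availability: {availability:.4%}")  # ~98.81% for this sample data
print(f"MTTR: {mttr}")                      # ~2:56:40 for this sample data
```

In practice, figures like these would be pulled from the incident management system and feed the executive-level dashboards and reporting described above.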
Requirements:
8+ years in Technical Program Management, Infrastructure Ops, or Data Center Ops
Experience leading large, cross-functional infrastructure programs
Strong understanding of:
  Data center power and cooling fundamentals
  Network and storage basics
  Hardware-centric platforms
Proven ability to define and operationalize metrics
Strong written and executive-level communication skills
Nice to have:
Experience with AI/ML, HPC, or accelerator-based infrastructure
Experience with high-density and/or liquid-cooled data centers
Experience working with colocation providers and facilities teams
Background in incident management, reliability, or service operations
What we offer:
Build a breakthrough AI platform beyond the constraints of the GPU
Publish and open-source your cutting-edge AI research
Work on one of the fastest AI supercomputers in the world
Enjoy job stability with startup vitality
A simple, non-corporate work culture that respects individual beliefs