Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to run large-scale ML applications effortlessly, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of compute, transforming key workloads with ultra-high-speed inference. Thanks to this groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
Job Responsibilities:
Lead the design and implementation of system-level debugging, validation, and observability platforms
Develop automated systems for collecting and analyzing numerical and execution anomalies
Create visualization and analysis tools to enable efficient root-cause investigation
Build frameworks for failure classification, regression detection, and anomaly monitoring
Extend compilers, runtimes, and programming interfaces to support advanced profiling and instrumentation
Improve system bring-up, low-level debug, and validation workflows
Partner cross-functionally with compiler, hardware, firmware, runtime, and infrastructure teams
Establish best practices for debuggability, reliability, and operational excellence
Lead high-impact initiatives
Support incident response and drive long-term corrective actions
Requirements:
Strong proficiency in C++ and Python, with a track record of building reliable, high-performance systems and tooling
Demonstrated experience debugging complex hardware/software systems and driving issues to root cause
Experience analyzing system-level data structures, execution graphs, or dependency networks for diagnostics and validation
Proven ability to design and build intuitive visualization and analysis tools for complex technical data
Experience with compiler internals, custom hardware interfaces, or low-level protocol design
Strong written and verbal communication skills, with the ability to explain technical concepts to diverse stakeholders
Ability to work independently and lead complex technical projects end-to-end
Nice to have:
Familiarity with machine learning training and inference pipelines, especially distributed training and large-model scaling
Prior work on high-performance clusters, HPC systems, or custom hardware/software co-design
What we offer:
Build a breakthrough AI platform beyond the constraints of the GPU
Publish and open-source your cutting-edge AI research
Work on one of the fastest AI supercomputers in the world
Enjoy job stability with startup vitality
Thrive in a simple, non-corporate work culture that respects individual beliefs