Cerebras Systems builds the world's largest AI chip, 56 times larger than a GPU. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach lets Cerebras deliver industry-leading training and inference speeds and empowers machine learning users to run large-scale ML applications effortlessly, without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of compute, transforming key workloads with ultra-high-speed inference.

Thanks to its groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, more than 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence through additional agentic computation.

As a Software Engineer in Test on the ML API Features team, you will test AI/ML models for accuracy, fairness, and performance. You will play a pivotal role in bringing together and delivering all software and hardware components of Cerebras API features, focusing on feature integration, software component quality, and pre-deployment/production validation of the Cerebras inference solution. In this role, you will shape testing best practices and debugging methodology, communicate effectively across teams, and advocate for world-class products.
Job Responsibilities:
Understand new features end-to-end, and develop tests and tools to ensure quality
Contribute to industry standard benchmarks
Drive automation to improve internal efficiency
Understand the trade-offs between coverage and resource requirements
Work in a highly agile environment where priorities change frequently
Communicate effectively across teams and time zones
Requirements:
2+ years of relevant industry experience in software integration, development, or quality
Strong automation and programming skills in one or more languages such as Python, C++, or Go
Experience in testing compute/machine learning/networking/storage systems within a large-scale enterprise environment
Experience debugging issues across distributed, scale-out deployments
Experience working effectively across teams, including product development, product management, customer operations, and field teams
Excellent verbal and written communication skills
Strong organizational skills, teamwork, and can-do attitude
Experience working with geographically dispersed teams across time zones
Nice to have:
Experience working with ML workloads such as LLM or multimodal training and inference
Experience with hardware architecture, performance optimizations, compilers and ML frameworks
Experience working with distributed systems, cloud and security
Experience with microservice deployment, debugging, and orchestration
What we offer:
Build a breakthrough AI platform beyond the constraints of the GPU
Publish and open-source your cutting-edge AI research
Work on one of the fastest AI supercomputers in the world
Enjoy job stability with startup vitality
A simple, non-corporate work culture that respects individual beliefs