Would you like to help build the fastest generative-model inference in the world? Join the Cerebras Inference Team to develop the unique combination of software and hardware that delivers the best inference performance on the market while running the largest models available. The Cerebras wafer-scale inference platform runs generative models at unprecedented speed thanks to a unique hardware architecture that provides extremely fast access to local memory, an ultra-fast interconnect, and an enormous amount of available compute.

You will be part of the team that works with the latest open and closed generative AI models to optimize them for the Cerebras inference platform. Your responsibilities will include working on the model representation, optimization, and compilation stack to produce the best results on current and future Cerebras platforms.
Job Responsibilities:
Analyze new models from the generative AI field and understand their impact on the compilation stack
Develop and maintain a model definition framework of building blocks that represent large language models in PyTorch and Cerebras dialects, ready to be deployed on Cerebras hardware
Develop and maintain the frontend compiler infrastructure that ingests PyTorch models and produces an intermediate representation (IR)
Extend and optimize PyTorch FX / TorchScript / TorchDynamo-based tooling for graph capture, transformation, and analysis (see the FX sketch after this list)
Collaborate with other teams throughout feature implementation
Research new methods of model optimization to improve Cerebras inference
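To give a concrete flavor of the FX-based graph work above, here is a minimal sketch (not Cerebras code) of capturing a PyTorch module as an FX graph and rewriting one of its operations. The TinyBlock module and the relu-to-gelu swap are illustrative assumptions, not part of the actual stack.

```python
import torch
import torch.fx as fx

class TinyBlock(torch.nn.Module):
    """A stand-in module; any traceable nn.Module works the same way."""
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 8)

    def forward(self, x):
        return torch.relu(self.linear(x)) + x

# Capture the module's forward() as a graph of nodes.
graph_module = fx.symbolic_trace(TinyBlock())

# Walk the captured graph and, as a toy transformation, swap relu for gelu.
for node in graph_module.graph.nodes:
    if node.op == "call_function" and node.target is torch.relu:
        node.target = torch.nn.functional.gelu

graph_module.graph.lint()   # sanity-check the rewritten graph
graph_module.recompile()    # regenerate forward() from the modified graph
print(graph_module.code)    # inspect the generated Python
```

A production frontend does far more (lowering to an IR, shape analysis, dialect conversion), but the capture-transform-recompile loop has the same basic shape.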
Requirements:
Degree in Engineering, Computer Science, or a related field, or equivalent experience and evidence of exceptional ability
Strong Python programming skills and in-depth experience with PyTorch internals (e.g., TorchScript, FX, or Dynamo)
Solid understanding of computational graphs, tensor operations, and model tracing
Experience building or extending compilers, interpreters, or ML graph optimization frameworks
Experience working with PyTorch and the HuggingFace Transformers library
Knowledge of and experience working with Large Language Models (Transformer architecture variations, the generation cycle, etc.; see the generation-loop sketch after this list)
Strong C++ programming skills
Knowledge of MLIR-based compilation stacks
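As a point of reference for the LLM requirement above, here is a minimal sketch of the autoregressive generation cycle using the HuggingFace Transformers library. The choice of gpt2, the prompt, and the 20-token horizon are arbitrary assumptions for illustration; inference platforms optimize exactly this token-by-token loop and its KV cache.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("Wafer-scale inference is", return_tensors="pt").input_ids
past_key_values = None  # KV cache: avoids re-running attention over the full prefix each step

with torch.no_grad():
    for _ in range(20):
        # Once a KV cache exists, only the newest token needs to be fed.
        step_ids = input_ids if past_key_values is None else input_ids[:, -1:]
        out = model(step_ids, past_key_values=past_key_values, use_cache=True)
        past_key_values = out.past_key_values
        next_token = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy decoding
        input_ids = torch.cat([input_ids, next_token], dim=-1)

print(tokenizer.decode(input_ids[0]))
```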
Nice to have:
Prior experience contributing to PyTorch, TensorFlow XLA, TVM, ONNX RT, or similar compiler stacks
Knowledge of hardware accelerators, quantization, or runtime scheduling
Experience with multi-target inference compilation (e.g., CPU, GPU, custom ASICs)
Understanding of numerical precision trade-offs and operator lowering
Contributions to open-source ML compiler projects
What we offer:
Build a breakthrough AI platform beyond the constraints of the GPU
Publish and open-source your cutting-edge AI research
Work on one of the fastest AI supercomputers in the world
Enjoy job stability with startup vitality
Enjoy our simple, non-corporate work culture that respects individual beliefs