The Inference team develops and enhances the inference stack for serving the H models that power our agent technology. The team focuses on optimizing hardware utilization to achieve high throughput, low latency, and cost efficiency, delivering a seamless user experience.
Job Responsibilities:
Develop scalable, low-latency, and cost-effective inference pipelines
Optimize model performance (memory usage, throughput, and latency) using advanced techniques such as distributed computing, model compression, quantization, and caching mechanisms
Develop specialized GPU kernels for performance-critical operations such as attention mechanisms and matrix multiplications
Collaborate with H research teams on model architectures to improve inference efficiency
Review state-of-the-art papers on improving memory usage, throughput, and latency (e.g., FlashAttention, PagedAttention, continuous batching)
Prioritize and implement state-of-the-art inference techniques
Requirements:
MS or PhD in Computer Science, Machine Learning, or a related field
Proficient in at least one of the following programming languages: Python, Rust, or C/C++
Experience with GPU programming (e.g., CUDA, OpenAI Triton, Metal)
Experience in model compression and quantization techniques
Collaborative mindset, thriving in dynamic, multidisciplinary teams
Strong communication and presentation skills
Eager to explore new challenges
Nice to have:
Experience with LLM serving frameworks such as vLLM, TensorRT-LLM, SGLang, llama.cpp, etc.
Experience with CUDA kernel programming and NCCL
Experience with deep learning inference frameworks (PyTorch/ExecuTorch, ONNX Runtime, GGML, etc.)
What we offer:
Join the exciting journey of shaping the future of AI, and be part of the early days of one of the hottest AI startups
Work with a fun, dynamic, and multicultural team, alongside world-class AI talent in a highly collaborative environment
Enjoy a competitive salary
Unlock opportunities for professional growth, continuous learning, and career development