The Performance Optimization team at Luma is dedicated to maximizing the efficiency and performance of our AI models. Working closely with both research and engineering teams, this group ensures that our cutting-edge multimodal models can be trained efficiently and deployed at scale while maintaining the highest quality standards.
Job Responsibilities:
Profile and optimize GPU/CPU/Accelerator code for maximum utilization and minimal latency
Write high-performance PyTorch, Triton, and CUDA code, deferring to custom PyTorch operations where necessary
Develop fused kernels that leverage tensor cores and other modern hardware features for optimal utilization across different hardware platforms (a minimal Triton sketch follows this list)
Optimize model architectures and implementations for distributed multi-node production deployment
Build performance monitoring and analysis tools and automation
Research and implement cutting-edge optimization techniques for transformer models
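
To illustrate the kernel-fusion work described above, here is a minimal Triton sketch of a fused add + ReLU kernel. It is an illustrative example under assumed names and shapes (fused_add_relu is hypothetical), not Luma's actual code; the point is that fusing the two elementwise ops keeps the intermediate sum in registers instead of round-tripping through global memory.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def fused_add_relu_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance processes one contiguous BLOCK_SIZE chunk.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    # Fusion: the add and the ReLU happen in registers, so the
    # intermediate sum is never written back to global memory.
    tl.store(out_ptr + offsets, tl.maximum(x + y, 0.0), mask=mask)

def fused_add_relu(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Hypothetical host-side wrapper, for illustration only.
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)
    fused_add_relu_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```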
Requirements:
Expert-level proficiency in Triton/CUDA programming and GPU optimization
Strong PyTorch skills
Experience with PyTorch kernel development and custom operations
Proficiency with profiling tools (NVIDIA Nsight, the PyTorch profiler, custom tooling); see the profiling sketch after this list
Deep understanding of transformer architectures and attention mechanisms
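
As a small illustration of the profiling workflow this role calls for, the sketch below uses the PyTorch profiler to rank operators by GPU time for a single forward pass. The model and input shapes are arbitrary placeholders, not details of the actual work.

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Placeholder workload: a single transformer encoder layer.
model = torch.nn.TransformerEncoderLayer(d_model=512, nhead=8).cuda()
x = torch.randn(128, 32, 512, device="cuda")  # (seq, batch, d_model)

# Record both CPU-side launch overhead and CUDA kernel time.
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    with torch.no_grad():
        model(x)

# Sort by total GPU time to surface the most expensive kernels first.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```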
Nice to have:
Experience with compilers/exporters such as torch.compile, TensorRT, ONNX, and XLA (a torch.compile sketch follows this list)
Experience optimizing inference workloads for latency and throughput
Experience with Triton compiler and kernel fusion techniques
Knowledge of warp-level intrinsics and advanced CUDA optimization
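
For the torch.compile item above, a minimal sketch of compiling a small MLP-style block; the function, weights, and shapes are illustrative assumptions. On CUDA devices the default TorchInductor backend lowers elementwise ops like the GELU into fused Triton kernels, which ties the compiler and kernel-fusion items together.

```python
import torch

def mlp_block(x: torch.Tensor, w1: torch.Tensor, w2: torch.Tensor) -> torch.Tensor:
    # Two matmuls with a GELU in between; a common fusion target.
    return torch.nn.functional.gelu(x @ w1) @ w2

# torch.compile traces the function and lowers it through TorchInductor.
compiled_block = torch.compile(mlp_block)

x = torch.randn(64, 512, device="cuda")
w1 = torch.randn(512, 2048, device="cuda")
w2 = torch.randn(2048, 512, device="cuda")
out = compiled_block(x, w1, w2)  # first call compiles; later calls reuse the kernels
```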