As a core member of the team, you will play a pivotal role in optimizing and developing deep learning frameworks for AMD GPUs. Your expertise will be critical in improving GPU kernels, deep learning models, and training/inference performance across multi-GPU and multi-node systems. You will engage with both internal GPU library teams and open-source maintainers to ensure seamless integration of optimizations, using cutting-edge compiler technologies and sound engineering principles to drive continuous improvement.
Job Responsibilities:
Build and optimize end-to-end distributed inference (e.g., P/D disaggregation and Large-EP) and RL solutions on mainstream frameworks such as vLLM and SGLang
Collaborate with internal GPU library teams to analyze and improve training and inference performance on AMD GPUs
Engage with framework maintainers to ensure code changes align with project requirements and are integrated upstream
Optimize deep learning performance on both scale-up (multi-GPU) and scale-out (multi-node) systems
Leverage advanced compiler technologies to improve deep learning performance
Enhance the full pipeline, including integrating graph compilers
Apply sound engineering principles to ensure robust, maintainable solutions
Requirements:
Bachelor's and/or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or a related field
3+ years of professional experience in technical software development, with a focus on GPU optimization, performance engineering, and framework development