In this role, as part of our team, you will have a unique opportunity to work with industry-leading clients, leveraging the latest hardware capabilities to drive innovation and optimize AI workload performance across Radeon GPUs. You will be among the first to integrate new hardware with the latest applications, libraries, frameworks, and SDKs, solving complex challenges and pushing the boundaries of AI technology. Join us in our mission to enable and optimize ROCm on Radeon solutions and the ecosystem.
Job Responsibilities:
Design, develop, and optimize AMD ROCm on Radeon solutions for AI inference
Collaborate with cross-functional teams to integrate and validate new features and enhancements
Conduct performance analysis and optimization to ensure high efficiency and scalability of developed solutions
Stay updated with the latest advancements in AI, machine learning, and GPU programming to drive innovation within the team
Mentor junior engineers and contribute to the continuous improvement of development processes and best practices
Requirements:
20+ years of experience in software development, systems engineering, or customer/partner-facing technical roles
Strong proficiency in C++ and GPU programming languages such as CUDA, HIP, or OpenCL
Strong familiarity with computer architectures (x86 or ARM) and related peripheral devices such as GPUs, encoders, NICs, and FPGAs
Experience with AI training and inference frameworks and tools
Proven track record of developing high-performance, scalable software solutions
Excellent problem-solving skills and the ability to work effectively in a team environment
Bachelor's, Master's, or Ph.D. degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent
Nice to have:
Familiarity with the full AI inference stack (model -> inference engine -> runtime -> GPU -> server -> cluster)