Scale’s ML platform (RLXF) team builds our internal distributed framework for large language model training and inference. The platform powers MLEs, researchers, data scientists, and operators with fast, automated training and evaluation of LLMs, as well as evaluation of data quality. Scale is uniquely positioned at the heart of the AI field as an indispensable provider of training and evaluation data and end-to-end solutions for the ML lifecycle. You will work closely with Scale’s ML teams and researchers to build the foundational platform that supports all our ML research and development, and you will build and optimize that platform to enable our next generation of LLM training, inference, and data curation.
Job Responsibilities:
Build, profile, and optimize our training and inference framework
Collaborate with ML teams to accelerate their research and development and enable them to develop the next generation of models and data curation
Research and integrate state-of-the-art technologies to optimize our ML system
Requirements:
Strong excitement about system optimization
Experience with multi-node LLM training and inference
Experience with developing large-scale distributed ML systems
Strong software engineering skills; proficiency with frameworks and tools such as CUDA, PyTorch, Transformers, FlashAttention, etc.
Strong written and verbal communication skills and the ability to operate in a cross-functional team environment
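To make the multi-node training requirement above concrete, here is a minimal conceptual sketch of all-reduce gradient averaging, the synchronization step at the core of data-parallel distributed training. This is plain Python for illustration only (real frameworks use NCCL or torch.distributed collectives); the worker count and gradient values are assumptions, not anything from the posting.

```python
# Conceptual sketch of data-parallel gradient synchronization.
# Each "worker" has computed gradients on its own shard of a batch;
# an all-reduce averages them so every worker applies the same update.

def all_reduce_mean(worker_grads):
    """Average per-parameter gradients across workers (all-reduce mean)."""
    n_workers = len(worker_grads)
    n_params = len(worker_grads[0])
    return [
        sum(grads[i] for grads in worker_grads) / n_workers
        for i in range(n_params)
    ]

# Illustrative gradients from two hypothetical workers.
worker_grads = [
    [0.2, -0.4, 1.0],  # worker 0
    [0.4, -0.2, 0.6],  # worker 1
]
synced = all_reduce_mean(worker_grads)
print(synced)
```

In production systems this averaging is fused and overlapped with the backward pass (e.g., via ring all-reduce) to hide communication latency, which is exactly the kind of profiling and optimization work this role involves.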
Nice to have:
Demonstrated expertise in post-training methods and/or next-generation use cases for large language models, including instruction tuning, RLHF, tool use, reasoning, agents, and multimodality