Our team makes PyTorch run faster and use resources more efficiently, without sacrificing its flexibility and ease of use.
Team scope:
Advance PyTorch 2.0 technologies that bring torch.compile() to the heart of PyTorch
Advance PyTorch Distributed through torch.compile()
Improve PyTorch out-of-the-box performance on GPU, CPU, and accelerators
Vertical performance optimization of models for training and inference
Our team at Meta AI offers internships that are twelve (12) to twenty-four (24) weeks long, with various start dates throughout the year.
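As illustrative context for the torch.compile() work described above, here is a minimal sketch (not part of the posting); the toy model, input shapes, and printed output are assumptions for demonstration only.

```python
# Minimal torch.compile() sketch (PyTorch 2.x); the model and shapes are illustrative.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(64, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
)

# torch.compile() captures the model with TorchDynamo and lowers it to
# optimized kernels through TorchInductor.
compiled_model = torch.compile(model)

x = torch.randn(32, 64)
out = compiled_model(x)  # first call triggers compilation; later calls reuse the compiled code
print(out.shape)         # torch.Size([32, 10])
```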
Job Responsibilities:
Develop new techniques in TorchDynamo, TorchInductor, PyTorch core, and PyTorch Distributed
Explore the intersection of the PyTorch compiler and PyTorch Distributed (see the sketch after this list)
Optimize Generative AI models across the stack (pre-training, fine-tuning, and inference)
Improve general PyTorch performance
Conduct cutting-edge research on ML compiler and ML distributed technologies
Collaborate with users of PyTorch to enable new use cases for the framework both inside and outside Meta
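The following is a minimal, non-authoritative sketch of combining torch.compile() with PyTorch Distributed via DistributedDataParallel, as referenced above; the toy linear model, gloo backend, and optimizer settings are assumptions, and the script is assumed to be launched with torchrun.

```python
# Illustrative sketch (assumption, not from the posting): torch.compile() applied to a
# DistributedDataParallel-wrapped model. Assumes launch via torchrun, which sets the
# environment variables needed by init_process_group().
import torch
import torch.distributed as dist
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="gloo")  # "nccl" is typical on GPU hosts

    model = torch.nn.Linear(64, 10)
    ddp_model = DDP(model)

    # Compiling the DDP-wrapped module lets TorchDynamo/TorchInductor optimize the
    # per-rank computation while DDP continues to handle gradient synchronization.
    compiled = torch.compile(ddp_model)

    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    x, y = torch.randn(8, 64), torch.randn(8, 10)

    opt.zero_grad()
    loss = F.mse_loss(compiled(x), y)
    loss.backward()
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

A run command such as `torchrun --nproc_per_node=2 train_sketch.py` would exercise this on two processes (the file name is hypothetical).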
Requirements:
Currently has, or is in the process of obtaining, a PhD degree in Computer Science, Computer Engineering, a relevant technical field, or equivalent practical experience
Experience in ML compilers, distributed training, ML systems, or similar areas
Proficient in Python or CUDA programming
Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment
Nice to have:
Experience working on other ML compiler stacks, especially the PT2 stack or Triton
Experience doing performance optimization on machine learning models
Intent to return to degree program after the completion of the internship/co-op
Proven track record of achieving significant results, as demonstrated by grants, fellowships, patents, or first-authored publications at leading workshops or conferences such as NeurIPS, MLSys, ASPLOS, PLDI, CGO, PACT, ICML, or similar
Experience working and communicating cross-functionally in a team environment