Meta is seeking Research Scientist Interns to join the multimodal pretraining team in the Meta Superintelligence org. We are committed to advancing the field of generative AI by making fundamental advances in technologies that help us interact with and understand our world. We’re looking for candidates who want to push the frontiers of multimodal pretraining and have experience in novel language modeling architectures, vision-language modeling, multimodal generation, and neural scaling laws. Our team offers internships lasting twelve (12) to twenty-four (24) weeks, with various start dates throughout the year.
Job Responsibilities:
Perform research to advance the frontiers of multimodal pretraining (images, video, text, audio, and other modalities) and develop the next generation of multimodal architectures
Collaborate with researchers and cross-functional partners, including communicating research plans, progress, and results
Publish research results and contribute to research that can be applied to Meta product development
Requirements:
Currently has or is in the process of obtaining a Ph.D. degree in Computer Science, Machine Learning, Computer Vision, Artificial Intelligence, or relevant technical field
Past projects/publications in the general domain of neural scaling laws, model architectures, image/text modeling, vision-language modeling
Must obtain work authorization in the country of employment at the time of hire and maintain ongoing work authorization during employment
Experience with PyTorch, Triton, or other related frameworks and programming languages
Experience building systems based on machine learning and/or deep learning methods
Nice to have:
Intent to return to a degree program after the completion of the internship/co-op
Proven track record of achieving significant results, as demonstrated by grants, fellowships, patents, and publications at leading workshops or conferences such as NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, and ACL
Experience with generative AI, including transformers, diffusion models, and VLMs
Background in advancing AI techniques for foundation model training, including core contributions to open source libraries and frameworks
Publications or experience in machine learning, AI, computer vision, NLP, optimization, computer science, statistics, applied mathematics, or data science
Experience solving analytical problems using quantitative approaches
Experience setting up ML experiments and analyzing their results
Experience manipulating and analyzing complex, large-scale, high-dimensional data from varying sources
Experience in utilizing theoretical and empirical research to solve problems
Experience working and communicating cross-functionally in a team environment