Handshake AI builds the data engines that power the next generation of large language models. Our research team works at the intersection of cutting-edge model post-training, rigorous evaluation, and data efficiency. Join us for a focused Summer 2026 internship where your work can ship directly into our production stack and become a publishable research contribution. The internship starts between May and June 2026.
Research areas:
LLM Evaluation: new multilingual, long-horizon, or domain-specific benchmarks; automatic vs. human preference studies; robustness diagnostics
Data Efficiency: active-learning loops, data value estimation, synthetic data generation, and low-resource fine-tuning strategies (see the sketch after this list)
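To give a flavor of the data-efficiency work, here is a minimal sketch of one uncertainty-based active-learning selection step: score an unlabeled pool with a model and route the most uncertain examples to annotation. It assumes a simple classifier-style scorer; the names (`select_for_annotation`, `pool_loader`, `budget`) are illustrative only and not part of our stack.

```python
import torch


def select_for_annotation(model, pool_loader, budget, device="cpu"):
    """Pick the `budget` unlabeled examples the model is least certain about."""
    model.eval()
    scores = []
    with torch.no_grad():
        for batch in pool_loader:
            # Assumes the model returns one logit vector per example
            # (a classifier-style scorer, kept simple for illustration).
            logits = model(batch["input_ids"].to(device))
            probs = torch.softmax(logits, dim=-1)
            # Predictive entropy: higher means the model is more uncertain.
            entropy = -(probs * torch.log(probs + 1e-9)).sum(dim=-1)
            scores.append(entropy.cpu())
    scores = torch.cat(scores)
    k = min(budget, scores.numel())
    # Route the highest-entropy examples to human annotation.
    return torch.topk(scores, k=k).indices.tolist()
```

The selected indices would then be labeled and folded into the next fine-tuning round, which is the basic shape of the active-learning loop described above.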
Each intern owns a scoped research project, mentored by a senior scientist, with the explicit goal of an arXiv-ready manuscript or a top-tier conference submission.
Requirements:
Current PhD student in CS, ML, NLP, or related field
Publication track record at top venues (NeurIPS, ICML, ACL, EMNLP, ICLR, etc.)
Hands-on experience training and experimenting with LLMs (e.g., PyTorch, JAX, DeepSpeed, distributed training stacks)
Strong empirical rigor and a passion for open-ended AI questions
Nice to have:
Prior work on RLHF, evaluation tooling, or data selection methods
Contributions to open-source LLM frameworks
Public speaking or teaching experience (we often host internal reading groups)