Together AI is building the Inference Platform that powers the world's most advanced generative AI models. Your role will be a critical bridge between cutting-edge research and real-world applications, focused on translating our internal model-training research into production-ready deployments for our customers. This involves a deep commitment to data-centric development, meticulous hyperparameter tuning, and rigorous checkpoint evaluation before models ever reach production. You will work to understand customer-specific needs and fine-tune models on our internal data recipe and customers' proprietary data, transforming general-purpose models into highly performant, specialized tools that solve real business problems. You will not be training foundation models from scratch; instead, you will create highly efficient, specialized models on dedicated GPU clusters.
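To make that workflow concrete, here is a minimal sketch of a hyperparameter sweep gated by checkpoint evaluation. The `fine_tune` and `evaluate` helpers are hypothetical placeholders for illustration, not Together AI tooling:

```python
# Illustrative only: a minimal grid search over fine-tuning hyperparameters
# with a held-out evaluation gate before a checkpoint is promoted.
import itertools

def fine_tune(lr: float, epochs: int) -> dict:
    # Placeholder: a real run would train on customer data and return a
    # checkpoint; here we fabricate a score purely for demonstration.
    return {"lr": lr, "epochs": epochs, "eval_score": 1.0 - abs(lr - 2e-5) * 1e4}

def evaluate(checkpoint: dict, threshold: float = 0.5) -> bool:
    # Gate: only checkpoints that clear the quality bar ship to production.
    return checkpoint["eval_score"] >= threshold

grid = itertools.product([1e-5, 2e-5, 5e-5], [1, 2, 3])
candidates = [fine_tune(lr, ep) for lr, ep in grid]
best = max(candidates, key=lambda c: c["eval_score"])
if evaluate(best):
    print(f"promote checkpoint: lr={best['lr']}, epochs={best['epochs']}")
```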
Job Responsibilities:
Design and iterate on novel speculator algorithms, combining architectural innovations with carefully curated data to push the frontier of accuracy–efficiency tradeoffs (see the sketch after this list)
Be the critical link between raw data and a production-ready model, seeing your work directly impact our customers' success
Work in a fast-paced, high-impact role at the cutting edge of generative AI
Collaborate with a team of experts dedicated to solving real-world, high-performance challenges
Collaborate directly with customers to understand their needs, and work closely with our core inference and Applied ML research teams to integrate your work into the production platform
Thrive in a culture of deep technical ownership where you are empowered to take on and solve challenging problems
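For flavor, the sketch below shows the greedy-verification step at the heart of speculative decoding, the family of techniques that speculator work builds on. `draft_next` and `target_logits` are hypothetical model stubs; production speculators are considerably more sophisticated:

```python
# Illustrative only: greedy verification for speculative decoding.
import torch

def speculative_step(prefix: list[int], draft_next, target_logits, k: int = 4) -> list[int]:
    # 1. The cheap draft model proposes k candidate tokens autoregressively.
    proposal = list(prefix)
    for _ in range(k):
        proposal.append(draft_next(proposal))
    # 2. The target model scores every position in a single forward pass.
    logits = target_logits(proposal)           # shape: [len(proposal), vocab]
    # 3. Accept the longest prefix of draft tokens the target agrees with,
    #    then take one corrected token from the target on disagreement.
    out = list(prefix)
    for i in range(len(prefix), len(proposal)):
        target_choice = int(torch.argmax(logits[i - 1]))
        out.append(target_choice)              # matches draft, or corrects it
        if target_choice != proposal[i]:
            break                              # disagreement: stop here
    return out
```

Each call advances the sequence by between one and k+1 tokens for roughly one target-model forward pass, which is where the efficiency gain comes from.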
Requirements:
A genuine love for data curation and processing, with meticulous attention to detail
Demonstrated ability to perform effective hyperparameter searches and understand the trade-offs involved in tuning models for specific tasks
Experience working with and building on top of existing training codebases
Strong attention to detail in evaluating model checkpoints to ensure they meet strict quality, performance, and reliability standards
Experience with Python and PyTorch
Familiarity with SLURM and/or Kubernetes clusters and experience submitting and managing jobs in a high-performance computing environment
Familiarity with modern LLMs and generative models
Basic understanding of distributed training frameworks (e.g., FSDP, DeepSpeed); a minimal sketch follows this list
Bachelor’s, Master’s, or Ph.D. degree in Computer Science, Computer Engineering, or a related field, or equivalent practical experience
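As a rough indication of the day-to-day toolchain, here is a minimal PyTorch FSDP fine-tuning skeleton. The model, data, and objective are stand-ins, and the launch details assume a typical `torchrun` setup on a SLURM-managed cluster rather than Together AI's actual infrastructure:

```python
# Illustrative only: a minimal FSDP fine-tuning skeleton.
# In practice a job like this is submitted to a GPU cluster (e.g. via
# sbatch/srun on SLURM), with torchrun managing the process group.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main() -> None:
    dist.init_process_group("nccl")              # one process per GPU
    rank = dist.get_rank()
    torch.cuda.set_device(rank % torch.cuda.device_count())

    model = torch.nn.Linear(4096, 4096).cuda()   # stand-in for an LLM
    model = FSDP(model)                          # shard params, grads, optimizer state
    optim = torch.optim.AdamW(model.parameters(), lr=2e-5)  # create after wrapping

    for _ in range(10):                          # stand-in for a dataloader
        batch = torch.randn(8, 4096, device="cuda")
        loss = model(batch).pow(2).mean()        # dummy objective
        loss.backward()
        optim.step()
        optim.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch: torchrun --nproc_per_node=8 train.py
```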