Tavus is a research lab pioneering human computing. We’re building AI Humans: a new interface that closes the gap between people and machines, free from the friction of today’s systems. Our real-time human simulation models let machines see, hear, respond, and even look real—enabling meaningful, face-to-face conversations. AI Humans combine the emotional intelligence of humans with the reach and reliability of machines, making them capable, trusted agents available 24/7, in every language, on our terms. Imagine a therapist anyone can afford. A personal trainer that adapts to your schedule. A fleet of medical assistants that can give every patient the attention they need. With Tavus, individuals, enterprises, and developers can all build AI Humans to connect, understand, and act with empathy at scale. We’re a Series A company backed by world-class investors including Sequoia Capital, Y Combinator, and Scale Venture Partners. Be part of shaping a future where humans and machines truly understand each other.
Job Responsibilities:
Conduct research on large language modeling and adaptation for Conversational Avatars (e.g. Neural Avatars, Talking-Heads)
Develop methods to model both verbal and non-verbal aspects of conversation, adapting and controlling avatar behavior in real time
Experiment with fine-tuning, adaptation, and conditioning techniques to make LLMs more expressive, controllable, and task-specific (one such technique is sketched after this list)
Partner with the Applied ML team to take research from prototype to production
Stay up to date with cutting-edge advancements — and help define what comes next
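To make the adaptation work above concrete, here is a minimal sketch of one common technique, low-rank adaptation (LoRA) of a frozen linear layer. This is a generic illustration, not Tavus code: the class name, rank, and scaling factor are arbitrary choices, and a production setup would typically apply this across a model's attention projections.

```python
# Minimal LoRA sketch: wrap a frozen nn.Linear with a trainable
# low-rank update, y = W x + (alpha / r) * B(A x). Names and
# hyperparameters here are hypothetical, chosen for illustration.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze pretrained weights
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # update starts at zero
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Usage: swap a projection inside a pretrained block, then train only
# the adapter parameters on the downstream (e.g. conversational) task.
proj = nn.Linear(1024, 1024)
adapted = LoRALinear(proj, r=8)
trainable = [p for p in adapted.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))  # 2 * 8 * 1024 = 16,384 params
```

Because only the two low-rank matrices train, the adapter touches a tiny fraction of the model's parameters, which is what makes this style of conditioning cheap enough to iterate on quickly.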
Requirements:
A PhD (or near completion) in a relevant field, or equivalent research experience
Hands-on experience with LLMs or VLMs and a strong foundation in generative language models
Experience in fine-tuning/adapting LLMs for control, conditioning, or downstream tasks
Solid background in deep learning and familiarity with foundation model methods
Strong PyTorch skills and comfort building deep learning pipelines (see the pipeline sketch after this list)
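For a sense of the baseline fluency this requirement implies, here is a bare-bones PyTorch pipeline: dataset, dataloader, optimizer, and training loop. Everything in it (model shape, synthetic data, learning rate) is a placeholder, not anything specific to this role.

```python
# A minimal end-to-end PyTorch training pipeline on synthetic data.
# All components are stand-ins chosen purely for illustration.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))
data = TensorDataset(torch.randn(256, 32), torch.randint(0, 4, (256,)))
loader = DataLoader(data, batch_size=32, shuffle=True)
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```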
Nice to have:
Knowledge of large-scale model training and optimization (a brief illustration follows this list)
Broader understanding of generative AI across modalities
Exposure to software development best practices
A flexible, experimental mindset, i.e., comfortable working across research and engineering
Publications at venues such as EMNLP, COLING, NeurIPS, ICLR, CVPR, or ICCV
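Purely as illustration of the large-scale training point above, the fragment below combines two standard techniques at scale: mixed-precision training and gradient accumulation. The model, loss, and step counts are stand-ins, and the snippet assumes a CUDA GPU is available.

```python
# Illustrative large-scale training fragment: mixed precision with
# gradient accumulation. Model and loss are dummies; requires CUDA.
import torch
from torch import nn

model = nn.Linear(1024, 1024).cuda()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()
accum = 4  # accumulate gradients to emulate a 4x larger batch

for step in range(100):
    x = torch.randn(8, 1024, device="cuda")
    with torch.cuda.amp.autocast():
        loss = model(x).pow(2).mean() / accum  # dummy objective
    scaler.scale(loss).backward()
    if (step + 1) % accum == 0:
        scaler.step(opt)
        scaler.update()
        opt.zero_grad()
```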