Liquid Labs gives that work a formal home: an internal research accelerator driving fundamental breakthroughs in the science of building intelligent, personalized, and adaptive machines. Our origins trace back to MIT CSAIL, where foundational work on Liquid Neural Networks defined a new class of dynamical, efficient sequence-processing architectures. That research became the basis for Liquid Foundation Models (LFMs): scalable, multimodal models built for real-world deployment in resource-constrained environments. At Liquid Labs, we extend that lineage, pushing forward the frontier of efficient, adaptive intelligence through both fundamental research and practical engineering. We work hand in hand with Liquid’s core foundation model and systems teams to translate theory into deployed capability, defining a new generation of intelligent systems that are both powerful and efficient.
Job Responsibilities:
Design and implement novel architectures, training methods, and inference strategies to redefine what efficient AI can do
Operate at the intersection of research and engineering: translating scientific ideas into working systems, publishing where it drives the field forward, and deploying where it changes what’s possible
Requirements:
Work fluently in Python and frameworks such as PyTorch, JAX, or TensorFlow
Have experience in machine learning research or production-grade ML systems
Move fast from paper to prototype, with curiosity backed by precision
Care about efficiency, scalability, and elegant system design as scientific principles
Value small, deep-technical teams where impact is immediate and measurable
Have a track record of publication in tier-1 venues (NeurIPS, ICML, ICLR, CVPR, ACL, or equivalent), demonstrating original contribution and research rigor