You will work directly on customer engagements that generate revenue. This is hands-on technical work: fine-tuning Liquid Foundation Models (LFMs) for enterprise deployments across text, vision, and audio modalities. You will own technical delivery end-to-end, working with customers to understand their data and constraints, then hitting quality and latency targets on real hardware. This is not API wrapper work. You will fine-tune models, generate and curate training data, debug failure modes, and deploy to devices with real latency and memory constraints.
Job Responsibilities:
Fine-tune LFMs on customer data to hit quality and latency targets for on-device and edge deployments
Generate and curate training data to address specific model failure modes
Run experiments, track metrics, and iterate until customer success criteria are met
Translate ambiguous customer requirements into concrete technical specifications
Provide analytics to commercial teams for contract structuring and pricing
Work across text, vision, and audio modalities as customer needs require
Requirements:
Hands-on fine-tuning experience with modern LLMs (models from the last 12-18 months): LoRA, PEFT, DPO, instruction tuning, or similar (a minimal sketch follows this list)
Strong ML fundamentals: you understand how models learn, fail, and improve
Experience generating or curating training data to address model gaps
Autonomous coding and debugging skills in Python and PyTorch
Proficiency with the open-source ML ecosystem (Hugging Face transformers, datasets, accelerate)
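For concreteness, here is a minimal sketch of the kind of adapter-based fine-tuning this role involves, using the Hugging Face stack named above. The base model, data file, field names, and hyperparameters are illustrative assumptions, not a description of our production setup.

```python
# Minimal LoRA fine-tuning sketch with transformers + peft + datasets.
# All names below (model, file, "text" field, hyperparameters) are placeholders.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "meta-llama/Llama-3.2-1B"  # assumption: any small causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Wrap the base model with low-rank adapters; only the adapter weights train.
lora_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # sanity check: ~1% of parameters trainable

# Tokenize a toy instruction dataset (placeholder file and field name).
ds = load_dataset("json", data_files="customer_train.jsonl", split="train")
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
            remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=4,
                           gradient_accumulation_steps=4, num_train_epochs=1,
                           learning_rate=2e-4, bf16=True, logging_steps=10),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves adapters only, a few MB
```

The saved adapters are small enough to version and ship per customer while the base weights stay shared, which is one reason adapter methods fit multi-customer delivery work.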
We are looking for someone who:
Fine-tunes models: You have hands-on experience with techniques like LoRA, PEFT, DPO, instruction tuning, or RLHF. You've written training loops, not just API calls.
Works with modern architectures: Your experience includes models released in the last 12-18 months (Llama 3.x, Mistral, Gemma, Qwen, etc.), not just BERT or classical ML.
Generates and curates data: You've created synthetic training data to address specific model failure modes. You understand how data quality drives model performance (see the sketch after this list).
Debugs methodically: When a model underperforms, you diagnose whether it is a data problem, an architecture problem, or a training problem, and you fix it.
Ships to customers: You can translate ambiguous customer requirements into concrete technical specs and deliver against quality metrics.
Contributes to open source: You have a Hugging Face profile, PyPI packages, or OSS contributions that demonstrate depth, not just off-the-shelf usage.
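As a toy instance of the data-curation point above: a quick cleaning pass with the `datasets` library that drops empty, overlong, and exactly duplicated pairs before fine-tuning. The file names, field names, and thresholds are assumptions for the sketch.

```python
# Illustrative curation pass: remove empty, overlong, and duplicate pairs.
# "prompt"/"response" fields and the 8000-char cutoff are assumptions.
import hashlib
from datasets import load_dataset

ds = load_dataset("json", data_files="raw_pairs.jsonl", split="train")

seen = set()
def keep(example):
    text = example["prompt"].strip() + "\n" + example["response"].strip()
    if not example["response"].strip():           # empty completion
        return False
    if len(text) > 8000:                          # would be truncated at train time
        return False
    h = hashlib.sha256(text.encode()).hexdigest() # exact-duplicate check
    if h in seen:
        return False
    seen.add(h)
    return True

clean = ds.filter(keep)
print(f"kept {len(clean)}/{len(ds)} examples")
clean.to_json("curated_pairs.jsonl")
```

Real curation pipelines add semantic deduplication, quality scoring, and failure-mode-targeted synthesis on top of heuristics like these; the point is that filtering happens before, not after, a training run.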
Nice to have:
Experience delivering ML work to external customers with measurable outcomes
Experience with inference optimization (vLLM, SGLang, TensorRT, llama.cpp)
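As a concrete instance of the last item, a minimal offline batch-generation sketch with vLLM. The model path and prompts are placeholders, and merging LoRA adapters into the base weights first is an assumption about the workflow.

```python
# Hedged sketch: offline batch inference with vLLM (one of the stacks listed above).
from vllm import LLM, SamplingParams

llm = LLM(model="lora-out-merged")  # assumption: adapters merged into base weights
params = SamplingParams(temperature=0.2, max_tokens=256)

prompts = ["Summarize the incident report:", "Classify the ticket:"]
for out in llm.generate(prompts, params):
    print(out.outputs[0].text)
```

vLLM handles batching and scheduling internally, so a script like this is a common throughput baseline before moving to TensorRT or llama.cpp for tighter on-device latency and memory budgets.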