The Applied Researcher role is designed for engineers who love working across ML, systems, and real-world products, and thrive on working directly with customers to bring advanced models into production.
Job Responsibilities:
Sit at the intersection of ML research, systems engineering, and customer-facing problem solving
Work hands-on with customers and customer data to tune, evaluate, and deploy models using techniques such as supervised fine-tuning (SFT), direct preference optimization (DPO), and reinforcement learning (RL)
Help customers build competitive models tailored to their products using their proprietary data
Be the technical bridge between customer needs, customer data, and our tuning and serving infrastructure
Requirements:
BS/MS in Computer Science, Electrical Engineering, Machine Learning, or a related field, or equivalent practical experience; open to all levels of experience
Strong experience with PyTorch and modern Transformer architectures
Familiarity with recent developments in the LLM research domain, including model architectures, training methods, and evaluation strategies
Passion for partnering with customers: understanding their constraints, co-designing solutions, and iterating based on real-world feedback
Curiosity and enthusiasm for exploring a wide range of problem domains and project types, from quick experiments to long-running, complex engagements
Ability to operate in a fast-paced, ambiguous environment and drive projects independently
Nice to have:
Experience working directly with customers to deliver end-to-end modeling solutions, from understanding their data and product requirements to deploying tuned models in production
Strong familiarity with evaluation methodologies for LLMs (benchmarks, custom evals, error analysis)
Proficiency in diagnosing system-wide problems that prevent customers from achieving their desired outcomes
Deep understanding of tuning techniques (SFT, DPO, RL) and the underlying mathematical principles
Knowledge of infrastructure components that enterprises commonly use, such as Databricks, S3/GCS storage, SageMaker, artifact registries, etc.
Familiarity with cloud-native tooling (Docker, Kubernetes, or similar container technologies)
What we offer:
Solve Hard Problems: Tackle challenges at the forefront of AI infrastructure
Build What’s Next: Work with bleeding-edge technology that impacts how businesses and developers harness AI globally
Ownership & Impact: Join a fast-growing, passionate team where your work directly shapes the future of AI—no bureaucracy, just results
Learn from the Best: Collaborate with world-class engineers and AI researchers who thrive on curiosity and innovation