We are looking for a Site Reliability Engineer to join the Model Serving team at Cohere. The team is responsible for developing, deploying, and operating the AI platform that delivers Cohere's large language models through easy-to-use API endpoints. In this role, you will work closely with many teams to deploy optimized NLP models to production in low-latency, high-throughput, high-availability environments. You will also have the opportunity to interface with customers and create customized deployments to meet their specific needs.
Job Responsibilities:
Build self-service systems that automate managing, deploying, and operating services, including our custom Kubernetes operators that support language model deployments
Automate environment observability and resilience, enabling all developers to troubleshoot and resolve problems
Take steps required to ensure we hit defined SLOs, including participation in an on-call rotation
Build strong relationships with internal developers and influence the Infrastructure team’s roadmap based on their feedback
Develop our team through knowledge sharing and an active review process
Requirements:
5+ years of engineering experience running production infrastructure at a large scale
Experience designing large, highly available distributed systems with Kubernetes, and GPU workloads on those clusters
Experience developing, deploying, and supporting services on Kubernetes in production
Experience in designing, deploying, supporting, and troubleshooting in complex Linux-based computing environments
Experience in compute/storage/network resource and cost management
Excellent collaboration and troubleshooting skills for building mission-critical systems and ensuring smooth operations and efficient teamwork
The grit and adaptability to solve complex technical challenges that evolve day to day
Familiarity with computational characteristics of accelerators (GPUs, TPUs, and/or custom accelerators), especially how they influence latency and throughput of inference
Strong understanding of, or working experience with, distributed systems
Experience in Golang, C++, or other languages designed for high-performance, scalable servers
What we offer:
An open and inclusive culture and work environment
Work closely with a team on the cutting edge of AI research
Weekly lunch stipend, in-office lunches & snacks
Full health and dental benefits, including a separate budget to take care of your mental health
100% parental leave top-up for up to 6 months
Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
Remote-flexible, with offices in Toronto, New York, San Francisco, London, and Paris, as well as a co-working stipend