As a Machine Learning Systems Administrator - HPC Infrastructure, you will be responsible for maintaining and developing the core infrastructure behind our machine learning research and production efforts. You’ll work closely with training and inference teams to ensure the smooth operation of our systems while laying the groundwork for scalable, secure, and efficient workflows. You’ll have a significant impact on both developer productivity and training/inference performance.
Job Responsibilities:
Maintaining and developing the core infrastructure behind our machine learning research and production efforts
Administration and automation of our Linux-based cluster environments
Managing user onboarding/offboarding, security auditing, and access control
Monitoring system resources and job scheduling
Supporting and improving developer workflows (e.g., VSCode compatibility, Docker)
Enabling and supporting AI/ML workloads, including large-scale training jobs
Requirements:
Strong experience with Linux system administration, user and access management, and automation
Demonstrated expertise in scripting languages for system tooling and automation (bash, Python, etc.)
Familiarity with containerized environments (e.g., Docker) and job scheduling systems like Slurm
Experience building tooling for cluster validation and reliability (GPU, networking, storage health checks)
Experience setting up and managing developer tools and third-party services (e.g., cloud storage providers, Docker Hub, Slack, Gmail, Telegraf, experiment trackers, etc.)
Excellent debugging and troubleshooting skills across compute, storage, and networking
Strong communication skills and ability to collaborate across technical and non-technical teams
Nice to have:
Experience with infrastructure as code (e.g., Ansible, Terraform)
Prior work supporting ML/AI infrastructure, including GPU management and workload optimization
Exposure to backend development for ML model serving (e.g., vLLM, Ray, SGLang)
Experience working with cloud platforms such as AWS, Azure, or GCP
Familiarity with containers (Docker, Apptainer) and their integration with scheduling systems (Slurm, Kubernetes)
What we offer:
Comprehensive medical, dental, vision, and FSA plans
Competitive compensation and 401(k)
Relocation and immigration support on a case-by-case basis
On-site meals prepared by a dedicated culinary team