
Member of Technical Staff, GPU Optimization


Runway


Location:
United States


Contract Type:
Not provided


Salary:

260000.00 - 325000.00 USD / Year

Job Description:

We are building AI to simulate the world by merging art and science. We believe that world models are at the frontier of progress in artificial intelligence. Language models alone won’t solve the world’s hardest problems – robotics, disease, scientific discovery. Real progress requires models that experience the world and learn from their mistakes, the same way that humans do. And this kind of trial and error can be massively accelerated when done in simulation rather than in the real world. World models offer the clearest path to general-purpose simulation, changing how stories are told, how scientific progress is made, and how the next frontiers of humanity are reached.

Job Responsibility:

  • Develop innovative research projects in computer vision, focusing on generative models for image and video
  • Work with a world-class engineering team pushing the boundaries of content creation in the browser
  • Collaborate closely with the rest of the product organization to bring cutting-edge machine learning models to production

Requirements:

  • 5+ years of relevant engineering or research experience in machine learning, computer vision and/or graphics
  • Experience with CUDA, C++ and systems level performance optimizations
  • Solid knowledge of at least one machine learning framework (e.g. PyTorch, TensorFlow)
  • Very strong programming skills and ability to write clean and maintainable research code
  • Deep interest in building human-in-the-loop systems for creativity
  • Ability to rapidly prototype solutions and iterate on them with tight product deadlines
  • Strong communication, collaboration, and documentation skills

Additional Information:

Job Posted:
December 11, 2025

Employment Type:
Fulltime
Work Type:
Remote work


Similar Jobs for Member of Technical Staff, GPU Optimization

Member of Technical Staff, Performance Optimization

We're looking for a Software Engineer focused on Performance Optimization to hel...
Location:
United States, San Mateo

Salary:
175000.00 - 220000.00 USD / Year

Fireworks AI

Expiration Date:
Until further notice

Requirements:
  • Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent practical experience
  • 5+ years of experience working on performance optimization or high-performance computing systems
  • Proficiency in CUDA or ROCm and experience with GPU profiling tools (e.g., Nsight, nvprof, CUPTI)
  • Familiarity with PyTorch and performance-critical model execution
  • Experience with distributed system debugging and optimization in multi-GPU environments
  • Deep understanding of GPU architecture, parallel programming models, and compute kernels
Job Responsibility:
  • Optimize system and GPU performance for high-throughput AI workloads across training and inference
  • Analyze and improve latency, throughput, memory usage, and compute efficiency
  • Profile system performance to detect and resolve GPU- and kernel-level bottlenecks
  • Implement low-level optimizations using CUDA, Triton, and other performance tooling
  • Drive improvements in execution speed and resource utilization for large-scale model workloads (LLMs, VLMs, and video models)
  • Collaborate with ML researchers to co-design and tune model architectures for hardware efficiency
  • Improve support for mixed precision, quantization, and model graph optimization
  • Build and maintain performance benchmarking and monitoring infrastructure
  • Scale inference and training systems across multi-GPU, multi-node environments
  • Evaluate and integrate optimizations for emerging hardware accelerators and specialized runtimes
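The profiling bullets above start from one standard question: is a given kernel memory-bound or compute-bound? A roofline-style arithmetic-intensity check sketches the classification; the peak FLOP/s and bandwidth figures below are illustrative assumptions, not any particular GPU's spec.

```python
# Roofline-style check: is a kernel memory-bound or compute-bound?
# Hardware numbers below are illustrative assumptions, not vendor specs.

def arithmetic_intensity(flops: float, bytes_moved: float) -> float:
    """FLOPs performed per byte moved to/from DRAM."""
    return flops / bytes_moved

def bound_by(intensity: float, peak_flops: float, peak_bw: float) -> str:
    """Compare kernel intensity to the machine balance point (FLOPs/byte)."""
    ridge = peak_flops / peak_bw
    return "compute-bound" if intensity >= ridge else "memory-bound"

# Example: fp16 GEMM C = A @ B with square n x n matrices.
n = 4096
flops = 2 * n**3             # one multiply + one add per inner-loop step
bytes_moved = 3 * n * n * 2  # read A and B, write C, 2 bytes per element
ai = arithmetic_intensity(flops, bytes_moved)

peak_flops = 1e15            # ~1 PFLOP/s dense fp16 (assumed)
peak_bw = 3e12               # ~3 TB/s HBM bandwidth (assumed)
print(round(ai, 1), bound_by(ai, peak_flops, peak_bw))
```

A large GEMM lands far above the ridge point, while an elementwise op (roughly one FLOP per twelve bytes in fp32) lands far below it, which is why fusion of elementwise chains is usually the first optimization tried.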
What we offer:
  • Meaningful equity in a fast-growing startup
  • Competitive salary
  • Comprehensive benefits package
Employment Type: Fulltime

Member of Technical Staff - GPU Infrastructure

Prime Intellect is building the open superintelligence stack - from frontier age...
Location:
United States, San Francisco

Salary:
Not provided

Prime Intellect

Expiration Date:
Until further notice

Requirements:
  • 3+ years hands-on experience with GPU clusters and HPC environments
  • Deep expertise with SLURM and Kubernetes in production GPU settings
  • Proven experience with InfiniBand configuration and troubleshooting
  • Strong understanding of NVIDIA GPU architecture, CUDA ecosystem, and driver stack
  • Experience with infrastructure automation tools (Ansible, Terraform)
  • Proficiency in Python, Bash, and systems programming
  • Track record of customer-facing technical leadership
  • NVIDIA driver installation and troubleshooting (CUDA, Fabric Manager, DCGM)
  • Container runtime configuration for GPUs (Docker, Containerd, Enroot)
  • Linux kernel tuning and performance optimization
Job Responsibility:
  • Partner with clients to understand workload requirements and design optimal GPU cluster architectures
  • Create technical proposals and capacity planning for clusters ranging from 100 to 10,000+ GPUs
  • Develop deployment strategies for LLM training, inference, and HPC workloads
  • Present architectural recommendations to technical and executive stakeholders
  • Deploy and configure orchestration systems including SLURM and Kubernetes for distributed workloads
  • Implement high-performance networking with InfiniBand, RoCE, and NVLink interconnects
  • Optimize GPU utilization, memory management, and inter-node communication
  • Configure parallel filesystems (Lustre, BeeGFS, GPFS) for optimal I/O performance
  • Tune system performance from kernel parameters to CUDA configurations
  • Serve as primary technical escalation point for customer infrastructure issues
Employment Type: Fulltime
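The capacity-planning work described above (clusters from 100 to 10,000+ GPUs) typically starts from the widely used ~6 · params · tokens FLOPs approximation for dense transformer training. A back-of-envelope sketch, where the per-GPU peak throughput and MFU are assumed illustrative values:

```python
import math

# Back-of-envelope GPU capacity planning for an LLM training run.
# Uses the common ~6 * params * tokens FLOPs approximation for dense
# transformers; throughput and MFU figures are assumed, not measured.

def gpus_needed(params: float, tokens: float, days: float,
                peak_flops_per_gpu: float, mfu: float) -> int:
    total_flops = 6 * params * tokens
    sustained = peak_flops_per_gpu * mfu   # realistic per-GPU rate
    seconds = days * 86400
    return math.ceil(total_flops / (sustained * seconds))

# Example: 70B params on 2T tokens in 30 days, 1 PFLOP/s peak, 40% MFU.
n = gpus_needed(70e9, 2e12, 30, 1e15, 0.40)
print(n)
```

The same formula run in reverse (fixing the GPU count and solving for wall-clock time) is how deployment timelines for a proposed cluster size are usually estimated.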

Member of Technical Staff - GPU Performance Engineer

Our models and workflows require performance work that generic frameworks don’t ...
Location:
United States, San Francisco; Boston

Salary:
Not provided

Liquid AI

Expiration Date:
Until further notice

Requirements:
  • Authored custom CUDA kernels (not only calling cuDNN/cuBLAS)
  • Strong understanding of GPU architecture and performance: memory hierarchy, warps, shared memory/register pressure, bandwidth vs compute limits
  • Proficiency with low-level profiling (Nsight Systems/Compute) and performance methodology
  • Strong C/C++ skills
Job Responsibility:
  • Write high-performance GPU kernels for our novel model architectures
  • Integrate kernels into PyTorch pipelines (custom ops, extensions, dispatch, benchmarking)
  • Profile and optimize training and inference workflows to eliminate bottlenecks
  • Build correctness tests and numerics checks
  • Build/maintain performance benchmarks and guardrails to prevent regressions
  • Collaborate closely with researchers to turn promising ideas into shipped speedups
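The correctness-testing bullet above usually means validating an optimized kernel against a trusted eager baseline with explicit tolerances, since a parallel kernel's accumulation order differs from the reference in the last ulps. A pure-Python stand-in, with a tree reduction playing the role of the parallel kernel:

```python
import math
import random

# Numerics-check sketch: compare an "optimized" implementation against a
# trusted reference with explicit tolerances, the way a fused GPU kernel
# would be validated against an eager baseline. Pure-Python stand-in.

def reference_sum(xs):
    # Naive left-to-right accumulation: the "eager" baseline.
    total = 0.0
    for x in xs:
        total += x
    return total

def optimized_sum(xs):
    # Pairwise (tree) reduction: mimics the accumulation order of a
    # parallel kernel, so results differ from the baseline by rounding.
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return optimized_sum(xs[:mid]) + optimized_sum(xs[mid:])

def check_close(a, b, rtol=1e-9, atol=1e-9):
    assert math.isclose(a, b, rel_tol=rtol, abs_tol=atol), (a, b)

random.seed(0)
data = [random.uniform(-1, 1) for _ in range(1 << 12)]
check_close(reference_sum(data), optimized_sum(data))
print("numerics check passed")
```

In practice the tolerances are chosen per dtype (looser for fp16/bf16 than fp32), and the same harness doubles as a regression guardrail in CI.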
What we offer:
  • Competitive base salary with equity in a unicorn-stage company
  • We pay 100% of medical, dental, and vision premiums for employees and dependents
  • 401(k) matching up to 4% of base pay
  • Unlimited PTO plus company-wide Refill Days throughout the year
Employment Type: Fulltime

Member of Technical Staff, Pre-Training Infrastructure

Microsoft AI is looking for a Member of Technical Staff, Pre-Training Infrastruc...
Location:
United States, Mountain View

Salary:
139900.00 - 274800.00 USD / Year

Microsoft Corporation

Expiration Date:
Until further notice

Requirements:
  • Bachelor's Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python
  • OR equivalent experience
  • Experience in distributed computing and large-scale systems
  • Experience with GPU programming (CUDA, NCCL) and frameworks such as PyTorch
  • Proven ability to profile, benchmark, and optimize performance-critical systems
  • Experience in leading technical projects and supporting architectural decisions with data
  • Experience building infrastructure for large-scale machine learning or generative AI workloads
  • Experience in networking (InfiniBand, NVLink), storage systems, or distributed training parallelisms
  • Track record of contributing to high-performance computing or large-scale AI infrastructure projects
Job Responsibility:
  • Design, implement, test, and optimize distributed training infrastructure in Python and C++ for large-scale GPU clusters
  • Profile, benchmark, and debug performance bottlenecks across compute, memory, networking, and storage subsystems
  • Optimize collective communication libraries (e.g., NCCL) for emerging NVLink and InfiniBand topologies
  • Collaborate with hardware teams to optimize for next-generation accelerators (NVIDIA, AMD, and beyond)
  • Gather data and insights to develop the pretraining compute roadmap
  • Care deeply about conversational AI and its deployment
  • Actively contribute to the development of AI models powering our innovative products
  • Find solutions to overcome roadblocks and deliver your work to users quickly and iteratively
  • Enjoy working in a fast-paced, design-driven product development cycle
  • Embody our Culture and Values
Employment Type: Fulltime
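For the collective-communication optimization work listed above, a first-order cost model of ring all-reduce (the algorithm NCCL commonly selects for large messages) is a standard starting point: each of N ranks puts roughly 2·(N−1)/N of the buffer on the wire. Link bandwidth and message size below are illustrative assumptions:

```python
# First-order cost model for ring all-reduce: 2*(N-1) steps
# (reduce-scatter + all-gather), each moving size/N bytes per rank.
# Numbers are illustrative assumptions, not measured NCCL figures.

def ring_allreduce_time(size_bytes: float, n_ranks: int,
                        link_bw: float, latency_s: float = 0.0) -> float:
    steps = 2 * (n_ranks - 1)            # reduce-scatter + all-gather
    bytes_on_wire = steps * size_bytes / n_ranks
    return steps * latency_s + bytes_on_wire / link_bw

# Example: 1 GiB gradient bucket across 8 GPUs over a 50 GB/s link.
t = ring_allreduce_time(2**30, 8, 50e9)
print(f"{t * 1e3:.2f} ms")
```

The model makes the bandwidth/latency trade-off visible: for small buckets the latency term dominates and tree algorithms win, which is one reason gradient bucketing sizes matter in distributed training.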

Member of Technical Staff, Site Reliability Engineer (HPC)

As Microsoft continues to push the boundaries of AI, we are on the lookout for p...
Location:
United States, Mountain View

Salary:
139900.00 - 274800.00 USD / Year

Microsoft Corporation

Expiration Date:
Until further notice

Requirements:
  • Master's Degree in Computer Science, Information Technology, or related field AND 2+ years technical experience in Site Reliability Engineering, DevOps, or Infrastructure Engineering
  • OR Bachelor's Degree in Computer Science, Information Technology, or related field AND 4+ years technical experience in Site Reliability Engineering, DevOps, or Infrastructure Engineering
  • OR equivalent experience
  • Strong proficiency in Kubernetes, Docker, and container orchestration
  • Knowledge of CI/CD pipelines for Inference and ML model deployment
  • Hands-on experience with public cloud platforms like Azure/AWS/GCP and infrastructure-as-code
  • Expertise in monitoring & observability tools (Grafana, Datadog, OpenTelemetry, etc.)
  • Strong programming/scripting skills in Python, Go, or Bash
  • Solid knowledge of distributed systems, networking, and storage
  • Experience running large-scale GPU clusters for ML/AI workloads (preferred)
Job Responsibility:
  • Reliability & Availability: Ensure uptime, resiliency, and fault tolerance of HPC clusters powering MAI model training and inference
  • Observability: Design and maintain monitoring, alerting, and logging systems to provide real-time visibility into all aspects of HPC systems including GPU, clusters, storage and networking
  • Automation & Tooling: Build automation for deployments, incident response, scaling, and failover in CPU+GPU environments
  • Incident Management: Lead on-call rotations, troubleshoot production issues, conduct blameless postmortems, and drive continuous improvements
  • Security & Compliance: Ensure data privacy, compliance, and secure operations across model training and serving environments
  • Collaboration: Partner with ML engineers and platform teams to improve developer experience and accelerate research-to-production workflows
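The reliability bullet above implies translating an availability SLO into an error budget, the standard SRE arithmetic behind alert thresholds and maintenance windows. As a sketch:

```python
# Error-budget sketch: convert an availability SLO into allowed
# downtime per window, the standard SRE calculation.

def downtime_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime per window for a given availability SLO."""
    return (1.0 - slo) * window_days * 24 * 60

for slo in (0.99, 0.999, 0.9999):
    print(slo, round(downtime_budget_minutes(slo), 1), "min / 30 days")
```

Each added "nine" shrinks the budget tenfold (roughly 43 minutes per month at 99.9%, about 4 minutes at 99.99%), which is why alerting is usually tuned to burn rate against this budget rather than to raw uptime.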
What we offer:
  • Competitive compensation, equity options, and comprehensive benefits
Employment Type: Fulltime

Member of Technical Staff, LLM Inference - MAI Superintelligence Team

Our Inference team is responsible for building and maintaining the tools and sys...
Location:
United States, Mountain View

Salary:
139900.00 - 274800.00 USD / Year

Microsoft Corporation

Expiration Date:
Until further notice

Requirements:
  • Bachelor's Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python
  • OR equivalent experience
  • Experience with generative AI
  • Experience with distributed computing
  • Expertise in Python and its ecosystem (e.g. uv, pybind/nanobind, FastAPI)
  • Experience with large scale production inference
  • Experience with GPU kernel programming
  • Experience benchmarking, profiling, and optimizing PyTorch generative AI models
  • Experience with open source inference frameworks like vLLM and SGLang
  • Working experience with, and fluency in, the material in the JAX scaling book
Job Responsibility:
  • Work alongside researchers and engineers to implement frontier AI research ideas
  • Introduce new systems, tools, and techniques to improve model inference performance
  • Build tools to help debug performance bottlenecks, numeric instabilities, and distributed systems issues
  • Build tools and establish processes to enhance the team’s collective productivity
  • Find ways to overcome roadblocks and deliver your work to users quickly and iteratively
  • Enjoy working in a fast-paced, design-driven product development cycle
  • Embody our Culture and Values
Employment Type: Fulltime
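The debugging and performance work described above starts with a latency-percentile harness, since inference is judged on tail latency rather than the mean. A minimal sketch with a stand-in workload; in practice the measured call would be a model forward pass or a request to an inference server:

```python
import statistics
import time

# Micro-benchmark sketch: per-call latency percentiles, the basic loop
# behind inference profiling. The workload is a pure-Python stand-in.

def benchmark(fn, warmup: int = 5, iters: int = 50):
    for _ in range(warmup):                    # exclude cold-start effects
        fn()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    qs = statistics.quantiles(samples, n=100)  # 99 percentile cut points
    return {"p50": qs[49], "p95": qs[94], "mean": statistics.fmean(samples)}

def fake_request():
    sum(i * i for i in range(10_000))          # stand-in workload

stats = benchmark(fake_request)
print({k: f"{v * 1e6:.0f} us" for k, v in stats.items()})
```

For GPU workloads the timing calls would additionally need to synchronize the device before reading the clock, otherwise the harness measures kernel launch time rather than execution time.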

Member of Technical Staff - Reinforcement Learning (Infrastructure), AGI Autonomy

The Amazon AGI SF Lab is focused on developing new foundational capabilities for...
Location:
United States, San Francisco

Salary:
255000.00 - 345000.00 USD / Year

Amazon

Expiration Date:
Until further notice

Requirements:
  • PhD, or Master's degree and 3+ years of applied research experience
  • Experience with programming languages such as Python, Java, C++
  • Experience with neural deep learning methods and machine learning
  • Experience with training and deploying machine learning systems to solve large-scale optimizations, or experience troubleshooting and debugging technical systems
Job Responsibility:
  • Develop training infrastructure to ensure large-scale reinforcement learning on LLMs runs efficiently and robustly
  • Work across the entire technology stack, including low-level ML systems, job orchestration, and data management
  • Analyze, troubleshoot, and profile complex ML systems; identify and address performance bottlenecks
  • Work closely with researchers, conducting MLSys research to create new techniques, infrastructure, and tooling around emerging research capabilities
What we offer:
  • Equity
  • Sign-on payments
  • Full range of medical, financial, and/or other benefits
Employment Type: Fulltime

Member of Technical Staff - Edge Inference Engineer

Our Edge Inference team compiles Liquid Foundation Models into optimized machine...
Location:
United States, San Francisco; Boston

Salary:
Not provided

Liquid AI

Expiration Date:
Until further notice

Requirements:
  • 5+ years of experience in systems programming with strong C++ proficiency
  • Embedded software engineering experience or work on resource-constrained systems
  • Understanding of ML fundamentals at the linear algebra level (how matrix operations, attention, and quantization work)
  • Experience with hardware architecture concepts: cache hierarchies, memory bandwidth, SIMD/vectorization
Job Responsibility:
  • Implement and optimize inference kernels for CPU, NPU, and GPU architectures across diverse edge hardware
  • Develop quantization strategies (INT4, INT8, FP8) that maximize compression while preserving model quality under strict memory budgets
  • Contribute to llama.cpp and other open-source inference frameworks, including new model architectures (audio, vision)
  • Profile and optimize end-to-end inference pipelines to achieve sub-100ms time-to-first-token on target devices
  • Collaborate with ML researchers to understand model architectures and identify optimization opportunities specific to Liquid Foundation Models
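Of the quantization schemes named above, symmetric per-tensor INT8 is the simplest. A pure-Python sketch of quantize/dequantize and the resulting error; a production kernel would do this with vectorized integer arithmetic, and real deployments typically use per-channel scales:

```python
# Quantization sketch: symmetric per-tensor INT8. The scale maps the
# largest-magnitude value to the edge of the signed 8-bit range.

def quantize_int8(xs):
    scale = max(abs(x) for x in xs) / 127.0 or 1.0  # guard all-zero input
    q = [max(-127, min(127, round(x / scale))) for x in xs]
    return q, scale

def dequantize_int8(q, scale):
    return [v * scale for v in q]

weights = [0.02, -1.27, 0.5, 0.998, -0.33]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, round(max_err, 4))
```

The worst-case round-trip error is bounded by half the scale, which is why per-channel (or per-group, as in INT4 weight formats) scaling matters: outlier values inflate a shared scale and spend precision on the whole tensor.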
What we offer:
  • Competitive base salary with equity in a unicorn-stage company
  • 100% of medical, dental, and vision premiums for employees and dependents
  • 401(k) matching up to 4% of base pay
  • Unlimited PTO plus company-wide Refill Days throughout the year
Employment Type: Fulltime