Member of Technical Staff, Inference

Runway

Location:
United States

Contract Type:
Not provided

Salary:

240000.00 - 290000.00 USD / Year

Job Description:

We're looking for an ML infrastructure engineer to bridge the gap between research and production at Runway. You'll work directly with our research teams to productionize cutting-edge generative models—taking checkpoints from training to staging to production, ensuring reliability at scale, and building the infrastructure that enables fast iteration. You'll be embedded within research teams, providing platform support throughout the entire model development lifecycle. Your work will directly impact how quickly we can ship new models and features to millions of users.

Job Responsibility:

  • Productionize model checkpoints end-to-end: from research completion to internal testing to production deployment to post-release support
  • Build and optimize inference systems for large-scale generative models running on multi-GPU environments
  • Design and implement model serving infrastructure specialized for diffusion models and real-time diffusion workflows
  • Add monitoring and observability for new model releases—track errors, throughput, GPU utilization, and latency
  • Embed with research teams to gather training data, run preprocessing scripts, and support the model development process
  • Explore and integrate with GPU inference providers (Modal, E2E, Baseten, etc.)

Requirements:

  • 4+ years of experience running ML model inference at scale in production environments
  • Strong experience with PyTorch and multi-GPU inference for large models
  • Experience with Kubernetes for ML workloads—deploying, scaling, and debugging GPU-based services
  • Comfortable working across multiple cloud providers and managing GPU driver compatibility
  • Experience with monitoring and observability for ML systems (errors, throughput, GPU utilization)
  • Self-starter who can work embedded with research teams and move fast
  • Strong systems thinking and pragmatic approach to production reliability
  • Humility and open-mindedness

Nice to have:

  • Experience building custom inference frameworks or serving systems
  • Deep understanding of distributed training and inference patterns (FSDP, data parallelism, tensor parallelism)
  • Ability to debug low-level issues: NCCL networking problems, CUDA errors, memory leaks, performance bottlenecks
  • Experience with diffusion models or video generation systems
  • Knowledge of real-time or latency-sensitive ML applications

Additional Information:

Job Posted:
January 20, 2026

Employment Type:
Full-time

Work Type:
Remote work

Similar Jobs for Member of Technical Staff, Inference

Member of Technical Staff, Cloud Infrastructure

As a Software Engineer on our Cloud Infrastructure team, you'll be at the forefr...
Location:
United States, New York, NY; San Mateo, CA; Redwood City, CA
Salary:
175000.00 - 220000.00 USD / Year
Fireworks AI
Expiration Date
Until further notice
Requirements:
  • Bachelor’s degree in Computer Science, Engineering, or a related technical field (or equivalent practical experience)
  • 5+ years of experience designing and building backend infrastructure in cloud environments (e.g., AWS, GCP, Azure)
  • Proven experience in ML infrastructure and tooling (e.g., PyTorch, TensorFlow, Vertex AI, SageMaker, Kubernetes, etc.)
  • Strong software development skills in languages like Python or C++
  • Deep understanding of distributed systems fundamentals: scheduling, orchestration, storage, networking, and compute optimization
Job Responsibility:
  • Architect and build scalable, resilient, and high-performance backend infrastructure to support distributed training, inference, and data processing pipelines
  • Lead technical design discussions, mentor other engineers, and establish best practices for building and operating large-scale ML infrastructure
  • Design and implement core backend services (e.g., job schedulers, resource managers, autoscalers, model serving layers) with a focus on efficiency and low latency
  • Drive infrastructure optimization initiatives, including compute cost reduction, storage lifecycle management, and network performance tuning
  • Collaborate cross-functionally with ML, DevOps, and product teams to translate research and product needs into robust infrastructure solutions
  • Continuously evaluate and integrate cloud-native and open-source technologies (e.g., Kubernetes, Ray, Kubeflow, MLFlow) to enhance our platform’s capabilities and reliability
  • Own end-to-end systems from design to deployment and observability, with a strong emphasis on reliability, fault tolerance, and operational excellence
What we offer:
  • Meaningful equity in a fast-growing startup
  • Competitive salary
  • Comprehensive benefits package

Member of Technical Staff - Platform Engineer

Platform Engineer to join our team building backend infrastructure for new ML-po...
Location:
United States, Palo Alto
Salary:
175000.00 - 350000.00 USD / Year
Inflection AI
Expiration Date
Until further notice
Requirements:
  • Backend engineering experience with Python, TypeScript, or Node.js
  • Hands-on experience working with production PyTorch models, model checkpoints, and inference logic
  • Strong knowledge of building APIs and services that are scalable, stable, and secure
  • Passion for bridging backend engineering and ML systems, especially at the infrastructure layer
  • Familiarity with tools such as FastAPI, Postgres, Redis, Kubernetes, and React
  • Desire to be hands-on and contribute to shaping the foundation of a new enterprise ML product
  • Bachelor’s degree, or equivalent, in a field related to the position requirements
Job Responsibility:
  • Build and maintain backend services to support LLM integration, inference orchestration, and data flow
  • Write clean, reliable Python code for experimentation, model integration, and production systems
  • Collaborate closely with ML researchers to rapidly iterate on product ideas and deploy features
  • Design and implement infrastructure to handle scalable inference workloads and enterprise-level use cases
  • Own system components and ensure reliability, observability, and maintainability from day one
What we offer:
  • Diverse medical, dental and vision options
  • 401k matching program
  • Unlimited paid time off
  • Parental leave and flexibility for all parents and caregivers
  • Support of country-specific visa needs for international employees living in the Bay Area
  • Competitive stock options

Member of Technical Staff – Model Training

At Inflection AI, our public benefit mission is to harness the power of AI to im...
Location:
United States, Palo Alto
Salary:
175000.00 - 350000.00 USD / Year
Inflection AI
Expiration Date
Until further notice
Requirements:
  • Have hands-on experience training and fine-tuning large transformer models on multi-GPU / multi-node clusters
  • Are fluent in PyTorch and its ecosystem tools (Torchtune, FSDP, DeepSpeed) and enjoy digging into distributed-training internals, mixed precision, and memory-efficiency tricks
  • Have shipped or published work in RLHF, DPO, GRPO, or RLAIF and understand their practical trade-offs
  • Care deeply about training tools, pipelines, and reproducibility—you automate the boring parts so you can iterate on the fun parts
  • Balance research curiosity with product pragmatism—you know when to run an ablation and when to ship
  • Communicate crisply with both technical and non-technical teammates
  • Bachelor’s degree, or equivalent, in a field related to the position requirements
Job Responsibility:
  • Contribute to end-to-end post-training workflows—dataset curation, hyper-parameter search, evaluation, and rollout—using PyTorch, Torchtune, FSDP/DeepSpeed, and our internal orchestration stack
  • Prototype and compare alignment techniques (e.g., curriculum RL, multi-objective reward modeling, tool-use fine-tuning) and push the best ideas into production
  • Automate training at scale: build robust pipeline components, tools, scripts, and dashboards so experiments are reproducible and easy to trace
  • Define the metrics that matter; run A/B tests and iterate quickly to meet aggressive quality targets
  • Collaborate with inference, safety, and product teams to land improvements in customer-facing systems
What we offer:
  • Diverse medical, dental and vision options
  • 401k matching program
  • Unlimited paid time off
  • Parental leave and flexibility for all parents and caregivers
  • Support of country-specific visa needs for international employees living in the Bay Area
  • Competitive stock options

Member of Technical Staff – Backend

As a backend engineer at Inflection, you will own the platforms, systems, and se...
Location:
United States, Palo Alto
Salary:
175000.00 - 350000.00 USD / Year
Inflection AI
Expiration Date
Until further notice
Requirements:
  • 5+ years of experience building and scaling backend systems for high-throughput applications
  • Fluent in building distributed systems with Python, Go, Rust, or similar languages
  • Comfortable with cloud-native architectures (e.g., Kubernetes, gRPC, Postgres, Redis, Kafka)
  • Owned backend services end-to-end—from design and implementation to deployment, monitoring, and debugging
  • Thrive in fast-paced environments where you can move quickly without sacrificing engineering rigor
  • Proactively improve tooling and infrastructure to support teammates’ workflows and reliability goals
  • Communicate clearly across disciplines and take pride in solving user-facing problems with clean backend solutions
  • Bachelor’s degree, or equivalent, in a field related to the position requirements
Job Responsibility:
  • Design and implement scalable backend systems and APIs that power production LLM experiences, including agentic workflows, memory systems, and tool integrations
  • Build and operate high-availability infrastructure to support real-time inference, retrieval, and conversation pipelines
  • Develop internal platforms to improve engineering productivity—CI/CD pipelines, service templates, observability frameworks, and rollout tooling
  • Collaborate closely with applied research and frontend teams to rapidly prototype, ship, and iterate on end-user features
  • Ensure systems meet our high bar for security, uptime, and latency—through incident response, load testing, monitoring, and automation
  • Participate in on-call rotations to maintain the reliability of the services you build
What we offer:
  • Diverse medical, dental and vision options
  • 401k matching program
  • Unlimited paid time off
  • Parental leave and flexibility for all parents and caregivers
  • Support of country-specific visa needs for international employees living in the Bay Area
  • Competitive stock options

Member of Technical Staff, Performance Optimization

We're looking for a Software Engineer focused on Performance Optimization to hel...
Location:
United States, San Mateo
Salary:
175000.00 - 220000.00 USD / Year
Fireworks AI
Expiration Date
Until further notice
Requirements:
  • Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent practical experience
  • 5+ years of experience working on performance optimization or high-performance computing systems
  • Proficiency in CUDA or ROCm and experience with GPU profiling tools (e.g., Nsight, nvprof, CUPTI)
  • Familiarity with PyTorch and performance-critical model execution
  • Experience with distributed system debugging and optimization in multi-GPU environments
  • Deep understanding of GPU architecture, parallel programming models, and compute kernels
Job Responsibility:
  • Optimize system and GPU performance for high-throughput AI workloads across training and inference
  • Analyze and improve latency, throughput, memory usage, and compute efficiency
  • Profile system performance to detect and resolve GPU- and kernel-level bottlenecks
  • Implement low-level optimizations using CUDA, Triton, and other performance tooling
  • Drive improvements in execution speed and resource utilization for large-scale model workloads (LLMs, VLMs, and video models)
  • Collaborate with ML researchers to co-design and tune model architectures for hardware efficiency
  • Improve support for mixed precision, quantization, and model graph optimization
  • Build and maintain performance benchmarking and monitoring infrastructure
  • Scale inference and training systems across multi-GPU, multi-node environments
  • Evaluate and integrate optimizations for emerging hardware accelerators and specialized runtimes
What we offer:
  • Meaningful equity in a fast-growing startup
  • Competitive salary
  • Comprehensive benefits package

Member of Technical Staff – Fullstack Engineer

As a fullstack engineer at Inflection, you will own the platforms, systems, and ...
Location:
United States, Palo Alto
Salary:
175000.00 - 350000.00 USD / Year
Inflection AI
Expiration Date
Until further notice
Requirements:
  • 5+ years of professional software engineering experience, particularly in full-stack development
  • Prior experience in high-growth or early-stage startup environments
  • Strong proficiency across the modern web stack: Python, TypeScript, Node.js, and modern frontend frameworks (e.g., React, Tailwind)
  • Experience in designing complex architectures, including asynchronous workflows and integrations
  • Proven problem-solving, collaboration, and communication skills
  • Experience building or integrating AI/LLM-powered applications
  • Experience with modern cloud and workflow infrastructure, including orchestration frameworks (e.g., Temporal), containerization and Kubernetes, and CI/CD pipelines on AWS/GCP/Azure
  • Bachelor’s degree, or equivalent, in a field related to the position requirements
Job Responsibility:
  • Design and implement scalable backend systems and APIs that power production LLM experiences, including agentic workflows, memory systems, and tool integrations
  • Build and operate high-availability infrastructure to support real-time inference, retrieval, and conversation pipelines
  • Develop internal platforms to improve engineering productivity—CI/CD pipelines, service templates, observability frameworks, and rollout tooling
  • Collaborate closely with applied research and frontend teams to rapidly prototype, ship, and iterate on end-user features
  • Ensure systems meet our high bar for security, uptime, and latency through incident response, load testing, monitoring, and automation
  • Participate in on-call rotations to maintain the reliability of the services you build
What we offer:
  • Diverse medical, dental and vision options
  • 401k matching program
  • Unlimited paid time off
  • Parental leave and flexibility for all parents and caregivers
  • Support of country-specific visa needs for international employees living in the Bay Area
  • Meaningful equity component

Senior Staff Machine Learning Engineer

Help design our AI platform and develop our next generation of machine learning ...
Location:
United States, San Francisco
Salary:
216500.00 - 324500.00 USD / Year
GoFundMe
Expiration Date
Until further notice
Requirements:
  • 9+ years of hands-on experience in machine learning engineering, AI development, software engineering, or related fields
  • Experience emphasizing secure, large-scale, distributed system design, AI/ML pipeline development, and implementation
  • Extensive experience designing, developing, and operating scalable backend systems
  • Experience applying software engineering best practices such as domain-driven design, event-driven architectures, and microservices
  • Deep expertise in agentic workflows, AI evaluation solutions, prompt management, and secure AI development and testing practices
  • Strong knowledge of relational and document-based databases, data storage paradigms, and efficient RESTful API design
  • Experience establishing robust CI/CD pipelines, automated testing (unit and integration), and deployment practices
  • Strong leadership skills, including effective planning and management of complex projects, mentoring of team members, and fostering a collaborative, high-performing engineering culture
  • Excellent communicator, able to articulate complex technical concepts clearly to both technical and non-technical stakeholders
  • Bachelor's degree in Computer Science, Software Engineering, or a related technical field (preferred)
Job Responsibility:
  • Design and implement AI platforms to enable scalable and secure access to LLMs from multiple model providers for diverse use cases
  • Design and implement agentic workflows, agentic tool ecosystems, and LLM prompt management solutions
  • Design, build, and optimize scalable model training, fine tuning, and inference pipelines, ensuring robust integration with production systems
  • Influence technical strategy and approach to developing embedding stores, vector databases, and other reusable assets
  • Lead initiatives to streamline ML and AI workflows, improve operational efficiency, and establish standardized procedures to achieve consistent, high-quality results across our AI systems
  • Design and develop backend services and RESTful APIs using Python and FastAPI, integrating seamlessly with ML pipelines and services
  • Take operational responsibility for team-owned services, including performance monitoring, optimization, troubleshooting, and participation in an on-call rotation
  • Collaborate with both technical and non-technical colleagues, including data and applied scientists, software engineers, product managers, and business stakeholders, to deliver reliable and scalable ML-driven products
  • Coach and mentor fellow ML engineers, promoting a culture of collaboration, continuous improvement, and engineering excellence within the team
  • Employ a diverse set of tools and platforms including Python, AWS, Databricks, Docker, Kubernetes, FastAPI, Terraform, Snowflake, Coralogix, and GitHub to build, deploy, and maintain scalable, highly available machine learning infrastructure
What we offer:
  • Competitive pay
  • Comprehensive healthcare benefits
  • Financial assistance for things like hybrid work, family planning
  • Generous parental leave
  • Flexible time-off policies
  • Mental health and wellness resources
  • Learning, development, and recognition programs

Head of Data

As Head of Data, you will own our end-to-end data function: analytics, data engi...
Location:
United States
Salary:
239300.00 - 280000.00 USD / Year
Octave
Expiration Date
Until further notice
Requirements:
  • 8+ years of experience in data analytics/data science
  • 4+ years in a leadership role scaling teams and delivering cross-functional impact
  • Expertise with analytics reporting and data platforms (Tableau, SQL, Adverity)
  • Fluency in SQL and Python; experience with modern data stacks (e.g., BigQuery, dbt, Metabase, Mixpanel)
  • Proven leadership experience overseeing multi-disciplinary data teams (engineering, analytics, science)
  • Track record of hiring and scaling teams at high-growth startups or scale-ups
  • Experience shipping ML/AI solutions into production with measurable business impact
  • Strong technical background: modern data stacks (dbt, Snowflake, Airflow), programming (Python, SQL), ML/AI frameworks
  • Strategic thinker who can roll up sleeves when needed
Job Responsibility:
  • Define and drive the company-wide data & AI/ML vision, aligned with business and product strategy
  • Partner with executives across Product, Engineering, Growth, and Finance to ensure data informs key decisions and creates competitive advantage
  • Evangelize data culture — making data and AI central to how we operate and build
  • Oversee pipelines, warehouse, and infrastructure for reliability, observability, and scale
  • Establish a single source of truth for KPIs and reporting
  • Implement governance for data quality, security, and compliance
  • Lead analytics team to deliver insights that drive product, growth, and operations
  • Scale self-serve analytics adoption while maintaining consistency and quality
  • Build frameworks for experimentation (A/B testing, causal inference)
  • Launch and scale high-impact AI/ML initiatives, from pilots to production
What we offer:
  • Company-sponsored life insurance
  • Disability and AD&D plans
  • Voluntary benefits such as 401k retirement, medical, dental, vision, FSA, HSA, dependent care and commuter/parking options
  • Generous paid time off and paid parental leave
  • Equity in the form of stock options
  • An annual bonus