Senior ML Inference Engineer - Platform

General Motors

Location:
Austin, United States

Contract Type:
Employment contract

Salary:

128700.00 - 261300.00 USD / Year

Job Description:

The Model Deployment & Inference Solutions team in GM AV deploys machine learning models from training frameworks (e.g. PyTorch) onto autonomous vehicle hardware. Our mission is two-fold: build the ML deployment platform that makes model rollouts fast and predictable, and optimize models so they meet the real-time latency and memory budgets required to run on-vehicle. Our work is on the critical path of GM's publicly committed launch of eyes-off (hands-free, eyes-free) autonomous driving in 2028, debuting on the Cadillac Escalade IQ, building on Super Cruise's billion-plus hands-free miles.

Job Responsibility:

  • Design, build, and operate the ML deployment platform that automates the path from trained model to on-vehicle inference
  • Drive cross-organization model deployments to the autonomous vehicle stack, partnering with model development teams to take high-value models from training to production on-vehicle
  • Build agentic tools that diagnose and fix deployment-blocking issues, automating workflows currently performed manually by engineers
  • Build the developer experience that ML model development teams use day to day: tooling, dashboards, automation, and observability
  • Drive shift-left validation that surfaces deployment risk (compile, runtime, parity, latency) early in the model development cycle
  • Build platform tools that integrate the work of our sister teams (kernels, compiler, reduced precision and parity) so their optimization wins land directly in the deployment workflow
  • Partner with the team's Performance pillar and model development teams across the AV organization

Requirements:

  • BS, MS, or PhD in Computer Science or a related technical field
  • 3+ years of relevant industry experience
  • Strong fundamentals and excellent coding ability in Python
  • Experience building or operating production platform or infrastructure systems where reliability, observability, and extensibility matter
  • Experience with ML model deployment, inference integration, model optimization workflows, or model serving infrastructure, with at least one prior context where you owned the path from a trained model to a running inference workload
  • Experience using coding agents (Cursor, Claude Code, GitHub Copilot, or equivalent) as part of your engineering workflow
  • Experience designing clean, well-tested software with clear interfaces and good abstractions
  • Strong cross-team collaboration skills

Nice to have:

  • Experience building agentic or LLM-powered developer tooling
  • Experience with ML or workflow orchestration frameworks (Airflow, Temporal, Flyte, Ray, Kubeflow, or equivalent)
  • Familiarity with the NVIDIA GPU stack at the integration level (CUDA-aware Python, TensorRT, Triton inference server, torch.compile, ONNX)
  • Experience with inference-serving frameworks (Triton, TorchServe, Ray Serve, vLLM) or edge-deployment toolchains
  • Experience with low-latency or real-time systems
  • Experience in autonomous vehicles, robotics, or other safety-critical ML deployment domains
  • Open-source contributions to PyTorch, Ray, Airflow, Temporal, vLLM, TensorRT, or related projects

What we offer:
  • Medical
  • Dental
  • Vision
  • Health Savings Account
  • Flexible Spending Accounts
  • Retirement savings plan
  • Sickness and accident benefits
  • Life insurance
  • Paid vacation & holidays
  • Tuition assistance programs
  • Employee assistance program
  • GM vehicle discounts

Additional Information:

Job Posted:
May 14, 2026

Employment Type:
Fulltime

Work Type:
Remote work

Similar Jobs for Senior ML Inference Engineer - Platform

Senior Software Engineer - ML Infrastructure

We build simple yet innovative consumer products and developer APIs that shape h...
Location:
San Francisco, United States
Salary:
180000.00 - 270000.00 USD / Year
Plaid
Expiration Date:
Until further notice
Requirements:
  • 5+ years of industry experience as a software engineer, with strong focus on ML/AI infrastructure or large-scale distributed systems
  • Hands-on expertise in building and operating ML platforms (e.g., feature stores, data pipelines, training/inference frameworks)
  • Proven experience delivering reliable and scalable infrastructure in production
  • Solid understanding of ML Ops concepts and tooling, as well as best practices for observability, security, and reliability
  • Strong communication skills and ability to collaborate across teams
Job Responsibility:
  • Design and implement large-scale ML infrastructure, including feature stores, pipelines, deployment tooling, and inference systems
  • Drive the rollout of Plaid’s next-generation feature store to improve reliability and velocity of model development
  • Help define and evangelize an ML Ops “golden path” for secure, scalable model training, deployment, and monitoring
  • Ensure operational excellence of ML pipelines and services, including reliability, scalability, performance, and cost efficiency
  • Collaborate with ML product teams to understand requirements and deliver solutions that accelerate experimentation and iteration
  • Contribute to technical strategy and architecture discussions within the team
  • Mentor and support other engineers through code reviews, design discussions, and technical guidance
What we offer:
  • medical, dental, vision, and 401(k)
  • Fulltime

Senior ML Platform Engineer

At WHOOP, we're on a mission to unlock human performance and healthspan. WHOOP e...
Location:
Boston, United States
Salary:
150000.00 - 210000.00 USD / Year
Whoop
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or Master’s Degree in Computer Science, Engineering, or a related field, or equivalent practical experience
  • 5+ years of experience in software engineering with a focus on ML infrastructure, cloud platforms, or MLOps
  • Strong programming skills in Python, with experience in building distributed systems and REST/gRPC APIs
  • Deep knowledge of cloud-native services and infrastructure-as-code (e.g., AWS CDK, Terraform, CloudFormation)
  • Hands-on experience with model deployment platforms such as AWS SageMaker, Vertex AI, or Kubernetes-based serving stacks
  • Proficiency in ML lifecycle tools (MLflow, Weights & Biases, BentoML) and containerization strategies (Docker, Kubernetes)
  • Understanding of data engineering and ingestion pipelines, with ability to interface with data lakes, feature stores, and streaming systems
  • Proven ability to work cross-functionally with Data Science, Data Platform, and Software Engineering teams, influencing decisions and driving alignment
  • Passion for AI and automation to solve real-world problems and improve operational workflows
Job Responsibility:
  • Architect, build, own, and operate scalable ML infrastructure in cloud environments (e.g., AWS), optimizing for speed, observability, cost, and reproducibility
  • Create, support, and maintain core MLOps infrastructure (e.g., MLflow, feature store, experiment tracking, model registry), ensuring reliability, scalability, and long-term sustainability
  • Develop, evolve, and operate MLOps platforms and frameworks that standardize model deployment, versioning, drift detection, and lifecycle management at scale
  • Implement and continuously maintain end-to-end CI/CD pipelines for ML models using orchestration tools (e.g., Prefect, Airflow, Argo Workflows), ensuring robust testing, reproducibility, and traceability
  • Partner closely with Data Science, Sensor Intelligence, and Data Platform teams to operationalize and support model development, deployment, and monitoring workflows
  • Build, manage, and maintain both real-time and batch inference infrastructure, supporting diverse use cases from physiological analytics to personalized feedback loops for WHOOP members
  • Design, implement, and own automated observability tooling (e.g., for model latency, data drift, accuracy degradation), integrating metrics, logging, and alerting with existing platforms
  • Leverage AI-powered tools and automation to reduce operational overhead, enhance developer productivity, and accelerate model release cycles
  • Contribute to and maintain internal platform documentation, SDKs, and training materials, enabling self-service capabilities for model deployment and experimentation
  • Continuously evaluate and integrate emerging technologies and deployment strategies, influencing WHOOP’s roadmap for AI-driven platform efficiency, reliability, and scale
What we offer:
  • equity
  • benefits
  • Fulltime

Senior Software Engineer - Network Enablement (Applied ML)

We build simple yet innovative consumer products and developer APIs that shape h...
Location:
San Francisco, United States
Salary:
180000.00 - 270000.00 USD / Year
Plaid
Expiration Date:
Until further notice
Requirements:
  • Strong software engineering skills including systems design, APIs, and building reliable backend services (Go or Python preferred)
  • Production experience with batch and streaming data pipelines and orchestration tools such as Airflow or Spark
  • Experience building or operating real-time scoring and online feature-serving systems, including feature stores and low-latency model inference
  • Experience integrating model outputs into product flows (APIs, feature flags) and measuring impact through experiments and product metrics
  • Experience with model lifecycle and operations: model registries, CI/CD for models, reproducible training, offline & online parity, monitoring and incident response
Job Responsibility:
  • Embed model inference into Network Enablement product flows and decision logic (APIs, feature flags, backend flows)
  • Define and instrument product + ML success metrics (fraud reduction, retention lift, false positives, downstream impact)
  • Design and run experiments and rollout plans (backtesting, shadow scoring, A/B tests, feature-flagged releases) to validate product hypotheses
  • Build and operate offline training pipelines and production batch scoring for bank intelligence products
  • Ship and maintain online feature serving and low-latency model inference endpoints for real-time partner/bank scoring
  • Implement model CI/CD, model/version registry, and safe rollout/rollback strategies
  • Monitor model/data health: drift/regression detection, model-quality dashboards, alerts, and SLOs targeted to partner product needs
  • Ensure offline and online parity, data lineage, and automated validation / data contracts to reduce regressions
  • Optimize inference performance and cost for real-time scoring (batching, caching, runtime selection)
  • Ensure fairness, explainability and PII-aware handling for partner-facing ML features
What we offer:
  • medical
  • dental
  • vision
  • 401(k)
  • equity
  • commission
  • Fulltime

Senior Principal Technical Program Manager - ML Platform

Location:
Salary:
231300.00 - 301975.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • 8+ years of experience on software teams as Development Manager, Technical Product Manager or TPM leading technical platforms areas
  • Deep domain experience in AI and/or Search. Example: Model Inference, Model Evaluation, Model Training, LLM Ops, Semantic Search, Search Relevance, etc.
  • Partner with Engineering in defining direction, strategy and execution at Platform level
  • Strategic thinking and ability to understand business objectives to translate them into technical problems and programs.
  • Technical understanding of systems involved. Willingness to develop domain expertise in the area they operate - storage, networking, authentication, capacity management, service deployments, etc.
  • TPMs are not expected to write or read code, but are expected to understand system flows, block architectures, APIs and such.
  • Experience defining and running end-to-end complex technical programs
  • Strong leadership, organizational, and communication skills
Job Responsibility:
  • Understand and stay up-to-date on latest innovations in AI and Search. Partner closely with engineering teams to translate these into practical platform evolution for Atlassian bringing value to our customers.
  • Analyze business objectives, customer needs, product adoption inhibitors and opportunities, industry trends, and based on these, in close collaboration with your stakeholders, define a long-term strategy and roadmap for your platform and product components.
  • Understand business objectives and translate them into technical systems problems that need to be prioritized and solved in the current business environment
  • Define specific systems programs and create a plan of action for realizing those programs. Such programs could be around capacity planning, migration efforts, high availability, network architecture, performance optimization, reliability improvements and more.
  • Use your technical understanding of Atlassian and related systems to partner with and influence engineers and architects in making progress on these problems.
  • Responsible for taking a systematic approach to engineering problems. This includes: prioritizing tasks, scoping out the project, defining objectives, and making consistent progress against each of these.
  • Be accountable for the success of these technical programs by managing the entire lifecycle from initiation to forecasting, budgeting, scheduling, etc.
  • Manage complex dependencies and projects with a broad scope across the company
What we offer:
  • health and wellbeing resources
  • paid volunteer days

Senior Software Engineer (TypeScript) - AI/ML

We are looking for a Senior Software Engineer to drive the development of AI/ML-...
Location:
The Netherlands
Salary:
Not provided
ClickHouse
Expiration Date:
Until further notice
Requirements:
  • 5+ years of software engineering experience in production environments
  • Exposure to working directly with AI/ML technologies
  • Strong frontend skills with TypeScript/JavaScript and React
  • Backend development experience in TypeScript or Python, with a focus on API design and service architecture
  • You have a high level of ownership and can drive features from concept to production with minimal supervision
  • You thrive in collaborative environments and can effectively communicate technical concepts to diverse stakeholders
Job Responsibility:
  • Feature Development: Design and implement AI-powered features across the full stack, from backend inference services to intuitive frontend interfaces within the ClickHouse Cloud platform
  • API Architecture: Create robust, scalable APIs that connect ClickHouse's database capabilities with modern AI/ML inference systems and external/internal AI services
  • UI/UX Implementation: Build responsive, intuitive user interfaces that make complex AI functionalities accessible and valuable to users of all technical backgrounds
  • Ecosystem Integrations: Implement and maintain integrations with the broader AI/ML ecosystem and standards, ensuring that ClickHouse as a technology works seamlessly with popular frameworks and tools
  • Technical Integration: Integrate models into production systems with proper monitoring, versioning, observability, and evaluation
What we offer:
  • Flexible work environment - ClickHouse is a globally distributed company and remote-friendly. We currently operate in 20 countries
  • Healthcare - Employer contributions towards your healthcare
  • Equity in the company - Every new team member who joins our company receives stock options
  • Time off - Flexible time off in the US, generous entitlement in other countries
  • A $500 Home office setup if you’re a remote employee
  • Global Gatherings – We believe in the power of in-person connection and offer opportunities to engage with colleagues at company-wide offsites

Senior Machine Learning Engineer (Health)

WHOOP is an advanced health and fitness wearable, on a mission to unlock human p...
Location:
Boston, United States
Salary:
150000.00 - 210000.00 USD / Year
Whoop
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s Degree in Computer Science, Data Science, Applied Mathematics, or a related field. Master’s preferred
  • 5+ years of professional experience as a Machine Learning Engineer or Software Engineer with focus on ML systems
  • Proven expertise working with time series data (wearable, physiological, or high-frequency sensor data strongly preferred)
  • Experience designing and deploying ML inference systems at scale: both real-time streaming and large-scale batch pipelines
  • Strong coding skills in Python (scientific stack) and SQL, with a track record of writing clean, production-quality code
  • Strong communication skills to collaborate across engineering, research, and product teams
  • Proven experience deploying and maintaining ML systems on cloud platforms (AWS or GCP)
  • Working familiarity with MLOps best practices: model versioning, CI/CD for ML, observability, and monitoring for inference systems
  • Ability to reason about and design for performance trade-offs (latency vs. throughput vs. cost) when building ML inference systems
  • Strong understanding of backend service development (APIs and service reliability) as it applies to serving ML models at scale
Job Responsibility:
  • Create, improve, and maintain production services that provide analysis for health features in collaboration with Data Scientists and MLOps Engineers
  • Collaborate with Data Engineers to improve ML data pipelines, tooling, and validation systems that support robust model performance
  • Work alongside data scientists to translate research prototypes into production ML systems optimized for scale, latency, and cost efficiency
  • Collaborate with researchers and product teams to align model development with health insights and member impact
  • Participate in on-call rotations for data science services, ensuring uptime and performance in production environments
What we offer:
  • equity
  • benefits
  • Fulltime

Senior Software Engineer – AI

NStarX is seeking a highly skilled Senior Software Engineer – AI with a strong f...
Location:
Hyderabad, India
Salary:
Not provided
NStarX
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Machine Learning, Data Science, or a related field (PhD is a plus)
  • 9+ years of experience in AI/ML engineering or related roles
  • 3+ years of experience in Generative AI with team leadership responsibilities
  • Proven track record of production-grade ML and GenAI model development and deployment
  • Programming: Python (preferred)
  • GenAI Frameworks: Hugging Face Transformers, Diffusers, LangChain, TGI
  • Serving & Inference: FastAPI, gRPC, NVIDIA Triton, TorchServe
  • Cloud Platforms: AWS (SageMaker, EKS), GCP (Vertex AI, GKE), Azure (Azure ML, AKS)
  • MLOps & DevOps: Kubeflow, MLflow, GitHub Actions, Jenkins, Helm, Terraform
  • Optimization Techniques: Model quantization, distillation, pipeline and tensor parallelism
Job Responsibility:
  • Design, develop, and deploy machine learning models and AI algorithms to address complex business challenges
  • Lead and mentor a team of AI/ML engineers, ensuring quality and scalability in solution design and implementation
  • Collaborate closely with cross-functional teams including data scientists, software engineers, product managers, and UX designers
  • Lead the development and deployment of Generative AI applications across text, code, image, and audio modalities using state-of-the-art LLMs
  • Design and implement CI/CD pipelines for the GenAI model lifecycle including training, validation, packaging, and deployment
  • Apply best practices for model performance tuning, cost optimization, and scalable deployment in cloud and hybrid environments
  • Develop prompt engineering, fine-tuning strategies (LoRA, QLoRA, PEFT), and evaluation protocols tailored to business use cases
  • Stay current with emerging trends in AI, ML, and Generative AI and drive adoption across teams
  • Document processes, model architectures, and deployment strategies for traceability and knowledge sharing
  • Work closely with cross-functional teams to gather requirements and deliver high-quality solutions
What we offer:
  • Competitive salary aligned with market standards
  • Opportunities for professional development and skill enhancement
  • A collaborative and innovative work environment
  • Fulltime

Senior Machine Learning Engineering Manager, Gen AI

We're seeking a Senior Machine Learning Manager (M60) to lead a cross-functional...
Location:
United States
Salary:
193500.00 - 303150.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • 8+ years in ML, search, or backend engineering roles, with 3+ years leading teams
  • Strong track record of shipping ML-powered or LLM-integrated user-facing products
  • Experience with RAG systems (vector search, hybrid retrieval, LLM orchestration)
  • Deep experience in either modeling (e.g., LLMs, search, NLP) or engineering (e.g., backend infra, full-stack), with the ability to lead end-to-end
  • Deep understanding of LLM ecosystems (OpenAI, Claude, Mistral, OSS), orchestration frameworks (LangChain, LlamaIndex), and vector databases (Weaviate, Pinecone, FAISS, etc.)
  • Strong product intuition and ability to translate complex tech into valuable user features
  • Familiarity with GenAI evaluation methods: hallucination detection, groundedness scoring, and human-in-the-loop feedback loops
  • Master’s or PhD in Computer Science, Machine Learning, or a related field preferred, or equivalent practical experience
Job Responsibility:
  • Lead the vision, design, and execution of LLM-powered AI products, leveraging advanced AI modeling (e.g. SLM post-training/fine-tuning), RAG architectures, and hybrid ranking systems
  • Define system architecture across retrievers, rankers, orchestration layers, prompt templates, and feedback mechanisms
  • Work closely with product and design teams to ensure delightful, fast, and grounded user experiences
  • Build and manage a cross-disciplinary team including ML engineers, backend/frontend engineers, and applied scientists
  • Foster a culture of E2E ownership — empowering the team to move from prototype to production quickly and iteratively
  • Mentor individuals to grow in both technical depth and product acumen
  • Shape the technical roadmap and long-term strategy for GenAI search across Atlassian’s product suite
  • Partner with platform and infra teams to scale inference, evaluate performance, and integrate usage signals for continuous improvement
  • Champion data quality, grounding, and responsible AI practices in all deployed features
What we offer:
  • health and wellbeing resources
  • paid volunteer days
  • Fulltime