You'll be the technical owner for a small portfolio of strategic accounts — typically large enterprises with production environments that matter. 'Owner' means you know their architecture, their team, their roadmap, and their failure modes better than anyone else at our company. You're the person they call when something's on fire, and also the person they call before they decide to build something new.
Responsibilities:
Architect their AI infrastructure layer. LLM gateways with auth, rate limiting, and observability. Agent-to-agent communication patterns. Securing inference traffic across multi-cloud environments. Most of our customers haven't done this before — you have, or you'll figure it out alongside the engineering team and write the playbook everyone else uses.
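To give a flavor of the gateway work: a minimal sketch of per-client, token-count-aware rate limiting, the kind of admission control an LLM gateway sits in front of inference traffic to enforce. This is illustrative only, not our actual gateway; the names (`TokenBucket`, `admit`) and the rate/capacity numbers are made up for the example.

```python
import time


class TokenBucket:
    """Per-client token bucket: the core of gateway-side rate limiting."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # burst ceiling
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self, cost: int = 1) -> bool:
        """Refill based on elapsed time, then try to spend `cost` tokens."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False


# One bucket per API key. A production gateway would keep this state in a
# shared store (e.g. Redis) so all gateway replicas see the same counters.
buckets: dict[str, TokenBucket] = {}


def admit(api_key: str, prompt_tokens: int) -> bool:
    """Admission check: charge the request by its prompt token count,
    not per-request, since LLM cost scales with tokens."""
    bucket = buckets.setdefault(api_key, TokenBucket(rate=100.0, capacity=200))
    return bucket.allow(cost=prompt_tokens)
```

The design choice worth noting: charging by token count rather than request count, because a single 50k-token prompt costs the backend far more than fifty 1k-token ones.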
Run technical issue resolution end-to-end. When something escalates, you partner with Support and Engineering, drive root cause, and often dig in directly. We expect Principal-level architects to get their hands dirty when it accelerates the outcome — reading code, reproducing issues, writing reference implementations.
Drive deep adoption. You'll consult on performance tuning, deployment patterns, and operational best practices. You'll spot new use cases inside the account and bring them forward.
Influence the product. You sit closer to real production AI workloads than almost anyone in the company. Product Management and Engineering treat your feedback as a primary signal for the roadmap.
Partner with the account team (CSM, AE, SE) on risk, renewal, and expansion — but you're the technical voice in the room, not the commercial one.
Requirements:
5+ years in a customer-facing technical role — Solutions Architect, Customer Engineer, SRE, or Senior Support Engineer at an infra company. You've owned strategic accounts before.
Deep cloud-native chops: Kubernetes, service mesh (Istio, Cilium), API gateways and proxies (Envoy or similar). You've debugged these in production, not just deployed them.
1+ years hands-on with AI/ML infrastructure — LLMs, agentic frameworks, model-serving platforms, inference gateways. You don't need to have trained a model, but you should understand how production AI traffic actually flows.
Scripting/programming comfort in Go, Python, or Bash. You'll write diagnostics, automation, and reference code.
The ability to talk to a platform engineer at 10am and a CTO at 2pm without changing who you are.
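As a concrete example of the throwaway diagnostics this role writes: a latency summarizer of the sort you might drop into a triage script while reproducing a customer issue. The function name and the nearest-rank percentile choice are illustrative assumptions, not a prescribed tool.

```python
import statistics


def latency_summary(samples_ms: list[float]) -> dict[str, float]:
    """Summarize request latencies (ms) the way a quick triage script would:
    tail percentiles first, since means hide the incidents that page you."""
    ordered = sorted(samples_ms)

    def pct(p: float) -> float:
        # Nearest-rank percentile: crude but dependency-free, fine for triage.
        idx = min(len(ordered) - 1, max(0, round(p / 100 * len(ordered)) - 1))
        return ordered[idx]

    return {
        "p50": pct(50),
        "p95": pct(95),
        "p99": pct(99),
        "mean": statistics.fmean(ordered),
        "max": ordered[-1],
    }
```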