We’re seeking a Staff Product Security Engineer with deep AI/ML security expertise to strengthen Crusoe’s security posture across applications, infrastructure, and distributed AI systems. This is a highly technical role focused on advanced penetration testing, AI/ML attack surface research, and building secure-by-design guardrails that engineering teams rely on. You’ll operate at the intersection of offensive security, AI systems, and production engineering, owning security outcomes end-to-end while influencing system design across the organization.
Job Responsibilities:
Performing advanced manual penetration testing across complex applications, infrastructure, Kubernetes environments, and distributed microservice ecosystems
Leading offensive security initiatives including red team operations, adversary simulation, and security research
Securing AI/ML systems end-to-end, including LLM pipelines, vector databases, RAG architectures, and agentic workflows
Identifying and researching novel attack surfaces unique to LLMs and autonomous systems, contributing to internal and external AI security research
Influencing secure system design across the SDLC, embedding security into CI/CD pipelines, container images, and deployment workflows
Integrating and operationalizing security tooling (SAST, DAST, SCA, container scanning) and driving remediation of complex application-layer vulnerabilities
Building internal security guardrails such as hardened base images, reusable libraries, and policy-as-code frameworks
Developing production-grade security tooling and leading cross-functional security programs from design through deployment
Requirements:
8-10 years of deep hands-on experience in offensive security, including manual penetration testing, red team operations, and adversary simulation
Familiarity with modern C2 frameworks (e.g., Cobalt Strike, Sliver, Havoc), exploit development, and security research
Strong expertise across the AI/ML stack, including MLOps, inference architectures, vector databases, RAG, and agentic frameworks (e.g., ReAct, Reflexion)
Experience building, deploying, and securing LLM pipelines and AI workflows in Kubernetes and/or bare-metal environments
Strong software engineering foundations with experience shipping production code in Go, Python, or Rust
Hands-on experience securing Kubernetes, containers, VMs, and CI/CD environments
Deep understanding of application security vulnerabilities, secure coding practices, and distributed system design
Demonstrated ability to lead complex, cross-functional security initiatives end-to-end
Strong communication skills with the ability to influence both engineering teams and executive stakeholders
Nice to have:
Public contributions to offensive security or AI security research (talks, blogs, tooling, CVEs, etc.)
Experience building internal red team or adversary simulation programs
Background in high-performance computing, AI infrastructure, or cloud-native platform security
Experience designing policy-as-code frameworks at scale