This list contains only the countries for which job offers have been published in the selected language (e.g., in the French version, only job offers written in French are displayed, and in the English version, only those in English).
The Prompt Injection Specialist will design and execute structured adversarial prompt testing against a LLM chatbot. The focus is exclusively on prompt-layer vulnerabilities: jailbreaks, direct and indirect prompt injection, instruction override, and boundary attacks. This is not a cybersecurity or infrastructure penetration testing role.
Job Responsibility:
Design and execute structured adversarial prompt testing against a LLM chatbot
Focus exclusively on prompt-layer vulnerabilities: jailbreaks, direct and indirect prompt injection, instruction override, and boundary attacks
Requirements:
Hands-on experience with LLM prompt injection, jailbreaking, and adversarial prompt design
Strong understanding of chatbot architectures, system prompt structures, and guardrail mechanisms
Familiarity with OWASP LLM Top 10, MITRE ATLAS, and relevant adversarial ML frameworks
Experience designing structured prompt test sets with coverage metrics
Ability to define failure taxonomies and severity classification for prompt-layer attacks
Proficiency with common LLM APIs and chat interfaces (OpenAI, Anthropic, Azure OpenAI, or equivalent)