As a member of BlackRock’s Product Security Engineering organization, the AI Security Engineer will focus on securing AI‑enabled systems across both internal enterprise platforms and customer‑facing products. This includes LLM‑powered applications, agent‑based workflows, retrieval‑augmented generation (RAG) pipelines, and tool‑driven integrations. In this role, you will conduct hands‑on security testing, threat modeling, and security design reviews for AI systems, while also building practical security tooling and guardrails that integrate directly into engineering workflows and CI/CD pipelines. You will work closely with AI platform teams, product engineers, and security partners to identify risks early, drive remediation, and improve secure‑by‑default patterns for AI development. This is a senior individual‑contributor role aligned to BlackRock’s Vice President level.
Job Responsibilities:
Drive security outcomes for AI systems across internal enterprise platforms and client‑facing products
Perform security testing and threat modeling for AI‑enabled applications, agent workflows, and tool ecosystems
Identify and remediate risks such as prompt injection, unsafe tool usage, data exposure, and insecure execution paths
Partner with engineering teams early in the design lifecycle to influence secure AI architectures
Design and implement AI security guardrails and enforcement mechanisms
Build and deploy custom security tooling and automated tests integrated into CI/CD pipelines
Contribute to firm‑wide standards and best practices for secure AI development and deployment
Act as a technical security advisor to engineering teams and leadership
Requirements:
4+ years of experience in software engineering, product security, application security, or platform engineering
Demonstrated ability to write code and scripts to support security testing, tooling, and analysis
Experience working with distributed, cloud‑native platforms
Proven ability to analyze complex systems and communicate security risk clearly and pragmatically
Nice to have:
Experience securing AI‑enabled systems or building AI/ML platforms and applications at scale
Background in product security, application security, platform security, and/or AI systems engineering
Experience with LLM applications, agent‑based workflows, RAG pipelines, or tool/plugin architectures
Familiarity with AI security risks such as prompt injection, jailbreaks, tool misuse, and data leakage
Experience building security automation, guardrails, or CI/CD‑integrated testing frameworks