AI is rapidly becoming embedded in enterprise applications, agents, and copilots, creating a new security frontier. Microsoft’s Azure AI Foundry team is looking for a Principal Product Manager – AI Security to help define how enterprises secure AI systems across their lifecycle, from development to deployment and operation.

As organizations deploy AI agents that can access data, call tools, and take actions, traditional security models are no longer sufficient. This role will help build the security foundation for agentic systems, addressing emerging threats such as prompt injection, sensitive data exfiltration, unsafe tool usage, model misuse, and adversarial manipulation. You will lead product efforts to help enterprises detect, prevent, and govern AI security risks, working closely with security engineers, CISOs, and AI developers. This includes integrating AI security capabilities with Microsoft’s broader security stack, including Microsoft Defender, Entra, Purview, and Azure AI Foundry, to deliver a comprehensive security platform for AI systems.

This is a highly technical product role at the intersection of AI systems, cybersecurity, and enterprise cloud infrastructure. You will partner across Microsoft’s Responsible AI, Security, and Azure teams to define how organizations protect AI systems and safely deploy AI-powered applications at scale.
Job Responsibilities:
Lead the AI Security product area within Azure AI Foundry, defining the long-term vision, strategy, and roadmap for securing AI applications and agents
Design and deliver security capabilities that help organizations identify, mitigate, and monitor AI attack patterns, including prompt injection, jailbreaks, data exfiltration, malicious tool calls, and model misuse
Partner with security engineers, red teams, and AI researchers to translate emerging AI attack techniques into productized protections
Integrate AI security capabilities with Microsoft’s broader security ecosystem, including Defender (threat detection), Entra (identity and access), and Purview (data protection and governance)
Work closely with enterprise security leaders, CISOs, and security practitioners to understand real-world AI security challenges and design solutions that fit existing security operations
Drive 0-to-1 product development, bringing new AI security capabilities from early concept and experimentation through production launch and adoption
Establish metrics for AI security posture and product success, including risk coverage, detection efficacy, and customer adoption
Represent Microsoft’s approach to AI security and safe agent deployment in customer engagements, industry conversations, and internal strategy discussions
Requirements:
Bachelor's Degree AND 8+ years of experience in product/service/program management or software development, OR equivalent experience
Ability to meet Microsoft, customer and/or government security screening requirements
Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Nice to have:
Familiarity with modern cybersecurity concepts, such as threat modeling, secure system architecture, and attack surface analysis
Extensive familiarity with AI system architectures, including LLM applications, agents, and tool-using AI systems
Understanding of common AI attack patterns, such as prompt injection, jailbreaks, sensitive data leakage, indirect prompt injection, and adversarial manipulation
Experience working with security engineers, red teams, or security operations teams
Ability to engage credibly with CISOs, security architects, and security engineering teams
Demonstrated experience building 0-to-1 software products or platforms
Experience delivering products in enterprise security, developer tools, or cloud infrastructure
Familiarity with AI security testing, evaluation, or automated red teaming techniques
Strong collaboration skills with engineering, research, and security teams across large organizations
Proven communication skills and ability to translate complex technical security concepts into clear product strategy and customer value