The AI Integrity & Provenance team builds post‑deployment safety, abuse monitoring, and content authenticity systems for frontier AI models and experiences across Microsoft. We are looking for a Principal Product Manager to own strategy and execution for integrity and provenance capabilities that enable responsible deployment, regulatory compliance, and real‑world abuse detection of AI systems at scale.
Job Responsibilities:
Lead product strategy for AI Integrity Foundations across provenance, abuse monitoring, incident response, and social listening, enabling safe, accountable, and resilient deployment of AI systems and agents at scale
Define the long-term vision, strategy, and roadmap for foundational integrity capabilities within Azure AI Foundry, ensuring consistent post-deployment safeguards across models, applications, and agentic workflows
Improve abuse monitoring and detection systems that identify and mitigate real-world AI threats and misuse, including prompt injection, jailbreaks, data exfiltration, malicious tool calls, coordinated abuse, model exploitation, and other novel vectors
Own incident response product capabilities, enabling rapid detection, triage, investigation, and remediation of AI-related safety and security incidents, with clear metrics for MTTR, coverage, and enforcement effectiveness
Evolve provenance and content authenticity capabilities, supporting traceability, attribution, auditability, and regulatory requirements for trustworthy AI outputs
Partner closely with security engineers, red teams, AI researchers, and integrity analysts to translate emerging attack patterns, abuse signals, and novel harm vectors into durable, productized protections
Integrate AI integrity and security capabilities with Microsoft’s broader ecosystem, including Defender (threat detection and response), Entra (identity and access control), and Purview (data protection, governance, and compliance)
Drive 0‑to‑1 product development, taking new integrity and safety concepts from early experimentation through production launch, customer adoption, and operational maturity
Establish and own metrics and dashboards for AI integrity posture and product success, including detection coverage, signal quality, response effectiveness, customer impact, and regulatory readiness
Requirements:
Bachelor's Degree AND 8+ years of experience in product/program management OR equivalent experience
Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role
Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter
Nice to have:
Bachelor's Degree AND 12+ years of experience in product/program management OR equivalent experience
4+ years of experience taking a product, feature, or experience to market (e.g., design, addressing product-market fit, and launch of an internal tool or framework)
6+ years of experience improving product metrics for a product, feature, or experience in a market (e.g., growing the customer base, expanding customer usage, reducing customer churn)
6+ years of experience disrupting a market for a product, feature, or experience (e.g., competitive disruption, displacing an established competing product)
Platform PM experience driving foundational or horizontal capabilities
Demonstrated systems‑level thinking in safety, security, or reliability‑critical domains
Experience shipping AI platforms or trust, safety, or integrity‑focused products into production
Experience with AI security testing, evaluation, or automated red‑teaming techniques for generative AI or agentic systems
Familiarity with post‑deployment AI monitoring, incident response workflows, and operational metrics such as detection coverage, signal quality, and response effectiveness
Exposure to enterprise governance, data protection, and compliance systems, particularly as they relate to AI deployments
Background working on safety‑critical, security‑critical, or high‑risk systems operating at global scale