AI is transforming how organizations operate, and with it comes a new frontier of governance. Microsoft's Azure AI Foundry team is looking for a Principal Product Manager to lead the AI Governance area, defining how enterprises manage trust, compliance, and control across their AI systems. As AI agents and copilots become central to enterprise workflows, this role drives the development of governance experiences that unify security, policy, and observability across the AI lifecycle. You'll shape how customers ensure responsible operation of AI agents through integrations with Microsoft Entra, Purview, Defender, and AI Foundry, building the foundation for continuous compliance and agentic trust at scale.
This is a highly visible and technical product role at the intersection of AI safety, compliance engineering, and enterprise infrastructure. You will partner with cross-company teams in Responsible AI (RAI), OCTO, Purview, Entra, and Azure Security to bring AI governance into the core of the Microsoft cloud ecosystem, enabling every organization to innovate confidently and responsibly.
Job Responsibilities:
Lead the AI Governance product area within Azure AI Foundry — defining the long-term vision, strategy, and roadmap for policy management, compliance automation, and regulatory readiness
Design and deliver core governance experiences, including agent-level policies, data sensitivity signals, prohibited action controls, and AI system compliance dashboards
Integrate Foundry governance with Microsoft’s broader security and compliance stack — Entra (identity and A2A policies), Purview (data classification and DLP), and Defender (threat insights)
Translate emerging AI regulations (EU AI Act, ISO 42001, NIST AI RMF) into actionable platform capabilities and customer experiences
Partner with Responsible AI researchers and engineering teams to operationalize ethical principles into measurable safeguards and evaluators
Collaborate with customers, industry bodies, and policymakers to help shape standards for trustworthy AI deployment
Establish and track success metrics (governance coverage, compliance posture, customer adoption), ensuring measurable impact and clarity across engineering and partner orgs
Represent the product in executive and customer forums, evangelizing Microsoft’s approach to responsible, governed AI systems
Requirements:
Bachelor’s Degree AND 8+ years in product management, program management, or technical leadership roles OR equivalent experience
Ability to meet Microsoft, customer and/or government security screening requirements
Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter
Deep familiarity with AI/ML systems, model governance, or enterprise compliance frameworks
Proven experience driving cross-functional initiatives with engineering, security, and policy teams
Excellent communication skills with demonstrated ability to influence across organizational boundaries
10+ years of product management or applied AI experience, ideally in enterprise cloud or responsible AI domains
Knowledge of AI governance frameworks (EU AI Act, NIST AI RMF, ISO 42001, SOC 2, etc.)
Experience building or integrating security, compliance, or observability products
Familiarity with agentic AI systems and associated risk classes (e.g., sensitive data leakage, prohibited actions, task drift, jailbreaks)
Hands-on technical depth to collaborate effectively with engineers and architects
Solid storytelling and executive communication skills; ability to inspire trust and drive alignment in complex, cross-org environments