At Microsoft AI, we are on a mission to train the world's most capable AI frontier models, pushing the boundaries of scale, performance, and product deployment. Our vision is to pursue humanist superintelligence: AI designed to remain controllable, aligned, and firmly in service to humanity. The Responsible AI team is hiring a Technical Program Manager to help bring that vision to life. Reporting to the sub-team supporting Microsoft Superintelligence and foundation model development, you will establish and operationalize AI safety, governance, and responsibility practices across the model lifecycle. You will work directly with world-leading researchers to identify risks, set responsible AI standards, and ensure that appropriate mitigations are in place before our models are released. As part of a multidisciplinary group of AI & society experts in the broader Responsible AI team, you will operationalize and shape governance frameworks at the frontier of the field, addressing technical, ethical, and regulatory challenges.
Job Responsibilities:
Collaborate with fast-paced model development teams to identify risks, develop policies and evaluations, and ensure appropriate mitigations are in place prior to deployment
Establish, improve, and operationalize foundation model governance processes, ensuring Microsoft Superintelligence addresses key risks and develops models that benefit people and society
Produce analyses and recommendations to support Microsoft Superintelligence decision-making across a wide range of AI & society topics
Drive organizational clarity on complex AI responsibility and safety questions, working through ambiguity at pace with a high degree of autonomy and judgment
Develop and embed Responsible AI best practices across Microsoft Superintelligence workflows
Partner with colleagues in the Office of Responsible AI and Microsoft's Corporate, External and Legal Affairs (CELA) team to develop and apply innovative governance processes responsive to a dynamic technical and regulatory landscape
Requirements:
Bachelor's degree in a related field AND experience in AI policy, AI governance, or a related field, OR equivalent experience
Demonstrated experience identifying and addressing risks related to frontier AI through both technical and non-technical measures
Deep understanding of current challenges and priorities in AI responsibility and safety
Proven ability to collaborate with highly technical stakeholders, identify novel risks from technical specifications and guide mitigation efforts
Experience standing up and operationalizing governance processes or managing complex programs requiring rigorous yet fast-paced execution and stakeholder management
Track record of delivering complex cross-functional projects in ambiguous, rapidly changing environments with shifting demands
Nice to have:
Experience drafting model-level policies or evaluating foundation models
Research background in fields related to AI and society
Broad perspective on AI risks, from content safety and psychosocial harms to frontier risks
Familiarity with frontier AI governance frameworks and emerging AI regulation