We are looking for a Principal AI Safety Program Manager to join us and create strategies for improving our approach to AI Security & Safety. We are the Artificial Generative Intelligence Security (AeGIS) team, charged with ensuring justified confidence in the safety of Microsoft’s generative AI products. This encompasses providing an infrastructure for AI safety and security; serving as the coordination point for AI incident response; researching the quickly evolving threat landscape; red teaming AI systems for failures; and empowering Microsoft with this knowledge. We partner closely with product engineering teams to address and mitigate the full range of threats facing AI services – from traditional security risks, to novel security threats like indirect prompt injection, to entirely AI-native threats like the manufacture of NCII (non-consensual intimate imagery) or the use of AI to run automated scams.
Job Responsibilities:
Design methodologies for teams to build safely and effectively, with a clear eye toward making them directly useful and applicable for real teams
Partner with AI creation platforms that target non-pro AI builders to incorporate methodologies that help those builders understand the risks of what they’re building and empower them to make informed risk tradeoffs
Ideate and prototype tools that help both pro and non-pro AI builders understand the risks of what they’re building throughout development – from ideation to deployment
Work with our education and training team to develop content in a range of formats (presentations, interactive workshops, labs, and whitepapers) that brings the knowledge of how to build AI safely and securely to a wide audience
Build collaborative relationships with other stakeholder teams working on Responsible AI to scale out AI Safety methodologies
Build collaborative relationships with other security teams to scale out AI security methodologies
Help define new policies and procedures (or changes to existing ones) that ensure customers can have justified trust in Microsoft’s AI services
Embody our Culture and Values
Requirements:
Bachelor's Degree AND 6+ years of experience in engineering, product/technical program management, data analysis, or product development OR equivalent experience
3+ years of experience managing cross-functional and/or cross-team projects
Candidates must be able to meet Microsoft, customer, and/or government security screening requirements for this role
Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter
Nice to have:
Bachelor's Degree AND 12+ years of experience in engineering, product/technical program management, data analysis, or product development OR equivalent experience
8+ years of experience managing cross-functional and/or cross-team projects
5+ years of product experience in any of the safety disciplines of computer science (abuse, security, privacy, etc.)
5+ years of experience assessing systems for practical security, privacy, or safety flaws and helping teams mitigate identified risks
3+ years of experience in a socio-technical safety space (e.g., online safety, privacy)
2+ years of experience using AI to build tools and/or agents AND 1+ year(s) of experience reading and/or writing code (e.g., sample documentation, product demos)