Collaborate with design and engineering teams to integrate responsible AI requirements into technical architecture, data pipelines, and model development processes
Develop and maintain responsible AI design standards, checklists, and frameworks to guide teams throughout the product lifecycle
Embed Responsible AI activities into standard AI software development lifecycle (SDLC) processes
Prepare and implement an Annual Local Responsible AI Plan aligned with global and local AI strategies
Develop and maintain a risk register, ensuring that remediation plans are defined and implemented for all identified non-compliance issues
Conduct AI risk assessments and impact analysis before and during deployment
Classify AI activities into Prohibited, High, Medium, and Low risk categories
Identify potential harms and develop mitigation strategies
Identify control deficiencies, issues, and instances of non-compliance
Build and manage an AI inventory, a centralized register of all AI systems and models in use, including their purpose, categorization, data sources, risk level, and ownership, and ensure it is reviewed on a regular basis
Monitor adherence to governance policies and track the lifecycle of AI systems from development through decommissioning
Assess and manage responsible AI risks associated with third-party AI vendors, tools, and models used within the organization
Conduct due diligence on external AI suppliers to ensure alignment with the organization's ethical standards, governance policies, and regulatory requirements
Establish contractual and procedural safeguards to ensure third-party AI components meet responsible AI standards
Develop policies and procedures that guide responsible AI practices across the organization
Ensure organizational compliance with relevant legal and regulatory requirements (e.g., PDPL, EU AI Act)
Ensure AI decision-making processes are understandable to relevant stakeholders
Continuously monitor deployed AI systems for unexpected behavior or harm
Lead investigations and corrective actions when AI-related incidents occur
Lifecycle Oversight: Embed Responsible AI controls across the entire AI lifecycle (design, development, testing, deployment, and monitoring), ensuring that all required checkpoints, approvals, and documentation are in place
Model Monitoring & Assurance: Oversee the continuous monitoring of AI systems for bias, performance drift, safety issues, and unintended consequences, and ensure that periodic reviews, audits, and validation activities are conducted
Policies & Standards: Draft, maintain, and continuously enhance Responsible AI policies, guidelines, and control standards
Governance Ownership: Own decisions related to the design, implementation, and ongoing operation of the Responsible AI governance framework
Requirements:
Previous experience in privacy, technology, legal, or another relevant role
Ability to lead and influence cross-functional teams