This list contains only the countries for which job offers have been published in the selected language (e.g., in the French version, only job offers written in French are displayed, and in the English version, only those in English).
Meta is seeking a QA Engineering Lead with expertise in AI product and model testing to drive the quality vision for our next-generation AI-powered products. In this role, you will lead the design and execution of comprehensive test strategies for AI models spanning text, image, and voice, ensuring our solutions are robust, reliable, and ethically sound. You will work on products built with cutting-edge technology, serving billions of users worldwide, and play a pivotal role in shaping the future of AI quality at Meta.
Job Responsibility:
Build and foster a quality-driven engineering environment that enables rapid, confident product releases, ensuring that quality is embedded throughout the development lifecycle
Develop and implement robust evaluation processes for AI models, including prompt engineering, scenario-based, and adversarial testing for text, image, and voice AI systems
Drive the quality for products and features, assess risks, and ensure features ship with a high quality bar, balancing speed and experience
Plan, develop, and execute comprehensive test strategies across core Meta products and platforms, leveraging both manual and automated approaches
Lead quality assurance efforts that align with product objectives, developing scalable solutions to support rapid product iteration and deployment
Solve cross-platform engineering challenges and contribute impactful ideas to improve quality, reliability, and user experience across diverse product surfaces
Implement and evolve QA processes to obtain effective test signals and scale testing efforts across multiple products, ensuring continuous improvement
Define quality metrics and implement measurements to determine test effectiveness, testing efficiency, and overall product quality, using data-driven insights to guide decisions
Partner with engineering and infrastructure teams to leverage automation for scalable solutions, preventing regressions and ensuring the reliability of products and AI models
Apply Responsible AI practices including safety, ethics, alignment, and explainability by building safeguards and quality controls to validate AI outputs, ensuring transparency, and compliance with ethical standards
Requirements:
Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
5+ years of experience in quality assurance, test engineering, and test automation
1+ years of hands-on experience testing AI-powered products (web, iOS, and/or Android) that generate or transform text, images, and/or voice, including end-to-end feature validation and user experience quality
1+ years of hands-on experience testing, debugging, and evaluating LLM/multimodal model behavior, including defining and applying quality standards for accuracy, relevance, grounding, safety/policy compliance, and cultural/locale sensitivity, and driving model-quality regressions to resolution
Experience effectively utilizing AI technologies and tools (e.g., large language models, agents, etc.) to enhance QA workflows
Experience collaborating cross-functionally and contributing to technical decisions through influence, communication, and execution
Experience changing priorities quickly and adapt effectively in a fast-moving product development cycle
Nice to have:
Experience in Python, PHP, Java, C/C++, or equivalent programming language
Experience leading and executing black-box and white-box testing strategies (test planning, coverage, execution, and triage)
Experience partnering with AI/ML research and engineering teams, and communicating effectively with technical and non-technical stakeholders at multiple levels
Experience building AI-assisted test automation/test agents using LLMs and agent frameworks (e.g., internal or industry tools) to generate, execute, and maintain tests
Experience using analytics to define, measure, and improve QA operational KPIs (e.g., defect escape rate, detection latency, automation coverage, flake rate)
Experience designing and building test automation frameworks that leverage generative AI for test creation, prioritization, and maintenance