You will design, develop, and execute testing strategies to validate functionality, performance, and user experience. You will also collaborate with cross-functional teams to identify and resolve defects and to continuously improve testing processes and methodologies, ensuring software quality and reliability.
Job Responsibilities:
Development and implementation of comprehensive test plans and strategies to validate software functionality and ensure compliance with established quality standards
Creation and execution of automated test scripts, leveraging testing frameworks and tools to facilitate early detection of defects and quality issues (a minimal example follows this list)
Collaboration with cross-functional teams to analyze requirements, participate in design discussions, and contribute to the development of acceptance criteria, ensuring a thorough understanding of the software being tested
Root cause analysis for identified defects, working closely with developers to provide detailed information and support defect resolution
Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing
Staying informed of industry technology trends and innovations, and actively contributing to the organization's technology communities to foster a culture of technical excellence and growth
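For illustration, here is a minimal sketch of the kind of automated check described above, written with pytest and the requests library. The endpoint URL and response schema are hypothetical placeholders, not part of this role's actual stack.

```python
# Minimal pytest smoke test; BASE_URL and the response schema are
# hypothetical placeholders for illustration only.
import requests

BASE_URL = "https://example.internal/api"  # hypothetical service under test

def test_health_endpoint_reports_ok():
    # Early-detection check: fail fast if the service is down or unhealthy.
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200
    assert response.json().get("status") == "ok"
```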
Requirements:
Understanding of Large Language Models (LLM), LangChain, and LangGraph frameworks
Python programming skills with demonstrated ability to architect scalable test automation solutions
Deep hands-on experience with Generative AI and Agentic AI implementations in production environments
Knowledge of Machine Learning model testing and validation methodologies
Familiarity with Natural Language Understanding (NLU) testing strategies
Knowledge of Natural Language Processing (NLP) quality assurance practices
Knowledge of prompt engineering and LLM evaluation frameworks (RAGAS, DeepEval, etc.; see the sketch after this list)
Familiarity with vector databases for RAG testing
Experience with performance and load testing of Gen AI applications
Understanding of AI ethics, bias detection and responsible AI testing practices
Comprehensive knowledge of AWS services, particularly Bedrock, Lambda and CloudWatch
Experience architecting and implementing enterprise-scale cloud-based testing solutions
Expert proficiency with version control systems (Git, Stash) and branching strategies
Proven experience designing and optimizing CI/CD pipelines using Jenkins, GitLab CI, or similar tools
Deep understanding of Agile/Scrum methodology, SDLC best practices, and DevOps culture
Demonstrated ability to drive quality initiatives and establish testing excellence across organizations
Proven expertise in leading cross-functional collaboration with development, product, and operations teams
Experience establishing quality gates and governance frameworks for Gen AI applications
Knowledge of UI, mobile, and API testing methodologies
Advanced proficiency with Behavior-Driven Development (BDD) frameworks and test design patterns
Experience with test management and reporting tools including Jira, Xray, ALM, and TestRail
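As a hedged illustration of the LLM evaluation frameworks named above, the following is a minimal DeepEval check in pytest style. The prompt, model answer, and threshold are illustrative only, and DeepEval's judge model must be configured separately (for example, via an OpenAI API key in the environment).

```python
# Hedged sketch of an LLM evaluation test using DeepEval's pytest-style
# interface. The prompt, answer, and threshold are illustrative only;
# running this requires DeepEval's judge model to be configured
# (e.g., an OPENAI_API_KEY in the environment).
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

def test_answer_relevancy():
    test_case = LLMTestCase(
        input="What is the refund policy?",                  # illustrative prompt
        actual_output="Refunds are issued within 14 days.",  # illustrative answer
    )
    # Fail if relevancy, as scored by the judge model, falls below 0.7.
    assert_test(test_case, [AnswerRelevancyMetric(threshold=0.7)])
```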
Nice to have:
Experience with containerization (Docker, Kubernetes) for test environments (see the sketch after this list)
Background in data engineering or data quality testing
Publications, conference presentations, or contributions to open-source testing frameworks
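For the containerization point above, here is a minimal sketch of spinning up a disposable test dependency with the testcontainers-python library. The Postgres image tag and the query are illustrative assumptions, not a prescribed setup.

```python
# Hedged sketch: an ephemeral Postgres test environment via
# testcontainers-python. The image tag and query are illustrative.
import sqlalchemy
from testcontainers.postgres import PostgresContainer

def test_database_roundtrip():
    # Start a throwaway Postgres container for the duration of the test.
    with PostgresContainer("postgres:16") as postgres:
        engine = sqlalchemy.create_engine(postgres.get_connection_url())
        with engine.connect() as conn:
            # Trivial round-trip to prove the containerized DB is reachable.
            assert conn.execute(sqlalchemy.text("SELECT 1")).scalar() == 1
```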