LogicMonitor is looking for a highly skilled QA/SDET with 2 to 3 years of experience to build and scale automated test frameworks for Generative AI features across our observability platform. In this role, you will ensure the quality, reliability, safety, and performance of LLM-based workflows, including AI assistants, Retrieval-Augmented Generation (RAG) pipelines, AI-generated incident summaries, auto-remediation agents, tool calling, and AI-driven insights. You will work closely with engineering, product, and applied AI teams to validate AI experiences in production.
Job Responsibilities:
Test Strategy for GenAI Features
Automation Framework & End-to-End Testing
AI Evaluation (LLM Testing)
Reliability, Performance & Scale Testing
Safety, Security & Compliance Testing
Observability & Debuggability for AI Testing
Requirements:
2+ years of experience as an SDET/QA Automation Engineer
Strong programming skills in Python or Java
Strong hands-on automation experience
Proven ability to build test strategies
Experience testing LLM-based systems
Strong understanding of common GenAI failure patterns
Ability to create evaluation datasets and rubrics for AI correctness