As Model Evaluation QA Lead, you’ll be the technical owner of model quality assurance across Deepgram’s AI pipeline—from pre-training data validation and provenance through post-deployment monitoring. Reporting to the QA Engineering Manager, you will partner directly with our Active Learning and Data Ops teams to build and operate the evaluation infrastructure that ensures every model Deepgram ships meets objective quality bars across languages, domains, and deployment contexts. This is a hands-on, high-impact role at the intersection of QA engineering and ML operations. You will design automated evaluation frameworks, integrate model quality gates into release pipelines, and drive industry-standard benchmarking—ensuring Deepgram maintains its position as the accuracy and latency leader in voice AI.
Job Responsibilities:
Model Evaluation Automation: Design, build, and maintain automated model evaluation pipelines that run against every candidate model before release
Release Gate Integration: Embed model quality checkpoints into CI/CD and release pipelines
Agent & Model Evaluation Frameworks: Stand up and operate evaluation tooling for end-to-end voice agent testing
Active Learning & Data Ingestion Testing: Partner with the Active Learning team to validate data ingestion infrastructure, annotation pipelines, and retraining automation
Industry Benchmark Automation: Automate execution and reporting of industry-standard benchmarks
Language & Domain Validation: Build and maintain test suites for multi-language and domain-specific model validation
Retraining Automation Support: Validate the end-to-end retraining pipeline across all data sources
Manual Test Feedback Loop: Design and operate human-in-the-loop evaluation workflows for subjective quality assessment
Requirements:
4–7 years of experience in QA engineering, ML evaluation, or a related technical role, with a focus on the quality of predictive and generative models and their data
Hands-on experience building automated test/evaluation pipelines for ML models and the software features that connect to them
Strong programming skills in Python, including experience with ML evaluation libraries, data processing frameworks (Pandas, NumPy), and scripting for pipeline automation
Familiarity with speech/audio ML concepts: evaluation metrics such as WER, SER, and MOS, as well as acoustic models and language models
Experience with CI/CD integration for ML workflows (e.g., GitHub Actions, Jenkins, Argo, MLflow, or equivalent)
Ability to design and maintain reproducible benchmark environments across multiple model versions and configurations
Strong communication skills—you can translate model quality metrics into actionable insights for engineering, research, and product stakeholders
Detail-oriented and systematic, with a bias toward automation over manual processes
Nice to have:
Experience with model evaluation platforms (Coval, Braintrust, Weights & Biases, or custom evaluation harnesses)
Background in speech recognition, NLP, or audio processing domains
Experience with distributed evaluation at scale—running evals across GPU clusters or large dataset partitions
Familiarity with human-in-the-loop evaluation design and annotation pipeline tooling
Experience with multi-language model evaluation and localization quality assurance
Prior work in a company where ML model quality directly impacted revenue or customer SLAs