We’re hiring Code Reviewers with deep Java expertise to review evaluations completed by data annotators assessing AI-generated Java code responses. Your role is to ensure that annotators follow strict quality guidelines related to instruction-following, factual correctness, and code functionality.
Job Responsibilities:
Review and audit annotator evaluations of AI-generated Java code
Assess whether the Java code follows the prompt instructions and is functionally correct and secure
Validate code snippets using proof-of-work methodology
Identify inaccuracies in annotator ratings or explanations
Provide constructive feedback to maintain high annotation standards
Work within Project Atlas guidelines for evaluation integrity and consistency
Requirements:
5–7+ years of experience in Java development, QA, or code review
Strong knowledge of Java syntax, debugging, edge cases, and testing
Comfortable using code execution environments and testing tools
Excellent written communication and documentation skills
Experience working with structured QA or annotation workflows
English proficiency at B2 level or above (C1, C2, or native)
Nice to have:
Experience in AI training, LLM evaluation, or model alignment
Familiarity with annotation platforms
Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines