As an AI Quality Analyst, you will evaluate a new personalization feature for Gemini. You will assess how well the model uses information from your past Gemini conversations, Gmail, Google Search, and YouTube activity to make responses more relevant and helpful. This role requires a unique blend of creativity and analytical rigor. You will actively design prompts from the perspective of your own personal experiences. You will then use your analytical skills to assess the quality of the model’s personalized responses, evaluating dimensions like Grounding, Integration, and Helpfulness.
Job Responsibilities:
Design and execute multi-turn conversational prompts (typically 1-5 turns) that require the AI to utilize personal information and experiences
Evaluate AI model responses against the intent of your starting prompt, checking whether personalization was appropriately applied
Analyze responses for “Grounding” issues, ensuring claims about you are supported by evidence and not flawed inferences or hallucinations
Assess “Integration” quality to ensure personal data is woven naturally into responses without robotic “overnarrating”
Rigorously evaluate and stack-rank two model responses side-by-side to determine which is overall more helpful, easy to use, and enjoyable
Write clear, defensible rationales for your comparisons, explicitly referencing where issues or positive aspects occurred in the conversation
Extract and verify “Debug Info” from the model to confirm that chat summaries and data sources were properly utilized
Maintain strict data hygiene by deleting evaluation conversations to prevent them from polluting future chat history
Requirements:
Korean Proficiency: Ability to read and write in Korean with a high degree of comprehension
Exceptional Analytical Thinking: Demonstrated ability to evaluate nuanced and ambiguous AI responses, specifically assessing personalization quality
Creative Prompt Engineering: Experience in designing creative, multi-turn starting prompts based on personal context
Strong Evaluation Acumen: Understanding of personalization concepts, including the ability to identify incorrect personalization, poor inferences, and forced connections
Meticulous Attention to Detail: The ability to review side-by-side model responses and spot subtle differences
Excellent Written Communication: Superior ability to write clear, concise, and structured rationales for model rankings
Ability to provide constructive feedback and detailed annotations
Excellent communication and collaboration skills
Self-motivated and able to work independently in a remote setting
Technical Setup: Desktop or laptop with a reliable internet connection
BS/BA degree or equivalent experience in a relevant field (e.g., Policy, Law, Ethics, Linguistics, Journalism, Computer Science, or a related analytical field)
Willingness to use your primary personal account (not a testing account) and enable personal data sources for a genuine assessment
Full-time availability (8 hours per day) and 4 hours of overlap with a specific time zone
Schedule Flexibility: Full-time availability in your local time zone is required, supporting a global, 24-hour operations team
Nice to have:
Experience in data annotation, AI quality evaluation, content moderation, or a related role is strongly preferred