Join Yotpo’s high-impact Discover R&D team, where we are revolutionizing how brands and products are found in the AI era. The future of eCommerce, shaped by Large Language Models (LLMs), is already visible in how consumers use tools like ChatGPT or Perplexity for shopping questions. Yotpo Discover is leading the mission to ensure brands remain visible, trusted, and cited in this new, AI-driven world. Operating like an internal startup, our team is fast-paced, hands-on, and driven by curiosity. We rapidly experiment, build, and launch new AI-powered products that redefine brand-shopper connections in the age of conversational search.
Job Responsibilities:
Architect Discovery Pipelines: Design and maintain high-throughput, low-latency data pipelines that process billions of events, ensuring reviews and visual UGC are instantly discoverable across channels
Data Modeling for AI/Search: Build and optimize complex data models that support advanced search algorithms, LLM-based product summaries, and personalized shopper experiences
Bridge Engineering & Data Science: Partner with AI Researchers to operationalize machine learning models that calculate “convertibility” and product relevance in real time
Scale for Growth: Own the end-to-end lifecycle of our data stack, ensuring it can handle the next 10x of business growth while maintaining strict reliability and observability
Independent Impact: Lead technical initiatives from ideation to production, balancing the rapid experimentation needs of the Discover product with the stability of a core R&D system
Promote Best Practices: Drive the evolution of our modern data stack (dbt, Databricks) by implementing “standards-as-code” and automated quality gates
Requirements:
Bachelor’s degree in Computer Science, Mathematics, Industrial Engineering, or an equivalent analytical discipline
4+ years of hands-on experience as a Data Engineer or Backend Engineer with a strong data focus
Advanced proficiency in SQL and deep understanding of modern data warehousing (Databricks, Snowflake, BigQuery, or Redshift)
Proven experience in designing data architectures for complex business domains, specifically those involving high-velocity event streaming or large-scale content indexing
A “builder” mindset with the ability to work independently in a fast-paced, startup-like environment within a global company
Nice to have:
Extensive experience with dbt (data build tool) for modular data modeling
Strong Python skills and experience with workflow tools (e.g., Airflow, Dagster)
Familiarity with integrating AI/LLM technologies (OpenAI, Anthropic) or vector databases
Experience with Spark, Databricks, Kubernetes, and Docker
Experience with Looker, Tableau, or Superset to help product analysts “discover” insights within our own data