Teradata is building the next generation of AI-native analytics, enabling customers to deploy production-grade Generative AI systems directly where enterprise data lives. We are looking for a Senior AI Engineer to play a key role in designing and building Teradata’s vector store and retrieval infrastructure, powering RAG, multimodal AI, agentic workflows, and semantic search at enterprise scale. This role is ideal for an engineer who thrives at the intersection of LLMs, information retrieval, and distributed systems, and wants to work on core platform capabilities, not just application demos.
Job Responsibilities:
Design and implement vector store capabilities integrated with Teradata’s analytics platform, including indexing, storage, retrieval, and query optimization
Build end-to-end RAG pipelines, including: data ingestion and chunking strategies; embedding generation and lifecycle management; retrieval (dense, sparse, and hybrid search); and context assembly and prompt orchestration
Develop and optimize semantic search algorithms and ranking strategies for enterprise workloads
Design agentic AI patterns, including tool calling, planning, memory, and orchestration
Implement guardrails for safety, reliability, and governance (hallucination mitigation, grounding, policy enforcement)
Build and maintain RAG evaluation frameworks, including relevance, faithfulness, accuracy, and cost metrics
Collaborate with product, research, and platform teams to translate customer use cases into scalable features
Benchmark Teradata’s vector store and RAG capabilities against industry alternatives (e.g., cloud and open-source solutions)
Contribute to technical design reviews, architecture decisions, and long-term AI platform strategy.
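To make the RAG pipeline stages listed above concrete, here is a minimal, self-contained sketch of chunking, embedding, dense-style retrieval, and context assembly. All names are hypothetical, and the "embedder" is a toy term-frequency vectorizer standing in for a real embedding model; this illustrates the shape of the work, not Teradata's actual implementation.

```python
# Sketch of RAG pipeline stages: chunking -> embedding -> retrieval -> prompt.
import math
from collections import Counter

def chunk(text: str, size: int = 8) -> list[str]:
    """Split text into fixed-size word chunks (one simple chunking strategy)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy embedding: a term-frequency vector (stand-in for a neural embedder)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Dense-style retrieval: rank chunks by similarity to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def assemble_prompt(query: str, context: list[str]) -> str:
    """Context assembly: fold retrieved chunks into the final LLM prompt."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

corpus = "Teradata stores enterprise data. Vector search finds similar items fast."
chunks = chunk(corpus, size=5)
prompt = assemble_prompt("What does vector search do?",
                         retrieve("vector search", chunks))
```

In production, each stage becomes a lifecycle concern of its own: chunking strategies are tuned per corpus, embeddings are versioned and re-generated as models change, and retrieval is pushed down into the vector store's index rather than computed in application code.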
Requirements:
BS/MS/PhD in Computer Science, AI/ML, or a related field
3+ years of software engineering experience with a strong focus on backend systems
Hands-on experience with vector databases or vector search systems
Practical experience building LLM-powered applications, especially RAG systems
Strong understanding of: embeddings and similarity search; data chunking and context optimization; dense vs. sparse vs. hybrid retrieval; and semantic search and relevance ranking
Proficiency in Python (and/or Java)
Experience building and operating production-grade systems
Experience working with large-scale data and performance-sensitive systems.
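As an illustration of the hybrid retrieval mentioned above, one common approach is to fuse a dense (vector) ranking with a sparse (keyword) ranking via reciprocal rank fusion. This is a generic sketch with made-up document IDs, not a description of Teradata's ranking strategy.

```python
# Hybrid retrieval via reciprocal rank fusion (RRF): each document's score
# is the sum of 1 / (k + rank) across the rankings it appears in, so items
# ranked well by either the dense or the sparse retriever float to the top.
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

dense_hits  = ["doc_a", "doc_b", "doc_c"]  # from vector similarity search
sparse_hits = ["doc_a", "doc_c", "doc_d"]  # from keyword / BM25-style search
fused = rrf([dense_hits, sparse_hits])     # doc_a ranks first (in both lists)
```

The constant `k` damps the influence of top ranks so that a document appearing in both lists beats one ranked first in only one of them, which is why RRF is a popular baseline for combining retrievers with incomparable score scales.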
Nice to have:
Experience with multimodal embeddings and retrieval
Familiarity with agent frameworks (e.g., LangChain, LangGraph, or equivalent)
Experience implementing AI guardrails and evaluation frameworks
Exposure to cloud platforms (AWS, Azure, or GCP)
Experience with distributed systems or analytics platforms.