The Office of the Chief Technology Officer (OCTO) is building the next generation of intelligent, agentic systems that leverage large language models (LLMs) to automate complex enterprise workflows. We seek a Research Engineer to design, build, and productionize multi-agent platforms that extend Teradata’s analytical capabilities into autonomous, AI-driven pipelines. This role sits at the frontier of applied AI research and platform engineering. You will work directly with state-of-the-art LLMs, agentic orchestration frameworks, and Teradata’s data ecosystem to create robust, scalable systems that enable autonomous reasoning, decision-making, and action across enterprise environments.
Job Responsibilities:
Design and implement multi-agent systems that coordinate specialized LLM-powered agents to solve complex, multi-step analytical and operational tasks
Design agent orchestration patterns including task decomposition, inter-agent communication, tool use, memory management, and feedback loops
Build scalable agentic pipelines that integrate with Teradata’s data platform, enabling agents to query, analyze, and act on enterprise data autonomously
Design and manage LLM inference pipelines optimized for latency, throughput, and cost across cloud and on-premises deployments
Evaluate, benchmark, and select appropriate foundation models (open-source and proprietary) for specific agentic tasks within the Teradata ecosystem
Implement advanced prompting strategies including chain-of-thought, retrieval-augmented generation (RAG), and tool-augmented reasoning to maximize agent reliability and accuracy
Build and extend internal agentic SDKs and frameworks that enable research and product teams to rapidly develop and deploy agent-based applications
Integrate with leading agentic platforms and toolkits (e.g., LangChain, LlamaIndex, AutoGen, CrewAI, Anthropic Claude SDKs) and adapt them for enterprise-grade reliability
Develop reusable agent components, tool connectors, and evaluation harnesses that accelerate the path from prototype to production
Collaborate with research teams to translate cutting-edge LLM and agent research into production-ready platform capabilities
Work with product and service organizations to embed agentic workflows into Teradata’s customer-facing solutions
Communicate agentic system design, tradeoffs, and capabilities clearly to technical and non-technical stakeholders across multidisciplinary teams
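To give candidates a concrete feel for the orchestration patterns above (task decomposition, routing to specialized agents, and result collection), here is a minimal sketch in plain Python. The agent names, the `plan` structure, and the `call_llm` stub are illustrative assumptions, not part of any Teradata SDK; a production system would replace the stub with a real inference call and derive the plan with an LLM planner.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical stand-in for a real LLM call; a production system would
# invoke an inference API here instead of returning a canned response.
def call_llm(role: str, prompt: str) -> str:
    return f"[{role}] handled: {prompt}"

@dataclass
class Orchestrator:
    """Routes decomposed subtasks to specialized agents and collects results."""
    agents: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def register(self, name: str, agent: Callable[[str], str]) -> None:
        self.agents[name] = agent

    def run(self, task: str, plan: list[tuple[str, str]]) -> list[str]:
        # `plan` pairs each subtask with the agent responsible for it.
        return [self.agents[name](subtask) for name, subtask in plan]

orchestrator = Orchestrator()
orchestrator.register("sql", lambda q: call_llm("sql", q))
orchestrator.register("summarizer", lambda q: call_llm("summarizer", q))

outputs = orchestrator.run(
    task="monthly revenue report",
    plan=[("sql", "aggregate revenue by month"),
          ("summarizer", "summarize the revenue trend")],
)
print(outputs[0])  # → [sql] handled: aggregate revenue by month
```

The registry-plus-plan split keeps agent implementations swappable, which is the same property the internal SDK work described above is meant to provide.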
Requirements:
Deep hands-on expertise with large language model APIs and inference frameworks (e.g., OpenAI, Anthropic, Mistral, vLLM, Ollama, Hugging Face Transformers)
Strong practical experience designing and building multi-agent systems using agentic SDKs and orchestration frameworks such as LangChain, LlamaIndex, AutoGen, CrewAI, or equivalent
Proficiency in Python and experience building production-grade AI/ML services with clean, well-documented, testable code
Solid understanding of RAG architectures, vector databases (e.g., Pinecone, Weaviate, pgvector), and knowledge retrieval patterns
Experience with prompt engineering, LLM evaluation methodologies, and strategies for improving agent reliability and reducing hallucination
Familiarity with LLM inference optimization techniques including quantization, batching, caching, and model serving infrastructure
5+ years of software engineering experience, with at least 2 years focused on LLM-based systems, agentic workflows, or applied AI research
Bachelor’s degree in Computer Science, Artificial Intelligence, or a related field, or equivalent demonstrated expertise
Proven track record designing and shipping multi-agent or LLM-powered systems into production environments
Experience collaborating across research and engineering teams to move from prototype to scalable, maintainable platform capability
Strong analytical and problem-solving abilities with meticulous attention to system reliability, agent behavior, and edge-case handling
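For context on the RAG and vector-retrieval requirement, the core pattern is ranking documents by embedding similarity to a query. A toy sketch using bag-of-words vectors and cosine similarity follows; a real stack would use a learned embedding model and a vector database such as pgvector rather than this word-count stand-in.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real RAG uses a learned embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    # Rank the corpus by similarity to the query and keep the top k hits.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Teradata stores enterprise data in relational tables",
    "Agents decompose tasks into subtasks",
    "Vector databases index embeddings for similarity search",
]
top = retrieve("how do agents split a task into subtasks", docs)
print(top[0])  # → Agents decompose tasks into subtasks
```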
Nice to have:
Experience building or contributing to agentic SDK frameworks or open-source LLM tooling
Familiarity with Model Context Protocol (MCP) or similar standards for tool-augmented LLM systems
Background with reinforcement learning from human feedback (RLHF), fine-tuning, or model alignment techniques
Experience with distributed systems and high-availability infrastructure for LLM serving at scale
Knowledge of AI governance, safety frameworks, and responsible deployment practices for autonomous agent systems
Prior work on AI/ML products within data warehousing, analytics, or enterprise software environments
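The Model Context Protocol mentioned above standardizes how models discover and invoke tools. A protocol-agnostic sketch of the underlying pattern, a registry of schema-described tools plus a dispatcher for model-emitted JSON tool calls, is shown below; the names and the `row_count` stub are illustrative and do not follow the MCP wire format.

```python
import json
from typing import Callable

TOOLS: dict[str, Callable[..., object]] = {}
SCHEMAS: list[dict] = []

def tool(name: str, description: str):
    """Register a function as a callable tool with a discoverable schema."""
    def decorator(fn: Callable[..., object]):
        TOOLS[name] = fn
        SCHEMAS.append({"name": name, "description": description})
        return fn
    return decorator

@tool("row_count", "Return the number of rows in a named table")
def row_count(table: str) -> int:
    # Stub; a real connector would query the data warehouse.
    return {"sales": 1200, "customers": 340}.get(table, 0)

def dispatch(call_json: str) -> object:
    # The model emits a JSON tool call; the harness parses and executes it.
    call = json.loads(call_json)
    return TOOLS[call["name"]](**call["arguments"])

result = dispatch('{"name": "row_count", "arguments": {"table": "sales"}}')
print(result)  # → 1200
```

Publishing `SCHEMAS` to the model is what lets it choose tools at runtime, which is the discovery half of the pattern MCP formalizes.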