At Teradata, we're not just managing data; we're unleashing its full potential. Our ClearScape Analytics™ platform and pioneering Enterprise Vector Store empower the world's largest enterprises to derive unprecedented value from their most complex data. We're rapidly pushing the boundaries of what's possible with Artificial Intelligence, especially in the exciting realm of autonomous and agentic systems: intelligent systems that go far beyond automation, observing, reasoning, adapting, and driving complex decision-making across large-scale enterprise environments. If you're passionate about building intelligent systems that are not only powerful but also observable, resilient, and production-ready, this role offers the opportunity to shape the future of enterprise AI from the ground up.
Job Responsibilities:
You’ll play a critical role in designing and deploying advanced AI agents that integrate deeply with business operations, turning data into insight, action, and measurable outcomes
You’ll work alongside a high-caliber team of AI researchers, engineers, and data scientists tackling some of the hardest problems in AI and enterprise software — from scalable multi-agent coordination and fine-tuned LLM applications, to real-time monitoring, drift detection, and closed-loop retraining systems
Requirements:
Experience working with modern data platforms like Teradata, Snowflake, and Databricks
Passion for staying current with AI research, especially in the areas of reasoning, planning, and autonomous systems
You are an excellent full stack engineer who codes daily and owns systems end-to-end
Build intuitive, user-friendly natural language interfaces (e.g., chatbots, AI assistants) that let users query the data platform in plain language
Strong engineering background (Python/Java/Golang, API integration, backend frameworks)
Strong system design skills and understanding of distributed systems
You’re obsessive about reliability, debuggability, and ensuring AI systems behave deterministically when needed
Hands-on experience with machine learning and deep learning frameworks such as TensorFlow, PyTorch, and scikit-learn
Hands-on experience with LLMs, agent frameworks (LangChain, AutoGPT, ReAct, etc.), and orchestration tools
Experience with AI observability tools and practices (e.g., logging, monitoring, tracing, metrics for AI agents or ML models)
Solid understanding of model performance monitoring, drift detection, and responsible AI principles
A Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field
A genuine excitement for AI and large language models (LLMs) is a significant advantage
UI/UX Development: Proficiency in JavaScript frameworks (React.js, Vue.js, Angular) and CSS libraries (Bootstrap, Material-UI)
Design, develop, and deploy agentic systems integrated into the data platform
3+ years of experience in software architecture, backend systems, or AI infrastructure
Strong knowledge of LLMs, RL, or cognitive architectures is highly desirable
Passion for building safe, human-aligned autonomous systems
Experience in software development (Python, Go, or Java preferred)
Familiarity with backend service development, APIs, and distributed systems
Interest or experience in LLMs, autonomous agents, or AI tooling
Familiarity with containerized environments (Docker, Kubernetes) and CI/CD pipelines
Build dashboards and metrics pipelines to track key AI system indicators: latency, accuracy, tool invocation success, hallucination rate, and failure modes
Integrate observability tooling (e.g., OpenTelemetry, Prometheus, Grafana) with LLM-based workflows and agent pipelines
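To make the observability expectations above concrete, here is a minimal sketch of the kind of in-process metrics tracking involved: recording per-step latency and tool-invocation success for an agent. All names (`AgentMetrics`, `sql_tool`) are hypothetical; a production system would export these indicators through OpenTelemetry or Prometheus rather than keep them in memory.

```python
import time
from collections import defaultdict

class AgentMetrics:
    """Minimal in-process tracker for agent indicators
    (latency, tool invocation success rate). Production systems
    would export these via OpenTelemetry or Prometheus instead."""

    def __init__(self):
        self.latencies_ms = defaultdict(list)      # step name -> samples
        self.tool_calls = defaultdict(lambda: {"ok": 0, "failed": 0})

    def record_latency(self, step, ms):
        self.latencies_ms[step].append(ms)

    def record_tool_call(self, tool, ok):
        self.tool_calls[tool]["ok" if ok else "failed"] += 1

    def tool_success_rate(self, tool):
        counts = self.tool_calls[tool]
        total = counts["ok"] + counts["failed"]
        return counts["ok"] / total if total else None

    def p95_latency(self, step):
        samples = sorted(self.latencies_ms[step])
        if not samples:
            return None
        return samples[int(0.95 * (len(samples) - 1))]

# Usage: wrap a (stubbed) agent tool call and record its outcome.
metrics = AgentMetrics()
start = time.perf_counter()
ok = True  # stand-in for the real tool call's result
metrics.record_latency("sql_tool", (time.perf_counter() - start) * 1000)
metrics.record_tool_call("sql_tool", ok)
```

Indicators such as hallucination rate require an evaluation step (e.g., grounding checks against retrieved context) and would feed the same kind of recorder.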
Nice to have:
Research experience or contributions to open-source agentic frameworks
You're knowledgeable about open-source tools and technologies and know how to leverage and extend them to build innovative solutions
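For candidates new to agentic systems, the observe-reason-act loop behind the frameworks named above (LangChain, ReAct-style agents) can be sketched in a few lines. The reasoning step here is a keyword-based stub standing in for an LLM call, and the `calculator` tool is hypothetical; this is an illustration of the loop's shape, not any framework's actual API.

```python
def calculator(expr: str) -> str:
    # Hypothetical tool: evaluates a simple arithmetic expression.
    return str(eval(expr, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def stub_reason(task: str, observations: list) -> tuple:
    """Stand-in for an LLM planning step: returns (tool, argument)
    to act, or ("finish", answer) to stop."""
    if not observations and any(ch.isdigit() for ch in task):
        return ("calculator", task)
    return ("finish", observations[-1] if observations else "no answer")

def run_agent(task: str, max_steps: int = 5) -> str:
    """Iterate reason -> act -> observe until the planner finishes
    or the step budget runs out (a basic safety bound)."""
    observations = []
    for _ in range(max_steps):
        action, arg = stub_reason(task, observations)
        if action == "finish":
            return arg
        observations.append(TOOLS[action](arg))  # act, then observe
    return "step budget exhausted"
```

Real deployments replace `stub_reason` with a model call, add structured tool schemas, and bound the loop with the monitoring and drift-detection machinery described in the requirements.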