Radancy is the global leader in talent acquisition software, helping enterprises worldwide transform their hiring through our AI‑powered platform. As a Senior Software Engineer – Agentic AI on our AI Product & Engineering Team, you’ll help create new Agentic AI solutions that connect leading brands with top talent. Your creative problem‑solving will enhance our platform’s impact while accelerating your professional growth.
Job Responsibilities:
Architect, build, and operate cloud-native backend services that power AI-driven recruiter workflows
Design and implement agentic AI systems using frameworks such as Google ADK, LangGraph, or similar, building multi-step reasoning loops, tool-use pipelines, and agent-to-agent (A2A) communication patterns for production recruiter automation
Build, deploy, and maintain MCP servers to expose backend capabilities as structured tool endpoints consumable by AI agents, ensuring schema correctness, session management, and tenant-safe execution
Design and deploy scalable AI/LLM services using containerization and orchestration technologies in cloud environments
Integrate LLM APIs, embedding services, and ML inference endpoints into distributed systems with strong API design, versioning, and fault tolerance
Implement asynchronous processing, event-driven architectures, and durable state management for AI workflow orchestration
Build and maintain CI/CD pipelines to automate testing, deployment, and monitoring of AI-enabled services
Establish observability practices (metrics, tracing, logging, alerting) to monitor model performance, latency, cost, and reliability in production
Optimize inference workloads for performance, scalability, and cost efficiency, including autoscaling and concurrency management
Partner with Data Science to bring models to production, implement evaluation pipelines, and support model lifecycle management
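To make the MCP-server responsibility above concrete, here is a minimal, framework-free Python sketch of the tool-endpoint pattern it describes: backend functions registered as structured, schema-checked tools that an AI agent can discover and call. This is an illustration only, not the actual MCP SDK; the names (`ToolRegistry`, `search_candidates`) and the stubbed candidate data are hypothetical.

```python
import json
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Tool:
    name: str
    description: str
    schema: dict          # argument name -> required Python type
    handler: Callable[..., Any]

class ToolRegistry:
    """Toy stand-in for an MCP-style server: exposes backend
    functions as structured, schema-checked tool endpoints."""

    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def list_tools(self) -> list[dict]:
        # What an agent would fetch to discover the available tools.
        return [{"name": t.name, "description": t.description,
                 "schema": {k: v.__name__ for k, v in t.schema.items()}}
                for t in self._tools.values()]

    def call(self, name: str, args: dict) -> dict:
        # Validate the call against the declared schema before executing.
        tool = self._tools.get(name)
        if tool is None:
            return {"error": f"unknown tool: {name}"}
        for arg, typ in tool.schema.items():
            if arg not in args or not isinstance(args[arg], typ):
                return {"error": f"bad argument: {arg}"}
        return {"result": tool.handler(**args)}

# Hypothetical recruiter-workflow tool: search a stubbed candidate pool.
def search_candidates(skill: str, min_years: int) -> list[str]:
    pool = [("Ada", "scala", 7), ("Lin", "java", 3), ("Sam", "scala", 2)]
    return [name for name, s, y in pool if s == skill and y >= min_years]

registry = ToolRegistry()
registry.register(Tool(
    name="search_candidates",
    description="Find candidates by skill and minimum years of experience",
    schema={"skill": str, "min_years": int},
    handler=search_candidates,
))

print(json.dumps(registry.call("search_candidates",
                               {"skill": "scala", "min_years": 5})))
# prints {"result": ["Ada"]}
```

The schema check before dispatch is the essence of "ensuring schema correctness": malformed agent calls are rejected with a structured error rather than reaching the backend handler.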
Requirements:
5+ years of experience building and operating production backend systems, with hands-on exposure to AI/ML-powered applications preferred
Strong proficiency in a statically typed language such as Scala or Java strongly preferred
Experience integrating LLM APIs, embedding models, or ML inference services into distributed systems
Experience building or consuming agent frameworks (e.g., Google ADK, LangGraph, or AutoGen) to orchestrate multi-step, tool-using AI agents in production environments
Familiarity with Model Context Protocol (MCP) or similar tool-serving standards
Practical understanding of NLP, LLM behavior, prompt design, retrieval-augmented generation (RAG), and structured output patterns
Experience deploying and scaling AI-enabled services in AWS/GCP cloud environments
Hands-on experience with containerization and orchestration (Docker, Kubernetes)
Experience designing highly concurrent, fault-tolerant systems using async processing, queues, pub/sub, or event-driven architectures
Familiarity with CI/CD pipelines, infrastructure-as-code (Terraform or similar), and automated deployment workflows
Understanding of model evaluation, monitoring, drift detection, and AI system observability in production environments
Awareness of responsible AI practices, data security, and compliance considerations when deploying AI systems at an enterprise scale
Ability to think and act quickly with minimal or no supervision
Self-driven, independent, creative, and eager to learn new skills
Ability to work effectively with incomplete information
Great communication skills
Experience with, and desire to work on, an asynchronous, remote, and global team
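The "multi-step, tool-using AI agents" requirement refers to the think/act/observe loop that frameworks such as LangGraph or AutoGen orchestrate. The toy Python sketch below shows the bare shape of that loop under stated assumptions: the "model" is a scripted stub rather than a real LLM call, and the tool, role names, and salary figures are all invented for illustration.

```python
from typing import Callable

# Toy stand-in for an LLM: a scripted policy that, given the scratchpad,
# decides whether to call a tool or finish. A real agent framework would
# send the scratchpad to a model and parse its tool-call output.
def stub_model(scratchpad: list[str]) -> dict:
    if not any(line.startswith("observation:") for line in scratchpad):
        return {"action": "lookup_salary_band", "input": "senior-engineer"}
    return {"action": "finish",
            "answer": scratchpad[-1].removeprefix("observation: ")}

def lookup_salary_band(role: str) -> str:
    # Hypothetical backend tool exposed to the agent; data is made up.
    bands = {"senior-engineer": "L5: 140k-180k"}
    return bands.get(role, "unknown role")

TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_salary_band": lookup_salary_band,
}

def run_agent(question: str, max_steps: int = 5) -> str:
    """Minimal reasoning loop: think -> act -> observe, until 'finish'."""
    scratchpad = [f"question: {question}"]
    for _ in range(max_steps):
        step = stub_model(scratchpad)
        if step["action"] == "finish":
            return step["answer"]
        observation = TOOLS[step["action"]](step["input"])
        scratchpad.append(f"observation: {observation}")
    return "step budget exhausted"  # production code would surface an error

print(run_agent("What is the salary band for a senior engineer?"))
# prints L5: 140k-180k
```

The `max_steps` budget is the point of the production-hardening language in the requirements: an agent loop without a step limit and an error path is not safe to deploy.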