Role: Senior Data Scientist – Conversational AI, LLM
Location: Bangalore
Experience: 5+ Years
Employment Type: Full-Time

Role Overview:
We are looking for a highly skilled Senior Data Scientist to design, develop, and industrialize enterprise-grade Conversational AI and GenAI solutions. The ideal candidate will have deep expertise in Large Language Models, advanced NLP, and scalable AI application development, along with strong experience in data engineering, cloud platforms, and modern AI system architecture. This role involves working across the full lifecycle, from architecture to POC to production deployment, to build robust, scalable, and business-ready AI platforms.
Job Responsibilities:
Design, develop, and maintain enterprise-grade Conversational AI systems using modern architectural patterns
Build context-aware AI applications leveraging LLMs, vector databases, and multi-agent orchestration frameworks
Develop and optimize prompt engineering strategies for reliability, evaluation, and performance tuning
Implement LLM interaction workflows including RAG, tool usage, and agent-to-agent communication
Design and integrate Knowledge Graphs to enhance contextual intelligence and reasoning
Develop graph-based solutions using enterprise Neo4j with strong proficiency in Cypher query language and graph algorithms
Implement graph traversal, inference, and graph analytics for AI use cases
Build modular, scalable, and maintainable Python-based AI applications
Develop high-performance REST APIs using the FastAPI / Flask frameworks
Implement asynchronous processing and orchestration workflows
Ensure security, performance, and observability of AI services
Develop and manage data ingestion and transformation pipelines using Azure Data Factory and Databricks
Design and implement data lake architectures supporting multiple structured and unstructured data formats
Integrate large-scale enterprise data sources with AI platforms
Work with MongoDB for schema design, indexing strategies, and large-scale semi-structured data handling
Develop distributed data processing solutions using Databricks and PySpark
Deploy and manage AI workloads on cloud platforms (preferably Microsoft Azure AI stack)
Design scalable, highly available, and robust enterprise AI architectures
Lead rapid POC development and transition solutions to production-grade systems
Implement CI/CD pipelines, containerization (Docker), and deployment automation
Define best practices for performance, scalability, and cost optimization
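To illustrate the kind of RAG work described above, here is a minimal, self-contained sketch of the retrieval-then-prompt step. All names, documents, and vectors are hypothetical; a production system would use a real embedding model and a vector database rather than hand-made vectors.

```python
import math

# Toy in-memory "vector store": maps document text to a pre-computed
# embedding. In a real system these would come from an embedding model
# and a vector database; here they are hand-made illustrative vectors.
DOCS = {
    "reset your password from the account page": [0.9, 0.1, 0.0],
    "invoices are emailed on the first of the month": [0.1, 0.8, 0.2],
    "contact support via the in-app chat widget": [0.2, 0.3, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

def build_prompt(question, query_vec):
    """Ground the LLM prompt in retrieved context (the RAG step)."""
    context = "\n".join(retrieve(query_vec, k=1))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# A query embedded near the "password" document retrieves that document.
prompt = build_prompt("How do I reset my password?", [0.95, 0.05, 0.0])
```

The same retrieve-then-prompt pattern generalizes to tool usage and multi-agent workflows: each step gathers context before the model is invoked.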
Requirements:
5+ years of relevant experience
Large Language Models (LLMs) & NLP
RAG, agents, and multi-agent orchestration
Prompt engineering & LLM evaluation frameworks
Vector databases
Python (advanced proficiency)
FastAPI / Flask
Asynchronous & modular application design
Databricks & PySpark
MongoDB (schema design, indexing, optimization)
Strong problem-solving and system design capabilities
Ability to work in a fast-paced, innovation-driven environment
Excellent communication and stakeholder collaboration
Mentoring and technical leadership experience
Nice to have:
Experience with Microsoft AI ecosystem (Azure OpenAI, AI Search, Microsoft Foundry, etc.)
Experience in enterprise-scale AI platform development
Strong understanding of AI system evaluation, monitoring, and governance
Exposure to security and responsible AI practices
Neo4j, Cypher query language, Graph data modeling & graph algorithms
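As a sketch of the graph-traversal skills listed above, the snippet below walks a toy knowledge graph breadth-first. The nodes and edges are invented for illustration; in an enterprise Neo4j deployment the equivalent reachability question would typically be expressed as a Cypher path query rather than in application code.

```python
from collections import deque

# Toy knowledge graph as an adjacency map (node -> outgoing neighbours).
# All entity names here are hypothetical.
GRAPH = {
    "Customer": ["Order"],
    "Order": ["Product", "Invoice"],
    "Product": ["Supplier"],
    "Invoice": [],
    "Supplier": [],
}

def reachable(start):
    """Breadth-first traversal: the set of nodes reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in GRAPH.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen
```

Traversals like this underpin graph-based contextual reasoning: starting from an entity mentioned in a conversation, the system collects related entities to enrich the model's context.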