Responsibilities:
Build and deploy LLM-powered applications using commercial and open-source models
Design and implement Retrieval Augmented Generation (RAG) pipelines
Create AI agents that reason over structured (graph) and unstructured data
Define and implement tool-use patterns for agents (function calling, API tools, memory tools)
Apply strong prompt engineering techniques for reliability, safety, and performance
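The tool-use patterns mentioned above (function calling, API tools) can be sketched framework-agnostically: the model is given tool descriptions, replies with a tool name and JSON arguments, and the application dispatches the call. A minimal pure-Python sketch, with hypothetical tool names and a simulated model reply:

```python
import json

# Hypothetical tool registry: name -> (callable, JSON-schema-style argument spec).
# In a real agent, the specs are sent to the model, which replies with a tool
# name and JSON arguments; here the "model reply" is a hard-coded string.
TOOLS = {
    "get_weather": (lambda city: f"Sunny in {city}", {"city": "string"}),
    "add": (lambda a, b: a + b, {"a": "number", "b": "number"}),
}

def dispatch(tool_call_json: str):
    """Parse a model's tool call and execute the matching registered tool."""
    call = json.loads(tool_call_json)
    fn, _spec = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Simulated model output requesting a tool call:
result = dispatch('{"name": "add", "arguments": {"a": 2, "b": 3}}')
```

Production agents layer retries, argument validation, and result summarization on top of this loop, but the registry-plus-dispatch core stays the same.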
Design and manage knowledge graphs using Neo4j
Model complex relationships and optimize graph queries (Cypher)
Implement semantic search using vector databases
Apply NLP concepts such as embeddings, entity extraction, classification, summarization, and similarity search
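The semantic-search and similarity-search duties above reduce to one core operation: ranking stored embeddings by cosine similarity to a query vector. A toy in-memory sketch (a real pipeline would use an embedding model and a vector database; the 3-dimensional vectors and document names here are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 3-dimensional "embeddings" standing in for model-generated vectors.
index = {
    "doc_cats": [0.9, 0.1, 0.0],
    "doc_dogs": [0.8, 0.2, 0.1],
    "doc_tax":  [0.0, 0.1, 0.9],
}

def search(query_vec, k=2):
    """Return the k document ids most similar to the query vector."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]
```

Vector databases replace the `sorted` scan with approximate nearest-neighbor indexes, but the ranking contract is identical.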
Set up and operate Model Context Protocol (MCP) servers
Register datasets, APIs, and tools for secure LLM access
Enable scalable, observable, and governed LLM interactions
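The registration-and-governance idea behind the MCP duties above can be illustrated in plain Python: tools are registered with required scopes, and every call is permission-checked and audit-logged. This is a sketch of the pattern only, not the MCP protocol itself; a real deployment would use an official MCP SDK and transport, and the tool and scope names here are invented:

```python
# Minimal governed tool registry: scoped registration, permission checks,
# and an audit trail, in the spirit of an MCP-style server.
audit_log = []
registry = {}

def register(name, fn, scopes):
    """Register a tool under a name with the scopes required to call it."""
    registry[name] = {"fn": fn, "scopes": set(scopes)}

def call(name, caller_scopes, **kwargs):
    """Invoke a registered tool, enforcing scopes and logging the outcome."""
    entry = registry[name]
    if not entry["scopes"].issubset(caller_scopes):
        audit_log.append((name, "denied"))
        raise PermissionError(f"missing scopes for {name}")
    audit_log.append((name, "ok"))
    return entry["fn"](**kwargs)

# Hypothetical dataset-access tool gated behind a read scope.
register("read_dataset", lambda table: f"rows from {table}", scopes={"data:read"})
```

The audit log is what makes the interactions observable; in practice it would feed structured logging and metrics rather than an in-memory list.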
Build AI microservices using Python (FastAPI / Flask)
Integrate and consume internal and external APIs within AI workflows
Develop pipelines for ingestion, embedding generation, indexing, and retrieval
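The ingestion-to-retrieval pipeline above can be sketched end to end: split documents into overlapping chunks, embed each chunk, and store the vectors in an index. The hash-based `fake_embed` is a stand-in for a real embedding model, and the chunk sizes are arbitrary:

```python
def chunk(text, size=20, overlap=5):
    """Split text into fixed-size chunks with overlap between neighbors."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def fake_embed(s):
    # Stand-in for a real embedding model call; returns a tiny 2-d vector.
    return [sum(ord(c) for c in s) % 97, len(s)]

def ingest(doc_id, text, index):
    """Chunk, embed, and index one document; returns the updated index."""
    for n, piece in enumerate(chunk(text)):
        index[f"{doc_id}:{n}"] = fake_embed(piece)
    return index
```

Retrieval then runs the same embedding function over the query and ranks the indexed vectors by similarity; the pipeline's moving parts (chunker, embedder, index) are exactly the three functions shown.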
Evaluate LLM and NLP models using qualitative and quantitative techniques
Measure and monitor performance (accuracy, latency, cost, hallucinations)
Continuously iterate on prompts, retrieval strategies, and model configurations
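The measurement duties above (accuracy, latency) can be sketched as a small evaluation harness over (prompt, expected) pairs; `stub` is a placeholder for a real model call, and cost and hallucination checks would be added alongside:

```python
import time

def evaluate(model_fn, eval_set):
    """Score a model function on (prompt, expected) pairs: exact-match
    accuracy plus mean latency in seconds."""
    correct, latencies = 0, []
    for prompt, expected in eval_set:
        start = time.perf_counter()
        answer = model_fn(prompt)
        latencies.append(time.perf_counter() - start)
        correct += int(answer.strip().lower() == expected.strip().lower())
    return {"accuracy": correct / len(eval_set),
            "mean_latency_s": sum(latencies) / len(latencies)}

# Hypothetical model stand-in for demonstration.
stub = lambda p: "Paris" if "France" in p else "unknown"
report = evaluate(stub, [("Capital of France?", "paris"),
                         ("Capital of Atlantis?", "n/a")])
```

Running the same harness after each prompt or retrieval change is what turns "continuously iterate" into a measurable loop rather than guesswork.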
Requirements:
6+ years of AI Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education
Nice to have:
Experience with LangChain, LlamaIndex, Semantic Kernel, or similar frameworks
Cloud experience (Azure preferred; AWS/GCP welcome)
Experience with Docker and Kubernetes
Familiarity with knowledge graphs, ontologies, or semantic systems
Exposure to AI governance, security, or responsible AI practices