Student Exploration and Experience Development (SEED) is a 12-week internship opportunity at Veolia for students to gain hands-on experience in sustainability and ecological transformation. They will work on real-world projects, receive mentorship from industry professionals, and participate in workshops and networking events. The program aims to nurture talent, promote innovation, and foster meaningful connections between students and industry professionals. Overall, the SEED program provides students with the skills, knowledge, and connections needed to make a positive impact in the industry.
Job Responsibilities:
Support the development and implementation of an AI-powered deep research agent
Gain hands-on experience with cutting-edge large language models, cloud infrastructure, and enterprise software development
Work on real-world projects
Receive mentorship from industry professionals
Participate in workshops and networking events
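The "deep research agent" named above typically follows a plan → retrieve → synthesize loop. The sketch below is purely illustrative of that loop: every name and the toy corpus are invented for this example, and a real agent would replace the keyword matching and string joining with LLM and search-backend calls (e.g., via LangChain).

```python
# Minimal, illustrative sketch of a research-agent loop: plan -> retrieve -> synthesize.
# The corpus, topics, and all function names are hypothetical stand-ins.

CORPUS = {
    "recycling": "Materials-recovery facilities sort and reprocess waste streams.",
    "water": "Water treatment reduces contaminants before reuse or discharge.",
    "energy": "Waste-to-energy plants convert residual waste into electricity.",
}

def plan(question: str) -> list[str]:
    """Derive sub-topics to research (here: keyword overlap with the corpus)."""
    words = question.lower().split()
    return [topic for topic in CORPUS if topic in words]

def retrieve(topic: str) -> str:
    """Fetch source material for one sub-topic (stand-in for a search backend)."""
    return CORPUS[topic]

def synthesize(question: str, notes: list[str]) -> str:
    """Combine retrieved notes into an answer (stand-in for an LLM call)."""
    return f"Q: {question}\n" + "\n".join(f"- {n}" for n in notes)

def research(question: str) -> str:
    notes = [retrieve(topic) for topic in plan(question)]
    return synthesize(question, notes)

print(research("How does water treatment and energy recovery work?"))
```

In a production system each stage would be a separate agent or tool call; the control flow, however, stays close to this shape.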
Requirements:
Working towards a PhD degree in AI/ML/Computer Science
3.8 cumulative GPA required
Strong communication skills, including written, verbal, listening, presentation and facilitation skills
Demonstrated ability to build collaborative relationships
Understanding of and experience working with commercial/proprietary LLMs such as Gemini (Google), GPT (OpenAI), and Claude Sonnet (Anthropic) for high-performance, large-context, and multimodal tasks
Familiarity with open-source/self-hosted LLMs such as Llama (Meta) and Mixtral (Mistral AI)
Requirements Gathering: Using Confluence for documentation and collaboration
Architecture Design: Creating system diagrams and workflows with Lucidchart
Prototyping: Designing UI/UX prototypes in Figma
Project Management: Tracking tasks and progress in Jira
Data Preparation & Management: Cleaning, transforming, and organizing data for use in AI/ML workflows
Core LLM Frameworks: Using LangChain or LlamaIndex for orchestrating LLM applications
Agent Frameworks: Building multi-agent systems with Semantic Kernel, CrewAI, and LangGraph
Prompt Management: Managing and optimizing prompts with LangSmith
Vector Databases & Search: Implementing semantic search and retrieval using Vertex AI Vector DBs
API Framework: Developing RESTful APIs with FastAPI (Python)
Message Queue: Integrating asynchronous communication with Apache Kafka and Redis Streams
Web Framework: Building user interfaces with React or Angular
UI Components: Utilizing Material-UI for consistent, modern UI elements
IDE: Using Google AI Studio for AI application development
IDE: Writing and debugging code in VS Code
AI Assistants: Leveraging GitHub Copilot and Cursor for code suggestions and productivity
Version Control: Managing code with GitHub or GitLab
Code Quality: Ensuring code quality and standards with SonarQube, ESLint, and Pylint
Fine-tuning Platforms: Using Vertex AI Tuning for model customization
Training Frameworks: Training and experimenting with models in PyTorch, TensorFlow, or JAX
Efficient Training: Applying parameter-efficient fine-tuning (PEFT) methods like LoRA and QLoRA
Synthetic Data: Generating synthetic data for training and evaluation
Evaluation: Assessing models with HELM, lm-evaluation-harness, and custom benchmarks
LLM-Specific Testing: Using RAGAS and DeepEval for LLM evaluation
Prompt Testing: Using LangSmith evaluators
Hallucination Detection: Detecting and measuring hallucinations in model outputs
Containerization: Packaging applications with Docker
Orchestration: Managing containers at scale with Kubernetes and Google GKE
Cloud Platform: Using Google Cloud Platform (GCP) services such as Vertex AI for ML, GKE for Kubernetes, Cloud Run for serverless deployment, and Cloud Functions for event-driven tasks
LLM Observability: Monitoring LLM performance and usage with LangSmith and Weights & Biases
Cost Tracking: Monitoring and optimizing costs with OpenMeter and custom dashboards
Quality Monitoring: Setting up continuous evaluation pipelines to ensure model quality and reliability
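Several items above center on semantic search and retrieval. As a rough illustration of the idea behind the "Vector Databases & Search" item, the sketch below ranks documents by cosine similarity of bag-of-words vectors; this is a toy stand-in only, since in practice embeddings would come from a model and be served from a vector store such as Vertex AI Vector Search.

```python
# Toy semantic-search sketch: rank documents by cosine similarity.
# Bag-of-words counts stand in for real embedding vectors.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """'Embed' text as word counts (stand-in for an embedding model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "docker packages applications into containers",
    "kubernetes orchestrates containers at scale",
    "fastapi builds restful apis in python",
]
print(search("how do containers scale", docs, k=1))
```

A vector database performs the same nearest-neighbor ranking, but over dense model embeddings and with approximate indexes that scale to millions of documents.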