Description
Current AI agents in companies rarely keep a structured memory of past interactions. But imagine if they did: personalized, context-aware agents could adapt to user preferences in real time. A user changing roles might want different recommendations, or prefer more detailed answers than before.
In this thesis, you'll explore memory-enabled multi-agent LLMs. You'll work with a central agent managing memory, supported by secondary agents performing RAG on ELCA's knowledge base. You'll investigate challenges like dynamic memory adaptation, user privacy, and control over stored information, all in a local, secure setup.
The goal: demonstrate practical value and show that a memory-driven agent can improve real-world business applications. You'll gain hands-on experience in a cutting-edge research area, helping expand conversational AI capabilities using proprietary data and multi-agent architectures.
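To make the architecture concrete, here is a minimal sketch of the idea described above: a central agent that keeps per-user memory and delegates retrieval to a secondary RAG agent. All class and method names (`MemoryStore`, `RagAgent`, `CentralAgent`) are hypothetical illustrations, the "retrieval" is toy keyword overlap rather than a real vector search, and a real system would plug in a memory framework such as Mem0 and an LLM for answer generation.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Hypothetical per-user preference memory held by the central agent."""
    prefs: dict = field(default_factory=dict)

    def remember(self, user_id: str, key: str, value: str) -> None:
        self.prefs.setdefault(user_id, {})[key] = value

    def recall(self, user_id: str) -> dict:
        return self.prefs.get(user_id, {})

class RagAgent:
    """Toy secondary agent: ranks documents by keyword overlap with the query."""
    def __init__(self, docs: list[str]):
        self.docs = docs

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        words = set(query.lower().split())
        ranked = sorted(self.docs,
                        key=lambda d: -len(words & set(d.lower().split())))
        return ranked[:k]

class CentralAgent:
    """Routes queries to the RAG agent and adapts the answer using memory."""
    def __init__(self, memory: MemoryStore, rag: RagAgent):
        self.memory = memory
        self.rag = rag

    def answer(self, user_id: str, query: str) -> str:
        context = self.rag.retrieve(query)
        if not context:
            return ""
        # Adapt verbosity to a stored preference, defaulting to short answers.
        if self.memory.recall(user_id).get("detail") == "detailed":
            return " ".join(context)
        return context[0].split(".")[0] + "."
```

Usage: after `memory.remember("alice", "detail", "detailed")`, the same query yields a fuller answer for that user, illustrating how stored preferences change behavior without retraining anything.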
Objectives
* Explore and deploy LLM memory frameworks (e.g., Mem0) and design a prototype that makes them practical for business use
* Build a proof-of-concept agentic application and evaluate its usefulness
* Gain hands-on experience with current chatbot infrastructures
Our offer
* A collaborative, international, and tech-driven environment
* Real impact: contribute to innovative AI solutions
* Internal tech events (hackathons, brownbags) and our technical blog
* Monthly after-work events across locations
Skills required
* Experience with ML and NLP, familiarity with LLMs
* Strong Python skills (Pandas, PyTorch, …) and software engineering fundamentals
* Web development skills are a plus (React, Streamlit, …)
* Strong communication skills in French and English, able to explain complex ideas clearly
Additional information
* Thesis starting in February 2026
* Applications must include your most recent academic transcripts
* Candidates must be completing a Master's degree and enrolled in a higher-education program with a valid internship agreement (convention de stage)