Evoking User Memory: Personalizing LLM via Recollection-Familiarity Adaptive Retrieval

This repository contains the code and resources for the ICLR 2026 paper titled "Evoking User Memory: Personalizing LLM via Recollection-Familiarity Adaptive Retrieval".

The original code of the submission can be found here: https://openreview.net/attachment?id=f7p0F2X6XN&name=supplementary_material.

🧠 RF-Mem

RF-Mem (Recollection–Familiarity Memory) is an adaptive retrieval framework for Large Language Models, grounded in the dual-process theory of human cognition.
In cognitive science, memory is often described as a combination of two systems:

  • Familiarity – fast, effortless recognition that something has been seen before.
  • Recollection – slower, deliberate reconstruction of episodic details such as time, place, and context.

This theory motivates RF-Mem: instead of treating retrieval as either one-shot search or full-context prompting, RF-Mem adaptively balances the two.

*(Figure: introduction illustration of the dual-process motivation)*

πŸ“Š Framework Overview

The RF-Mem framework first measures retrieval uncertainty using the mean similarity and entropy of the retrieved candidates.

  • If the system detects familiarity (confident retrieval), it takes the Familiarity path and returns the retrieved items directly.
  • If the query is unfamiliar (uncertain retrieval), it switches to the Recollection path, where memory items are clustered and iteratively expanded through query reformulation.

This dual-path design enables scalable and efficient personalized memory retrieval, improving both accuracy and efficiency on long-term memory benchmarks such as PersonaBench, PersonaMem, and LongMemEval.
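The routing logic above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the thresholds, the softmax-based entropy estimate, and the function name `choose_path` are assumptions chosen for clarity.

```python
import numpy as np

def choose_path(similarities, sim_threshold=0.6, entropy_threshold=1.5):
    """Route a query to the Familiarity or Recollection path.

    `similarities`: similarity scores between the query and candidate
    memory items. The thresholds are illustrative placeholders, not
    values from the paper.
    """
    sims = np.asarray(similarities, dtype=float)
    mean_sim = sims.mean()
    # Softmax-normalize the scores, then compute Shannon entropy:
    # a flat score distribution (high entropy) signals retrieval uncertainty.
    probs = np.exp(sims) / np.exp(sims).sum()
    entropy = -(probs * np.log(probs + 1e-12)).sum()
    if mean_sim >= sim_threshold and entropy <= entropy_threshold:
        return "familiarity"   # confident: direct top-k retrieval
    return "recollection"      # uncertain: cluster memories, reformulate query
```

A query whose candidates all score highly is handled on the fast path; low or diffuse scores trigger the slower, deliberate Recollection path.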

*(Figure: RF-Mem framework overview)*


πŸ“‚ Project Structure

RF_mem
├── LongMemEval/                # LongMemEval experiment data and scripts
│   ├── assets/                 # Figures of this paper
│   ├── data/                   # Shared data directory
│   ├── LICENSE
│   ├── README.md               # Project description
│   ├── requirements-full.txt   # Full dependencies
│   ├── requirements-lite.txt   # Lightweight dependencies
│   └── run.sh                  # Quick start script
│
├── personabench_data/          # PersonaBench experiment data and scripts
│   ├── eval_data/              # Evaluation data
│   ├── llm_model/              # LLM model wrapper
│   ├── retri_mdoel/            # Retrieval model
│   ├── eval.py                 # PersonaBench evaluation entry
│   ├── main_batch.py           # Batch experiment entry
│   ├── run.sh                  # Quick run script
│   └── utils.py                # Utility functions
│
└── personamem_data/            # PersonaMem experiment data and scripts
    ├── apitokens/              # API key configuration
    ├── data/                   # Data directory
    ├── llm_model/              # LLM model wrapper
    ├── retri_mdoel/            # Retrieval model
    ├── main_batch.py           # PersonaMem batch experiment entry
    ├── run.sh                  # Quick run script
    └── utils.py                # Utility functions

πŸ”§ Installation

It is recommended to use Python 3.10+.

pip install -r requirements.txt

πŸš€ Usage

  1. Run LongMemEval experiments

    cd LongMemEval
    bash run.sh
  2. Run PersonaBench experiments

    cd personabench_data
    bash run.sh
  3. Run PersonaMem experiments

    cd personamem_data
    bash run.sh

πŸ“œ License

This project is licensed under the MIT License.


✨ Acknowledgements

  • Authors of LongMemEval, PersonaBench & PersonaMem datasets
  • OpenAI / HuggingFace model community
