Reinforcing Long-Term Performance in Recommender Systems with User-Oriented Exploration Policy

Changshuo Zhang*, Sirui Chen*, Xiao Zhang, Sunhao Dai, Weijie Yu, Jun Xu

Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2024)


Cite this paper

@inproceedings{10.1145/3626772.3657714,
author = {Zhang, Changshuo and Chen, Sirui and Zhang, Xiao and Dai, Sunhao and Yu, Weijie and Xu, Jun},
title = {Reinforcing Long-Term Performance in Recommender Systems with User-Oriented Exploration Policy},
year = {2024},
booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval},
}

Abstract:

Reinforcement learning (RL) has gained traction for enhancing users' long-term experiences in recommender systems by effectively exploring their interests. However, modern recommender systems exhibit distinct user behavioral patterns across tens of millions of items, which increases the difficulty of exploration. For example, users with different activity levels require different intensities of exploration, yet previous studies often overlook this aspect and apply a uniform exploration strategy to all users, which ultimately hurts user experiences in the long run. To address these challenges, we propose User-Oriented Exploration Policy (UOEP), a novel approach that facilitates fine-grained exploration among user groups. We first construct a distributional critic that allows policy optimization under varying quantile levels of cumulative reward feedback from users, where the quantile levels represent user groups with different activity levels. Guided by this critic, we devise a population of distinct actors, each aimed at effective and fine-grained exploration within its respective user group. To simultaneously enhance diversity and stability during exploration, we further introduce a population-level diversity regularization term and a supervision module. Experimental results on public recommendation datasets show that our approach outperforms all baselines in terms of long-term performance, validating the effectiveness of its user-oriented exploration. Further analyses also reveal that our approach improves performance for low-activity users and increases fairness among users.
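
For readers who want a concrete picture of the components the abstract names, below is a minimal PyTorch-style sketch, not the authors' implementation: a quantile-regression critic over cumulative reward, a small population of actors each optimized against a different slice of that return distribution, and a pairwise diversity penalty across the population. The network sizes, the CVaR-like per-actor objective, and the cosine-similarity regularizer are all illustrative assumptions, and the supervision module mentioned in the abstract is omitted.

# Hedged sketch (not the authors' code) of the ideas named in the abstract:
# a quantile-regression critic over cumulative reward, a population of actors
# each optimized toward a different slice of the return distribution, and a
# pairwise diversity penalty. Dimensions and coefficients are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM, N_QUANTILES, N_ACTORS = 32, 8, 16, 4


class QuantileCritic(nn.Module):
    """Predicts N_QUANTILES quantiles of cumulative reward for a (state, action) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, N_QUANTILES))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))   # (batch, N_QUANTILES)


class Actor(nn.Module):
    """Maps a user state to a continuous recommendation action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM), nn.Tanh())

    def forward(self, state):
        return self.net(state)


def quantile_huber_loss(pred, target, taus, kappa=1.0):
    """Standard quantile-regression Huber loss from distributional RL."""
    td = target.unsqueeze(-1) - pred.unsqueeze(-2)             # (batch, n_target, n_pred)
    huber = torch.where(td.abs() <= kappa, 0.5 * td ** 2, kappa * (td.abs() - 0.5 * kappa))
    return (torch.abs(taus - (td.detach() < 0).float()) * huber / kappa).mean()


critic = QuantileCritic()
actors = [Actor() for _ in range(N_ACTORS)]
taus = (torch.arange(N_QUANTILES, dtype=torch.float32) + 0.5) / N_QUANTILES


def actor_objective(k, state):
    """Actor k maximizes the mean of the lowest (k+1)/N_ACTORS quantiles of the
    predicted return (a CVaR-like objective); in this sketch, lower slices loosely
    stand in for lower-activity user groups."""
    q = critic(state, actors[k](state))                        # (batch, N_QUANTILES)
    cutoff = max(1, N_QUANTILES * (k + 1) // N_ACTORS)
    return q[:, :cutoff].mean()


def diversity_penalty(state):
    """Population-level regularizer: discourage actors from producing similar actions."""
    acts = [actor(state) for actor in actors]
    return sum(F.cosine_similarity(acts[i], acts[j], dim=-1).mean()
               for i in range(N_ACTORS) for j in range(i + 1, N_ACTORS))


# One illustrative update step on dummy data; in practice the critic and each
# actor would have separate optimizers and TD targets rather than raw returns.
state = torch.randn(64, STATE_DIM)
returns = torch.randn(64, 1)                                   # placeholder cumulative rewards
critic_loss = quantile_huber_loss(critic(state, actors[0](state).detach()), returns, taus)
actor_loss = -sum(actor_objective(k, state) for k in range(N_ACTORS))
(critic_loss + actor_loss + 0.1 * diversity_penalty(state)).backward()

The per-actor quantile cutoff is one plausible way to tie actors to user groups with different activity levels; the paper's actual grouping, supervision term, and training schedule may differ.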