Reinforcement Re-ranking with 2D Grid-based Recommendation Panels

Sirui Chen, Xiao Zhang, Xu Chen, Zhiyu Li, Yuan Wang, Quan Lin, Jun Xu

Proceedings of the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval in the Asia Pacific Region (SIGIR-AP), 2023.


Cite this paper

@inproceedings{chen2023panelmdp,
author = {Chen, Sirui and Zhang, Xiao and Chen, Xu and Li, Zhiyu and Wang, Yuan and Lin, Quan and Xu, Jun},
title = {Reinforcement Re-ranking with 2D Grid-based Recommendation Panels},
year = {2023},
booktitle = {Proceedings of the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval in the Asia Pacific Region},
}

Abstract:

Modern recommender systems usually present items as a streaming, one-dimensional ranking list. Recently, there has been a trend in e-commerce toward organizing recommended items into two-dimensional grid-based panels, where users can browse items both vertically and horizontally. Presenting items in grid-based result panels poses new challenges for recommender systems: existing models are designed to output sequential lists, whereas the slots in a grid-based panel have no explicit order. Directly converting item rankings into grids (e.g., by pre-defining an order on the slots) overlooks user-specific behavioral patterns on grid-based panels and inevitably hurts the user experience. To address this issue, we propose a novel Markov decision process (MDP) for placing items in 2D grid-based result panels at the final re-ranking stage of a recommender system. The model, referred to as Panel-MDP, takes an initial item ranking from the earlier stages as input. It defines the MDP's discrete time steps as the ranks in the initial ranking list, and the actions as the prediction of user-item preference and the selection of slots. At each time step, Panel-MDP sequentially executes two sub-actions: it first decides whether the current item in the initial ranking list is preferred by the user, and then selects a slot for the item if it is preferred, or skips the item otherwise. The process continues until all of the panel slots are filled. The PPO reinforcement learning algorithm is employed to learn the parameters of Panel-MDP. Simulations and experiments on a dataset collected from a widely-used e-commerce app demonstrate the superiority of Panel-MDP in recommending 2D grid-based result panels.
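The episode structure described in the abstract can be sketched as a simple control loop: walk the initial ranking, and at each step execute the two sub-actions (keep/skip, then slot selection) until the grid is full. The sketch below is an illustration of that loop only, not the paper's implementation; the `Policy` class and its `prefers`/`choose_slot` methods are hypothetical stand-ins for the PPO-trained policy.

```python
import random


def panel_mdp_episode(initial_ranking, rows, cols, policy):
    """Run one Panel-MDP episode: scan the initial ranking and fill a
    rows x cols grid panel, one item per slot."""
    panel = [[None] * cols for _ in range(rows)]
    free_slots = [(r, c) for r in range(rows) for c in range(cols)]

    for item in initial_ranking:
        if not free_slots:  # all panel slots filled: episode terminates
            break
        # Sub-action 1: decide whether the user prefers the current item.
        if not policy.prefers(item):
            continue  # skip the item and advance to the next rank
        # Sub-action 2: select a free slot for the preferred item.
        r, c = policy.choose_slot(item, free_slots)
        free_slots.remove((r, c))
        panel[r][c] = item
    return panel


class RandomPolicy:
    """Hypothetical placeholder for the learned PPO policy; it makes
    random decisions purely to exercise the episode loop."""

    def prefers(self, item):
        return random.random() < 0.8

    def choose_slot(self, item, free_slots):
        return random.choice(free_slots)
```

A trained policy would replace `RandomPolicy`, conditioning both sub-actions on the user state and the partially filled panel rather than choosing at random.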