Latest Research Papers
2024-10-21
arXiv
STAR: A Simple Training-free Approach for Recommendations using Large Language Models
This paper introduces STAR, a training-free approach for recommendation systems using large language models (LLMs) that combines semantic embeddings and collaborative user information. The method achieves competitive performance on next-item prediction tasks, demonstrating the potential of LLMs without fine-tuning. Experimental results show Hits@10 improvements on several categories of the Amazon Review dataset.
Recent progress in large language models (LLMs) offers promising new approaches for recommendation system (RecSys) tasks. While the current state-of-the-art methods rely on fine-tuning LLMs to achieve optimal results, this process is costly and introduces significant engineering complexities. Conversely, methods that bypass fine-tuning and use LLMs directly are less resource-intensive but often fail to fully capture both semantic and collaborative information, resulting in sub-optimal performance compared to their fine-tuned counterparts. In this paper, we propose a Simple Training-free Approach for Recommendation (STAR), a framework that utilizes LLMs and can be applied to various recommendation tasks without the need for fine-tuning. Our approach involves a retrieval stage that uses semantic embeddings from LLMs combined with collaborative user information to retrieve candidate items. We then apply an LLM for pairwise ranking to enhance next-item prediction. Experimental results on the Amazon Review dataset show competitive performance for next-item prediction, even with our retrieval stage alone. Our full method achieves Hits@10 performance of +23.8% on Beauty, +37.5% on Toys and Games, and -1.8% on Sports and Outdoors relative to the best supervised models. This framework offers an effective alternative to traditional supervised models, highlighting the potential of LLMs in recommendation systems without extensive training or custom architectures.
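The two-stage pipeline the abstract describes (retrieval by blended semantic and collaborative scores, then LLM pairwise ranking) can be sketched roughly as below. This is a minimal illustration, not the paper's exact formulation: the blending weight `alpha`, the mean-pooling of similarities, and the `prefers(a, b)` judge (a stand-in for an LLM prompt comparing two candidate items) are all illustrative assumptions.

```python
import numpy as np

def retrieve(user_history_embs, candidate_embs, co_occurrence, alpha=0.5, k=10):
    """Training-free retrieval sketch: blend semantic similarity from LLM
    embeddings with a collaborative co-occurrence signal.
    `alpha` and the mean pooling are illustrative assumptions."""
    # Normalize rows so dot products become cosine similarities
    hist = user_history_embs / np.linalg.norm(user_history_embs, axis=1, keepdims=True)
    cand = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    semantic = (cand @ hist.T).mean(axis=1)   # avg cosine sim to history items
    collab = co_occurrence.mean(axis=1)       # avg co-occurrence with history items
    scores = alpha * semantic + (1 - alpha) * collab
    return np.argsort(-scores)[:k]            # indices of top-k candidates

def pairwise_rank(items, prefers):
    """Re-rank retrieved items with a pairwise judge; in STAR's setting the
    judge would be an LLM asked which item the user is likelier to pick next."""
    ranked = []
    for item in items:
        pos = len(ranked)
        # insertion sort driven only by pairwise comparisons
        while pos > 0 and prefers(item, ranked[pos - 1]):
            pos -= 1
        ranked.insert(pos, item)
    return ranked
```

For instance, with a numeric judge `prefers=lambda a, b: a < b`, `pairwise_rank([3, 1, 2], prefers)` returns `[1, 2, 3]`; swapping in an LLM-backed comparator keeps the same control flow while keeping the number of comparisons modest over the small retrieved candidate set.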