2024-12-18
arXiv

SAFERec: Self-Attention and Frequency Enriched Model for Next Basket Recommendation

Aleksandr Milogradskii, Marina Ananyeva, Denis Krasilnikov, Oleg Lashinin
SAFERec, a new algorithm for Next-Basket Recommendation (NBR), enhances transformer-based models by incorporating item frequency information, improving their applicability to NBR tasks. Experiments show SAFERec outperforms all baselines, achieving an 8% improvement in Recall@10.
Transformer-based approaches such as BERT4Rec and SASRec demonstrate strong performance in Next Item Recommendation (NIR) tasks. However, applying these architectures to Next-Basket Recommendation (NBR) tasks, which often involve highly repetitive interactions, is challenging due to the vast number of possible item combinations in a basket. Moreover, frequency-based methods such as TIFU-KNN and UP-CF still demonstrate strong performance in NBR tasks, frequently outperforming deep-learning approaches. This paper introduces SAFERec, a novel algorithm for NBR that enhances transformer-based architectures from NIR by incorporating item frequency information, consequently improving their applicability to NBR tasks. Extensive experiments on multiple datasets show that SAFERec outperforms all other baselines, specifically achieving an 8% improvement in Recall@10.
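To make the frequency-enrichment idea concrete, below is a minimal PyTorch sketch that fuses a transformer encoder's content scores with per-user item-purchase frequencies. The module name, the additive fusion rule, and the `freq_mlp` head are illustrative assumptions, not SAFERec's published architecture.

```python
# Hypothetical sketch of frequency-enriched scoring; not the authors' code.
import torch
import torch.nn as nn

class FrequencyEnrichedScorer(nn.Module):
    def __init__(self, n_items: int, d_model: int = 64, n_heads: int = 2):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Maps each item's personal purchase frequency to a score offset.
        self.freq_mlp = nn.Sequential(
            nn.Linear(1, d_model), nn.ReLU(), nn.Linear(d_model, 1))

    def forward(self, history_ids, item_freqs):
        # history_ids: (B, T) flattened item ids from past baskets
        # item_freqs: (B, n_items) normalized per-user purchase frequencies
        h = self.encoder(self.item_emb(history_ids))      # (B, T, d)
        user_vec = h.mean(dim=1)                          # (B, d)
        content = user_vec @ self.item_emb.weight.T       # (B, n_items)
        freq = self.freq_mlp(item_freqs.unsqueeze(-1)).squeeze(-1)
        return content + freq  # frequency-aware logits over the catalog
```

The additive fusion lets the frequency signal dominate for highly repetitive users, the regime where TIFU-KNN-style methods traditionally win.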
2024-10-25
arXiv

Knowledge Graph Enhanced Language Agents for Recommendation

Taicheng Guo, Xiangliang Zhang, Chaochun Liu, Hai Wang, Varun Mannam
This paper introduces Knowledge Graph Enhanced Language Agents (KGLA), a framework that integrates knowledge graphs with language agents to improve recommendation systems by enriching user profiles and capturing complex relationships between users and items. The method substantially improves recommendation performance, with a 33%-95% boost in NDCG@1 on three widely used benchmarks.
Language agents have recently been used to simulate human behavior and user-item interactions for recommendation systems. However, current language agent simulations do not understand the relationships between users and items, leading to inaccurate user profiles and ineffective recommendations. In this work, we explore the utility of Knowledge Graphs (KGs), which contain extensive and reliable relationships between users and items, for recommendation. Our key insight is that the paths in a KG can capture complex relationships between users and items, eliciting the underlying reasons for user preferences and enriching user profiles. Leveraging this insight, we propose Knowledge Graph Enhanced Language Agents (KGLA), a framework that unifies language agents and KGs for recommendation systems. In the simulated recommendation scenario, we position the user and item within the KG and integrate KG paths as natural language descriptions into the simulation. This allows language agents to interact with each other and discover sufficient rationale behind their interactions, making the simulation more accurate and aligned with real-world cases, thus improving recommendation performance. Our experimental results show that KGLA significantly improves recommendation performance (with a 33%-95% boost in NDCG@1 across three widely used benchmarks) compared to the previous best baseline method.
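The core mechanism, turning KG paths between a user and an item into natural-language text for the agent's prompt, can be sketched as follows. This assumes a networkx graph whose edges carry a `relation` attribute; the `verbalize_paths` helper and its template are hypothetical, not the paper's prompt format.

```python
# Hypothetical KG-path verbalization; illustrates the KGLA idea only.
import networkx as nx

def verbalize_paths(kg: nx.Graph, user: str, item: str, cutoff: int = 3):
    """Render each KG path between a user and an item as a readable line."""
    lines = []
    for path in nx.all_simple_paths(kg, user, item, cutoff=cutoff):
        steps = []
        for a, b in zip(path, path[1:]):
            rel = kg.edges[a, b].get("relation", "related_to")
            steps.append(f"{a} --{rel}--> {b}")
        lines.append("; then ".join(steps))
    return lines

kg = nx.Graph()
kg.add_edge("user_1", "Inception", relation="watched")
kg.add_edge("Inception", "Christopher Nolan", relation="directed_by")
kg.add_edge("Christopher Nolan", "Interstellar", relation="directed")

# Each line would be appended to the user agent's profile description.
for line in verbalize_paths(kg, "user_1", "Interstellar"):
    print(line)
```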
2024-10-21
arXiv

STAR: A Simple Training-free Approach for Recommendations using Large Language Models

Adam Kraft, Long Jin, Nikhil Mehta, Taibai Xu, Lichan Hong
This paper introduces STAR, a training-free approach for recommendation systems that uses large language models (LLMs) to combine semantic embeddings with collaborative user information. The method achieves competitive next-item prediction performance without any fine-tuning, with Hits@10 gains of +23.8% on Beauty and +37.5% on Toys and Games relative to the best supervised models on the Amazon Review dataset.
Recent progress in large language models (LLMs) offers promising new approaches for recommendation system (RecSys) tasks. While the current state-of-the-art methods rely on fine-tuning LLMs to achieve optimal results, this process is costly and introduces significant engineering complexities. Conversely, methods that bypass fine-tuning and use LLMs directly are less resource-intensive but often fail to fully capture both semantic and collaborative information, resulting in sub-optimal performance compared to their fine-tuned counterparts. In this paper, we propose a Simple Training-free Approach for Recommendation (STAR), a framework that utilizes LLMs and can be applied to various recommendation tasks without the need for fine-tuning. Our approach involves a retrieval stage that uses semantic embeddings from LLMs combined with collaborative user information to retrieve candidate items. We then apply an LLM for pairwise ranking to enhance next-item prediction. Experimental results on the Amazon Review dataset show competitive performance for next-item prediction, even with our retrieval stage alone. Our full method achieves Hits@10 performance of +23.8% on Beauty, +37.5% on Toys and Games, and -1.8% on Sports and Outdoors relative to the best supervised models. This framework offers an effective alternative to traditional supervised models, highlighting the potential of LLMs in recommendation systems without extensive training or custom architectures.
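The retrieval stage's blend of semantic and collaborative signals can be sketched in a few lines. This assumes precomputed, L2-normalized LLM text embeddings and a normalized co-occurrence matrix; the centroid scoring and the blend weight `alpha` are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of a semantic + collaborative retrieval blend.
import numpy as np

def retrieve(history, item_embs, cooccur, alpha=0.5, k=10):
    """history: indices of the user's past items.
    item_embs: (n_items, d) L2-normalized LLM text embeddings.
    cooccur: (n_items, n_items) row-normalized co-occurrence counts."""
    centroid = item_embs[history].mean(axis=0)
    semantic = item_embs @ centroid           # cosine to history centroid
    collab = cooccur[history].mean(axis=0)    # items co-consumed with history
    scores = alpha * semantic + (1 - alpha) * collab
    scores[history] = -np.inf                 # exclude already-seen items
    return np.argsort(-scores)[:k]            # top-k candidates
```

The returned candidates would then be fed to an LLM prompt for pairwise ranking, the second stage the paper uses to refine next-item prediction.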
2024-03-04
arXiv

Wukong: Towards a Scaling Law for Large-Scale Recommendation

Daifeng Guo, Wenlin Chen, Maxim Naumov, Jongsoo Park, Ellie Dingqiao Wen
This paper introduces Wukong, a network architecture based on stacked factorization machines together with an upscaling strategy, to establish a scaling law for recommendation models. Wukong effectively captures diverse, high-order interactions and outperforms state-of-the-art models in quality. The results show that Wukong retains its superiority across two orders of magnitude in model complexity, beyond 100 GFLOP/example.
Scaling laws play an instrumental role in the sustainable improvement of model quality. Unfortunately, recommendation models to date do not exhibit scaling laws similar to those observed for large language models, due to the inefficiencies of their upscaling mechanisms. This limitation poses significant challenges in adapting these models to increasingly complex real-world datasets. In this paper, we propose an effective network architecture based purely on stacked factorization machines, together with a synergistic upscaling strategy, collectively dubbed Wukong, to establish a scaling law in the domain of recommendation. Wukong's unique design makes it possible to capture diverse interactions of any order simply through taller and wider layers. We conducted extensive evaluations on six public datasets, and our results demonstrate that Wukong consistently outperforms state-of-the-art models in quality. Further, we assessed Wukong's scalability on an internal, large-scale dataset. The results show that Wukong retains its superiority in quality over state-of-the-art models while holding to the scaling law across two orders of magnitude in model complexity, extending beyond 100 GFLOP/example, where prior art falls short.
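A minimal sketch of a stackable factorization-machine block in the spirit of Wukong is given below, assuming PyTorch; the projection shape, residual wiring, and the `FMBlock` name are simplified guesses rather than the paper's design.

```python
# Hypothetical stacked-FM block; illustrates the idea, not Meta's implementation.
import torch
import torch.nn as nn

class FMBlock(nn.Module):
    """Pairwise dot-product interactions between n embeddings, projected
    back to n embeddings so blocks can be stacked taller and wider."""
    def __init__(self, n_emb: int, d: int):
        super().__init__()
        self.n_emb, self.d = n_emb, d
        self.proj = nn.Linear(n_emb * n_emb, n_emb * d)

    def forward(self, x):               # x: (B, n_emb, d)
        inter = x @ x.transpose(1, 2)   # (B, n_emb, n_emb) interaction matrix
        out = self.proj(inter.flatten(1)).view(-1, self.n_emb, self.d)
        return out + x                  # residual keeps lower-order terms

# Stacking doubles the interaction order at each layer: layer k can express
# feature crosses of order up to 2^k, so taller stacks capture higher orders.
model = nn.Sequential(FMBlock(8, 16), FMBlock(8, 16), FMBlock(8, 16))
print(model(torch.randn(4, 8, 16)).shape)  # torch.Size([4, 8, 16])
```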