https://papers.cool/arxiv/2403.01744
Authors: Chao Zhang, Shiwei Wu, Haoxin Zhang, Tong Xu, Yan Gao, Yao Hu, Enhong Chen
Summary: People enjoy sharing "notes" about their experiences within online communities. Recommending notes aligned with user interests has therefore become a crucial task. Existing online methods simply feed notes into BERT-based models to generate note embeddings for assessing similarity. However, they may underutilize important cues, e.g., hashtags or categories, which represent the key concepts of a note. Indeed, learning to generate hashtags/categories can potentially enhance note embeddings, since both compress key note information into limited content. Besides, Large Language Models (LLMs) have significantly outperformed BERT in understanding natural language, so it is promising to introduce LLMs into note recommendation. In this paper, we propose a novel unified framework called NoteLLM, which leverages LLMs for item-to-item (I2I) note recommendation. Specifically, we utilize a Note Compression Prompt to compress a note into a single special token, and further learn embeddings of potentially related notes via a contrastive learning approach. Moreover, we use NoteLLM to summarize the note and generate its hashtag/category automatically through instruction tuning. Extensive validation in real scenarios demonstrates the effectiveness of the proposed method compared with the online baseline and shows major improvements in the recommendation system of Xiaohongshu.
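The contrastive objective mentioned in the abstract (learning related notes' embeddings so that co-related pairs score higher than in-batch negatives) can be sketched as an InfoNCE-style loss. This is a minimal illustration, not the paper's implementation: the temperature value, cosine similarity, and in-batch negative sampling are assumptions, and the embeddings would in practice come from the LLM's compression token rather than random vectors.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.07):
    """In-batch contrastive (InfoNCE) loss over note embeddings.

    anchors[i] and positives[i] are embeddings of a note and a related
    note (e.g., frequently co-clicked); every other row in the batch
    serves as a negative. Temperature 0.07 is an assumed default.
    """
    # L2-normalize so the dot product is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                    # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The matching (related) pair sits on the diagonal.
    return -np.mean(np.diag(log_probs))

# Toy usage: matched pairs should yield a lower loss than random pairs.
rng = np.random.default_rng(0)
notes = rng.normal(size=(8, 16))
loss_matched = info_nce_loss(notes, notes.copy())
loss_random = info_nce_loss(notes, rng.normal(size=(8, 16)))
```

Minimizing this loss pulls each note's embedding toward its related note and pushes it away from unrelated notes in the batch, which is the standard mechanism behind I2I retrieval training.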



Q: What problem does this paper try to solve?
A: This paper proposes NoteLLM, a unified framework that leverages Large Language Models (LLMs) to recommend notes aligned with user interests in online communities. Specifically, it aims to address several key problems:
In summary, the NoteLLM framework combines the strong language-understanding ability of LLMs with a multi-task learning strategy to improve both the accuracy of the note recommendation system and the user experience.
Q: What related work is there?
A: In this paper, the authors discuss several research areas and specific works related to NoteLLM, mainly including: