Mehdi Hosseinzadeh Aghdam

Academic rank: Associate Professor
ORCID: 0000-0002-3922-9991
Education: Ph.D.
ScopusId: 57194843379
H-Index: 19
Faculty: Faculty of Engineering
Address: Department of Computer Engineering, University of Bonab, Bonab, Iran
Phone: 041-37741636

Research

Title
Group attention for collaborative filtering with sequential feedback and context aware attributes
Type
Journal Paper
Keywords
Collaborative Filtering, Attention Mechanism, Recommender Systems, GRU
Year
2025
Journal
Scientific Reports
DOI
Researchers
Hadise Vaghari, Mehdi Hosseinzadeh Aghdam, Hojjat Emami

Abstract

The deployment of recommender systems has become increasingly widespread, leveraging users’ past behaviors to predict future preferences. Collaborative Filtering (CF) is a foundational method that relies on user-item interactions. However, individual variations in rating patterns and the dynamic interplay of item attributes make it challenging to model user preferences accurately. Existing attention-based methods are often unreliable in capturing fine-grained item-attribute relationships and in providing global explanations across the temporal, attribute, and item levels. To overcome these limitations, we propose GCORec, a novel framework that integrates short- and long-term user preferences. A Hierarchical Attention Network captures intricate item-attribute relationships, while a Group-wise Enhancement mechanism improves feature representations by reducing noise and emphasizing important attributes. An Attentive Bidirectional GRU models long-term user behaviors, and a Collaborative Multi-Head Attention mechanism evaluates the effect of item attributes on user preferences. Experiments on benchmark datasets demonstrate the advantages of GCORec: it improves over the best baselines by 3.03% and 1.49% in Recall@20, and by 5.88% and 5.92% in NDCG@20, on two real-world datasets with different levels of sparsity and domain characteristics.
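
To make the long-term preference component concrete, here is a minimal sketch of an attentive bidirectional GRU over a user's interaction sequence. This is not the paper's implementation: the class name AttentiveBiGRU, the embedding and hidden sizes, and the single-layer additive attention are all illustrative assumptions.

```python
# Sketch of an attentive bidirectional GRU for long-term preference modeling.
# Hypothetical names and shapes; not the authors' GCORec implementation.
import torch
import torch.nn as nn

class AttentiveBiGRU(nn.Module):
    def __init__(self, emb_dim: int = 64, hidden_dim: int = 64):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True,
                          bidirectional=True)
        # One attention score per time step, from both GRU directions.
        self.attn = nn.Linear(2 * hidden_dim, 1)

    def forward(self, item_embs: torch.Tensor) -> torch.Tensor:
        # item_embs: (batch, seq_len, emb_dim) embeddings of consumed items.
        states, _ = self.gru(item_embs)         # (batch, seq_len, 2*hidden)
        scores = self.attn(states)              # (batch, seq_len, 1)
        weights = torch.softmax(scores, dim=1)  # attend over time steps
        # Weighted sum -> one long-term preference vector per user.
        return (weights * states).sum(dim=1)    # (batch, 2*hidden)

user_vec = AttentiveBiGRU()(torch.randn(8, 50, 64))  # e.g. 8 users, 50 items
print(user_vec.shape)  # torch.Size([8, 128])
```

Attention pooling over all time steps, rather than taking only the final hidden state, is what lets such an encoder weight older interactions that remain predictive of long-term taste.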
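The reported gains use standard top-K ranking metrics. Recall@K and NDCG@K admit minor variants across papers; the sketch below uses the common binary-relevance definitions at the abstract's cutoff K = 20.

```python
# Reference computation of Recall@K and NDCG@K for one user's ranked list,
# with binary relevance. K = 20 matches the abstract's reported cutoff.
import math

def recall_at_k(ranked: list, relevant: set, k: int = 20) -> float:
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant) if relevant else 0.0

def ndcg_at_k(ranked: list, relevant: set, k: int = 20) -> float:
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0
```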