Dual Similarity Learning for Heterogeneous One-Class Collaborative Filtering

Xiancong Chen, Weike Pan, Zhong Ming
National Engineering Laboratory for Big Data System Computing Technology,
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ),
College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
chenxiancong@email.szu.edu.cn, {panweike, mingz}@szu.edu.cn

IEEE BigComp 2020
Introduction: Problem Definition

Problem: In this paper, we study the heterogeneous one-class collaborative filtering (HOCCF) problem.

Input: For each user $u \in \mathcal{U}$, we have a set of purchased items $\mathcal{I}^P_u$ and a set of examined items $\mathcal{I}^E_u$.

Goal: Our goal is to exploit these two types of one-class feedback and recommend a ranked list of items for each user $u$.
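To make the input format concrete, here is a minimal, hypothetical Python sketch of the two types of one-class feedback; the variable names (purchased, examined) and the toy IDs are illustrative assumptions, not from the paper.

```python
# A minimal, hypothetical representation of HOCCF input data:
# each user maps to a set of purchased items (target feedback)
# and a set of examined items (auxiliary feedback).
purchased = {  # I^P_u: items user u has purchased
    "u1": {"i2", "i5"},
    "u2": {"i1"},
}
examined = {   # I^E_u: items user u has examined (e.g., browsed)
    "u1": {"i1", "i3", "i7"},
    "u2": {"i2", "i5", "i6"},
}
# The goal is to produce, for each user, a ranked list of items
# drawn from those the user has not purchased yet.
```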
Introduction: Challenges

1. The sparsity of the target feedback.
2. The ambiguity and noise of the auxiliary feedback.
Introduction: Overview of Our Solution

[Figure: dual similarity between a target item and its neighbors ($s_{i'i}$, $s_{ji}$), and between a target user and its neighbors ($s_{u'u}$, $s_{wu}$).]

Dual similarity learning model (DSLM):
- Learn the similarity $s_{i'i}$ between a target item $i$ and a purchased item $i'$, and the similarity $s_{ji}$ between a target item $i$ and an examined item $j$.
- Learn the similarity $s_{u'u}$ between a target user $u$ and a user $u'$ who purchased item $i$, and the similarity $s_{wu}$ between a target user $u$ and a user $w$ who examined item $i$.
Introduction: Advantages of Our Solution

- By introducing the auxiliary feedback, DSLM is able to alleviate the sparsity problem to some extent.
- DSLM learns not only the similarity among items, but also the similarity among users, which is useful for capturing the correlations between users and items.
- DSLM strikes a good balance between the item-based similarity and the user-based similarity.
Introduction: Notations

$n$: number of users
$m$: number of items
$u, u', w \in \{1, 2, \ldots, n\}$: user ID
$i, i', j \in \{1, 2, \ldots, m\}$: item ID
$\mathcal{U} = \{u\}, |\mathcal{U}| = n$: the whole set of users
$\mathcal{I} = \{i\}, |\mathcal{I}| = m$: the whole set of items
$\mathcal{R}^P = \{(u, i)\}$: the whole set of purchases
$\mathcal{R}^E = \{(u, i)\}$: the whole set of examinations
$\mathcal{R}^A = \{(u, i)\}$: the set of absent pairs
$\mathcal{I}^P_u = \{i \mid (u, i) \in \mathcal{R}^P\}$: the set of purchased items w.r.t. user $u$
$\mathcal{I}^E_u = \{i \mid (u, i) \in \mathcal{R}^E\}$: the set of examined items w.r.t. user $u$
$\mathcal{U}^P_i = \{u \mid (u, i) \in \mathcal{R}^P\}$: the set of users that have purchased item $i$
$\mathcal{U}^E_i = \{u \mid (u, i) \in \mathcal{R}^E\}$: the set of users that have examined item $i$
$U_{u\cdot}, P_{u'\cdot}, E_{w\cdot} \in \mathbb{R}^{1 \times d}$: users' latent vectors
$V_{i\cdot}, \tilde{P}_{i'\cdot}, \tilde{E}_{j\cdot} \in \mathbb{R}^{1 \times d}$: items' latent vectors
$b_u, b_i \in \mathbb{R}$: user bias and item bias
$d$: number of latent features
$\hat{r}_{ui}$: predicted preference of user $u$ on item $i$
$\rho$: sampling parameter
$\gamma$: learning rate
$T, L, L_0$: iteration numbers
$\lambda_*, \alpha_*, \beta_*$: tradeoff parameters
Background: Factored Item Similarity Model (FISM)

In FISM, we can estimate the preference of user $u$ towards item $i$ by aggregating the similarity between item $i$ and all of its neighbors (i.e., $\mathcal{I}^P_u \setminus \{i\}$), which is shown as follows,

$$\hat{r}_{ui} = \frac{1}{\sqrt{|\mathcal{I}^P_u \setminus \{i\}|}} \sum_{i' \in \mathcal{I}^P_u \setminus \{i\}} \tilde{P}_{i'\cdot} V_{i\cdot}^T, \quad (1)$$

where we can regard the term $\frac{1}{\sqrt{|\mathcal{I}^P_u \setminus \{i\}|}} \sum_{i' \in \mathcal{I}^P_u \setminus \{i\}} \tilde{P}_{i'\cdot}$ as a virtual user profile w.r.t. the target feedback, denoting the distinct preference of user $u$.
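As an illustration of Eq. (1), here is a minimal Python sketch of the FISM score, assuming the item factors are stored as NumPy matrices P_tilde and V of shape m x d; these names and the data layout are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fism_score(u_purchased, i, P_tilde, V):
    """A minimal sketch of the FISM prediction in Eq. (1).

    u_purchased: set of item indices purchased by user u (I^P_u)
    i:           target item index
    P_tilde, V:  (m x d) item latent matrices (assumed names)
    """
    neighbors = [k for k in u_purchased if k != i]  # I^P_u \ {i}
    if not neighbors:
        return 0.0
    # Virtual user profile: normalized sum of the neighbor item factors.
    profile = P_tilde[neighbors].sum(axis=0) / np.sqrt(len(neighbors))
    return float(profile @ V[i])

# Example usage with random factors.
rng = np.random.default_rng(0)
P_tilde, V = rng.normal(size=(10, 4)), rng.normal(size=(10, 4))
print(fism_score({2, 5, 7}, 3, P_tilde, V))
```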
Background: Transfer via Joint Similarity Learning (TJSL)

TJSL introduces a new similarity term in a similar way to that of FISM, through which the knowledge from the auxiliary feedback (e.g., examination actions) can be transferred. Then, the preference estimation of user $u$ towards item $i$ is as follows,

$$\hat{r}_{ui} = \sum_{i' \in \mathcal{I}^P_u \setminus \{i\}} s_{i'i} + \sum_{j \in \mathcal{I}^{E(\ell)}_u} s_{ji}, \quad \mathcal{I}^{E(\ell)}_u \subseteq \mathcal{I}^E_u, \quad (2)$$

where $\sum_{j \in \mathcal{I}^{E(\ell)}_u} s_{ji} = \frac{1}{\sqrt{|\mathcal{I}^{E(\ell)}_u|}} \sum_{j \in \mathcal{I}^{E(\ell)}_u} \tilde{E}_{j\cdot} V_{i\cdot}^T$, and the term $\frac{1}{\sqrt{|\mathcal{I}^{E(\ell)}_u|}} \sum_{j \in \mathcal{I}^{E(\ell)}_u} \tilde{E}_{j\cdot}$ can be regarded as a virtual user profile w.r.t. the auxiliary feedback.
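Building on the previous sketch, the following hedged Python fragment adds the auxiliary similarity term of Eq. (2); the names E_tilde and u_examined_sel are illustrative assumptions.

```python
import numpy as np

def tjsl_score(u_purchased, u_examined_sel, i, P_tilde, E_tilde, V):
    """A sketch of the TJSL-style prediction in Eq. (2).

    u_purchased:    I^P_u, purchased items of user u
    u_examined_sel: I^{E(l)}_u, the selected subset of examined items
    P_tilde, E_tilde, V: (m x d) item latent matrices (assumed names)
    """
    score = 0.0
    neighbors = [k for k in u_purchased if k != i]  # I^P_u \ {i}
    if neighbors:
        score += P_tilde[neighbors].sum(axis=0) @ V[i] / np.sqrt(len(neighbors))
    aux = list(u_examined_sel)                      # I^{E(l)}_u
    if aux:
        score += E_tilde[aux].sum(axis=0) @ V[i] / np.sqrt(len(aux))
    return float(score)
```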
Method: Dual Similarity Learning Model (DSLM)

In TJSL, two kinds of similarity among items are learned. Symmetrically, we define the similarity among users as follows,

$$\sum_{u' \in \mathcal{U}^P_i \setminus \{u\}} s_{u'u} + \sum_{w \in \mathcal{U}^{E(\ell)}_i} s_{wu}, \quad \mathcal{U}^{E(\ell)}_i \subseteq \mathcal{U}^E_i, \quad (3)$$

where $\sum_{u' \in \mathcal{U}^P_i \setminus \{u\}} s_{u'u} = \frac{1}{\sqrt{|\mathcal{U}^P_i \setminus \{u\}|}} \sum_{u' \in \mathcal{U}^P_i \setminus \{u\}} P_{u'\cdot} U_{u\cdot}^T$ and $\sum_{w \in \mathcal{U}^{E(\ell)}_i} s_{wu} = \frac{1}{\sqrt{|\mathcal{U}^{E(\ell)}_i|}} \sum_{w \in \mathcal{U}^{E(\ell)}_i} E_{w\cdot} U_{u\cdot}^T$. Intuitively, we can also regard the terms $\frac{1}{\sqrt{|\mathcal{U}^P_i \setminus \{u\}|}} \sum_{u' \in \mathcal{U}^P_i \setminus \{u\}} P_{u'\cdot}$ and $\frac{1}{\sqrt{|\mathcal{U}^{E(\ell)}_i|}} \sum_{w \in \mathcal{U}^{E(\ell)}_i} E_{w\cdot}$ as virtual item profiles w.r.t. the target feedback and the auxiliary feedback, respectively.
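A symmetric sketch of the user-side terms in Eq. (3), assuming the user factors are stored as n x d NumPy matrices P, E and U; as before, all names are illustrative assumptions.

```python
import numpy as np

def user_side_score(i_purchasers, i_examiners_sel, u, P, E, U):
    """A sketch of the user-side similarity terms in Eq. (3).

    i_purchasers:    U^P_i, users who purchased item i
    i_examiners_sel: U^{E(l)}_i, selected users who examined item i
    P, E, U: (n x d) user latent matrices (assumed names)
    """
    score = 0.0
    neighbors = [v for v in i_purchasers if v != u]  # U^P_i \ {u}
    if neighbors:
        score += P[neighbors].sum(axis=0) @ U[u] / np.sqrt(len(neighbors))
    aux = list(i_examiners_sel)                      # U^{E(l)}_i
    if aux:
        score += E[aux].sum(axis=0) @ U[u] / np.sqrt(len(aux))
    return float(score)
```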
Method: Prediction Rule

The predicted preference of user $u$ on item $i$ is

$$\hat{r}^{(\ell)}_{ui} = \sum_{i' \in \mathcal{I}^P_u \setminus \{i\}} s_{i'i} + \sum_{j \in \mathcal{I}^{E(\ell)}_u} s_{ji} + \sum_{u' \in \mathcal{U}^P_i \setminus \{u\}} s_{u'u} + \sum_{w \in \mathcal{U}^{E(\ell)}_i} s_{wu} + b_u + b_i, \quad (4)$$

with $\mathcal{I}^{E(\ell)}_u \subseteq \mathcal{I}^E_u$ and $\mathcal{U}^{E(\ell)}_i \subseteq \mathcal{U}^E_i$, where $\mathcal{I}^{E(\ell)}_u$ is the set of likely-to-prefer items selected from $\mathcal{I}^E_u$, and $\mathcal{U}^{E(\ell)}_i$ is the set of potential users that are likely to purchase item $i$, selected from $\mathcal{U}^E_i$.
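Combining the sketches above, a possible rendering of the full prediction rule in Eq. (4); the data and params containers and their keys are assumptions made only for this illustration.

```python
def dslm_predict(u, i, data, params):
    """A sketch of the full DSLM prediction in Eq. (4), combining the
    item-side terms (Eq. 2), the user-side terms (Eq. 3), and the biases.
    `data` and `params` are assumed containers, not names from the paper."""
    item_part = tjsl_score(data["I_P"][u], data["I_E_sel"][u], i,
                           params["P_tilde"], params["E_tilde"], params["V"])
    user_part = user_side_score(data["U_P"][i], data["U_E_sel"][i], u,
                                params["P"], params["E"], params["U"])
    return item_part + user_part + params["b_u"][u] + params["b_i"][i]
```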
Method: Objective Function

The objective function of DSLM is as follows,

$$\min_{\Theta^{(\ell)},\, \mathcal{I}^{E(\ell)}_u \subseteq \mathcal{I}^E_u,\, \mathcal{U}^{E(\ell)}_i \subseteq \mathcal{U}^E_i} \; \sum_{(u,i) \in \mathcal{R}^P \cup \mathcal{R}^A} f^{(\ell)}_{ui}, \quad (5)$$

where

$$f^{(\ell)}_{ui} = \frac{1}{2}\left(r_{ui} - \hat{r}^{(\ell)}_{ui}\right)^2 + \frac{\lambda_u}{2}\|U_{u\cdot}\|_F^2 + \frac{\lambda_p}{2}\sum_{u' \in \mathcal{U}^P_i \setminus \{u\}}\|P_{u'\cdot}\|_F^2 + \frac{\lambda_e}{2}\sum_{w \in \mathcal{U}^{E(\ell)}_i}\|E_{w\cdot}\|_F^2 + \frac{\alpha_v}{2}\|V_{i\cdot}\|_F^2 + \frac{\alpha_p}{2}\sum_{i' \in \mathcal{I}^P_u \setminus \{i\}}\|\tilde{P}_{i'\cdot}\|_F^2 + \frac{\alpha_e}{2}\sum_{j \in \mathcal{I}^{E(\ell)}_u}\|\tilde{E}_{j\cdot}\|_F^2 + \frac{\beta_u}{2}b_u^2 + \frac{\beta_v}{2}b_i^2,$$

and the model parameters are $\Theta^{(\ell)} = \{U_{u\cdot}, P_{u'\cdot}, E_{w\cdot}, V_{i\cdot}, \tilde{P}_{i'\cdot}, \tilde{E}_{j\cdot}, b_u, b_i\}$. Note that $\mathcal{R}^A$ is the set of negative feedback used to complement the target feedback, where $r_{ui} = 1$ if $(u, i) \in \mathcal{R}^P$ and $r_{ui} = 0$ otherwise.
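The per-pair objective $f^{(\ell)}_{ui}$ of Eq. (5) could then be computed as below, reusing the prediction sketch above; the reg dictionary collecting the tradeoff parameters is an assumed convenience, not part of the paper.

```python
import numpy as np

def pointwise_loss(u, i, r_ui, data, params, reg):
    """A sketch of the per-pair objective f_ui in Eq. (5); `reg` holds the
    tradeoff parameters (lambda_u, lambda_p, lambda_e, alpha_v, alpha_p,
    alpha_e, beta_u, beta_v), with names assumed for illustration."""
    e_ui = r_ui - dslm_predict(u, i, data, params)
    item_nb = [k for k in data["I_P"][u] if k != i]   # I^P_u \ {i}
    user_nb = [v for v in data["U_P"][i] if v != u]   # U^P_i \ {u}
    loss = 0.5 * e_ui ** 2
    loss += 0.5 * reg["lambda_u"] * np.sum(params["U"][u] ** 2)
    loss += 0.5 * reg["lambda_p"] * np.sum(params["P"][user_nb] ** 2)
    loss += 0.5 * reg["lambda_e"] * np.sum(params["E"][list(data["U_E_sel"][i])] ** 2)
    loss += 0.5 * reg["alpha_v"] * np.sum(params["V"][i] ** 2)
    loss += 0.5 * reg["alpha_p"] * np.sum(params["P_tilde"][item_nb] ** 2)
    loss += 0.5 * reg["alpha_e"] * np.sum(params["E_tilde"][list(data["I_E_sel"][u])] ** 2)
    loss += 0.5 * reg["beta_u"] * params["b_u"][u] ** 2
    loss += 0.5 * reg["beta_v"] * params["b_i"][i] ** 2
    return loss
```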
Method: Gradients (1/2)

To learn the parameters $\Theta^{(\ell)}$, we use the stochastic gradient descent (SGD) algorithm. For a randomly sampled pair $(u, i) \in \mathcal{R}^P \cup \mathcal{R}^A$, the gradients of the model parameters are

$$\nabla U_{u\cdot} = -e_{ui} \frac{1}{\sqrt{|\mathcal{U}^P_i \setminus \{u\}|}} \sum_{u' \in \mathcal{U}^P_i \setminus \{u\}} P_{u'\cdot} - e_{ui} \frac{1}{\sqrt{|\mathcal{U}^{E(\ell)}_i|}} \sum_{w \in \mathcal{U}^{E(\ell)}_i} E_{w\cdot} + \lambda_u U_{u\cdot}, \quad (6)$$

$$\nabla V_{i\cdot} = -e_{ui} \frac{1}{\sqrt{|\mathcal{I}^P_u \setminus \{i\}|}} \sum_{i' \in \mathcal{I}^P_u \setminus \{i\}} \tilde{P}_{i'\cdot} - e_{ui} \frac{1}{\sqrt{|\mathcal{I}^{E(\ell)}_u|}} \sum_{j \in \mathcal{I}^{E(\ell)}_u} \tilde{E}_{j\cdot} + \alpha_v V_{i\cdot}, \quad (7)$$

$$\nabla P_{u'\cdot} = -e_{ui} \frac{1}{\sqrt{|\mathcal{U}^P_i \setminus \{u\}|}} U_{u\cdot} + \lambda_p P_{u'\cdot}, \quad u' \in \mathcal{U}^P_i \setminus \{u\}, \quad (8)$$
Method: Gradients (2/2)

$$\nabla E_{w\cdot} = -e_{ui} \frac{1}{\sqrt{|\mathcal{U}^{E(\ell)}_i|}} U_{u\cdot} + \lambda_e E_{w\cdot}, \quad w \in \mathcal{U}^{E(\ell)}_i, \quad (9)$$

$$\nabla \tilde{P}_{i'\cdot} = -e_{ui} \frac{1}{\sqrt{|\mathcal{I}^P_u \setminus \{i\}|}} V_{i\cdot} + \alpha_p \tilde{P}_{i'\cdot}, \quad i' \in \mathcal{I}^P_u \setminus \{i\}, \quad (10)$$

$$\nabla \tilde{E}_{j\cdot} = -e_{ui} \frac{1}{\sqrt{|\mathcal{I}^{E(\ell)}_u|}} V_{i\cdot} + \alpha_e \tilde{E}_{j\cdot}, \quad j \in \mathcal{I}^{E(\ell)}_u, \quad (11)$$

$$\nabla b_u = -e_{ui} + \beta_u b_u, \qquad \nabla b_i = -e_{ui} + \beta_v b_i, \quad (12)$$

where $e_{ui} = r_{ui} - \hat{r}_{ui}$ is the difference between the true preference and the predicted preference.
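The following sketch turns Eqs. (6)-(12) into one SGD step for a sampled pair (u, i), applying the update rule of Eq. (13) on the next slide right after computing each gradient; the container and key names are the same assumptions used above.

```python
import numpy as np

def sgd_step(u, i, r_ui, data, params, reg, gamma):
    """A sketch of one SGD step for a sampled pair (u, i) per Eqs. (6)-(13)."""
    e_ui = r_ui - dslm_predict(u, i, data, params)

    item_nb = [k for k in data["I_P"][u] if k != i]   # I^P_u \ {i}
    item_aux = list(data["I_E_sel"][u])               # I^{E(l)}_u
    user_nb = [v for v in data["U_P"][i] if v != u]   # U^P_i \ {u}
    user_aux = list(data["U_E_sel"][i])               # U^{E(l)}_i

    U, V = params["U"], params["V"]
    P, E = params["P"], params["E"]
    Pt, Et = params["P_tilde"], params["E_tilde"]

    # Gradients of U_u. and V_i. (Eqs. 6-7), computed before any update.
    grad_U = reg["lambda_u"] * U[u]
    if user_nb:
        grad_U = grad_U - e_ui * P[user_nb].sum(axis=0) / np.sqrt(len(user_nb))
    if user_aux:
        grad_U = grad_U - e_ui * E[user_aux].sum(axis=0) / np.sqrt(len(user_aux))
    grad_V = reg["alpha_v"] * V[i]
    if item_nb:
        grad_V = grad_V - e_ui * Pt[item_nb].sum(axis=0) / np.sqrt(len(item_nb))
    if item_aux:
        grad_V = grad_V - e_ui * Et[item_aux].sum(axis=0) / np.sqrt(len(item_aux))

    # Gradients of the neighbor factors (Eqs. 8-11) and biases (Eq. 12),
    # each followed by the update of Eq. (13).
    if user_nb:
        P[user_nb] -= gamma * (-e_ui * U[u] / np.sqrt(len(user_nb)) + reg["lambda_p"] * P[user_nb])
    if user_aux:
        E[user_aux] -= gamma * (-e_ui * U[u] / np.sqrt(len(user_aux)) + reg["lambda_e"] * E[user_aux])
    if item_nb:
        Pt[item_nb] -= gamma * (-e_ui * V[i] / np.sqrt(len(item_nb)) + reg["alpha_p"] * Pt[item_nb])
    if item_aux:
        Et[item_aux] -= gamma * (-e_ui * V[i] / np.sqrt(len(item_aux)) + reg["alpha_e"] * Et[item_aux])
    U[u] -= gamma * grad_U
    V[i] -= gamma * grad_V
    params["b_u"][u] -= gamma * (-e_ui + reg["beta_u"] * params["b_u"][u])
    params["b_i"][i] -= gamma * (-e_ui + reg["beta_v"] * params["b_i"][i])
```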
Method: Update Rules

We have the update rule

$$\theta^{(\ell)} \leftarrow \theta^{(\ell)} - \gamma \nabla \theta^{(\ell)}, \quad (13)$$

where $\gamma$ is the learning rate, and $\theta^{(\ell)} \in \Theta^{(\ell)}$ is a model parameter to be learned.
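A minimal outer training loop built on the step above; the way negative pairs are drawn from $\mathcal{R}^A$ with the sampling parameter $\rho$ is an assumption about the procedure, not a detail stated on this slide.

```python
import random

def train(data, params, reg, gamma, num_epochs, rho):
    """A sketch of the outer loop: each epoch samples all purchase pairs plus
    roughly rho times as many absent pairs as negative feedback (assumed use
    of the sampling parameter rho), then applies the SGD step above."""
    for _ in range(num_epochs):
        pairs = [(u, i, 1.0) for (u, i) in data["R_P"]]
        # Assumes R_A is a list with at least rho * |R_P| absent pairs.
        num_neg = min(len(data["R_A"]), int(rho * len(data["R_P"])))
        pairs += [(u, i, 0.0) for (u, i) in random.sample(data["R_A"], num_neg)]
        random.shuffle(pairs)
        for u, i, r_ui in pairs:
            sgd_step(u, i, r_ui, data, params, reg, gamma)
```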
Method: Identification of $\mathcal{I}^{E(\ell)}_u$ and $\mathcal{U}^{E(\ell)}_i$

Note that we identify $\mathcal{U}^{E(\ell)}_i$ and $\mathcal{I}^{E(\ell)}_u$ in the following way (see the sketch after this list):

- For each user $u \in \mathcal{U}^E_i$, we estimate the preference for the target item $i$, i.e., $\hat{r}^{(\ell)}_{ui}$, and take the $\tau |\mathcal{U}^E_i|$ ($\tau \in (0, 1]$) users with the highest scores as the potential users that are likely to purchase the target item $i$.
- For each item $j \in \mathcal{I}^E_u$, similarly, we estimate the preference $\hat{r}^{(\ell)}_{uj}$ and take the $\tau |\mathcal{I}^E_u|$ items with the highest scores as the candidate items.

Finally, we save the model and data of the last $L_0$ epochs. The estimated preference is the average value of $\hat{r}^{(\ell)}_{ui}$, where $\ell$ ranges from $L - L_0 + 1$ to $L$.
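A small sketch of the top-$\tau$ selection described above; score_fn is an assumed callable wrapping the (possibly averaged over the last $L_0$ epochs) preference estimate, and the commented usage line relies on the hypothetical containers introduced earlier.

```python
def select_candidates(candidates, score_fn, tau):
    """A sketch of the top-tau selection used to build U^{E(l)}_i and
    I^{E(l)}_u: score every candidate and keep the fraction tau with the
    highest estimated preferences."""
    ranked = sorted(candidates, key=score_fn, reverse=True)
    keep = max(1, int(tau * len(candidates)))
    return set(ranked[:keep])

# Example: keep the top 50% of the users who examined item i,
# scored by the model's predicted preference for item i.
# data["U_E_sel"][i] = select_candidates(
#     data["U_E"][i], lambda w: dslm_predict(w, i, data, params), 0.5)
```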