Visual Models

Summary

- Attentive Collaborative Filtering: Multimedia Recommendation with Item- and Component-Level Attention
- DeepStyle: Learning User Preferences for Visual Recommendation
- Visually-Aware Fashion Recommendation and Design with Generative Image Models
- VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback
- Visual Neural Personalized Ranking for Image Recommendation
- Adversarial Multimedia Recommender
ACF

class elliot.recommender.visual_recommenders.ACF.ACF.ACF(data, config, params, *args, **kwargs)

Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel

Attentive Collaborative Filtering: Multimedia Recommendation with Item- and Component-Level Attention

For further details, please refer to the paper.

Parameters:
lr – Learning rate
epochs – Number of epochs
factors – Number of latent factors
batch_size – Batch size
l_w – Regularization coefficient
layers_component – Tuple with number of units for each attentive layer (component-level)
layers_item – Tuple with number of units for each attentive layer (item-level)
To include the recommendation model, add it to the config file adopting the following pattern:
  models:
    ACF:
      meta:
        save_recs: True
      lr: 0.0005
      epochs: 50
      factors: 100
      batch_size: 128
      l_w: 0.000025
      layers_component: (64, 1)
      layers_item: (64, 1)
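Once such a configuration is saved to a YAML file, an Elliot experiment is launched through the framework's `run_experiment` entry point. The sketch below writes a minimal configuration following the ACF pattern above to disk; the surrounding `experiment`/`dataset` keys and the file name are illustrative placeholders, and the launch call is left commented out because it requires a full Elliot installation and a prepared dataset.

```python
from pathlib import Path

# Minimal config following the ACF pattern above; the surrounding
# `experiment`/`dataset` keys are illustrative placeholders.
config_text = """experiment:
  dataset: example_dataset
  models:
    ACF:
      meta:
        save_recs: True
      lr: 0.0005
      epochs: 50
      factors: 100
      batch_size: 128
      l_w: 0.000025
      layers_component: (64, 1)
      layers_item: (64, 1)
"""

config_path = Path("config_acf.yml")
config_path.write_text(config_text)

# Launching the experiment (requires Elliot and a prepared dataset):
# from elliot.run import run_experiment
# run_experiment(str(config_path))
```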
DeepStyle

class elliot.recommender.visual_recommenders.DeepStyle.DeepStyle.DeepStyle(data, config, params, *args, **kwargs)

Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel

DeepStyle: Learning User Preferences for Visual Recommendation

For further details, please refer to the paper.

Parameters:
lr – Learning rate
epochs – Number of epochs
factors – Number of latent factors
batch_size – Batch size
batch_eval – Batch size for evaluation
l_w – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
  models:
    DeepStyle:
      meta:
        save_recs: True
      lr: 0.0005
      epochs: 50
      factors: 100
      batch_size: 128
      batch_eval: 512
      l_w: 0.000025
DVBPR

class elliot.recommender.visual_recommenders.DVBPR.DVBPR.DVBPR(data, config, params, *args, **kwargs)

Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel

Visually-Aware Fashion Recommendation and Design with Generative Image Models

For further details, please refer to the paper.

Parameters:
lr – Learning rate
epochs – Number of epochs
factors – Number of latent factors
batch_size – Batch size
batch_eval – Batch size for evaluation
lambda_1 – Regularization coefficient
lambda_2 – CNN regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
  models:
    DVBPR:
      meta:
        save_recs: True
      lr: 0.0001
      epochs: 50
      factors: 100
      batch_size: 128
      batch_eval: 128
      lambda_1: 0.0001
      lambda_2: 1.0
VBPR

class elliot.recommender.visual_recommenders.VBPR.VBPR.VBPR(data, config, params, *args, **kwargs)

Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel

VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback

For further details, please refer to the paper.

Parameters:
lr – Learning rate
epochs – Number of epochs
factors – Number of latent factors
factors_d – Dimension of visual factors
batch_size – Batch size
batch_eval – Batch size for evaluation
l_w – Regularization coefficient
l_b – Regularization coefficient of bias
l_e – Regularization coefficient of projection matrix
To include the recommendation model, add it to the config file adopting the following pattern:
  models:
    VBPR:
      meta:
        save_recs: True
      lr: 0.0005
      epochs: 50
      factors: 100
      factors_d: 20
      batch_size: 128
      batch_eval: 128
      l_w: 0.000025
      l_b: 0
      l_e: 0.002
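For intuition on how factors and factors_d interact: in the VBPR paper's scoring function, the latent term uses factors-dimensional user and item embeddings, while the visual term projects raw image features through an embedding matrix E into a factors_d-dimensional space shared with the user's visual factors. Below is a framework-free sketch of that score (the bias terms are omitted for brevity); all names and toy values are illustrative, not Elliot's API.

```python
def vbpr_score(gamma_u, gamma_i, theta_u, E, f_i):
    """Simplified VBPR preference: <gamma_u, gamma_i> + <theta_u, E @ f_i>.

    gamma_u, gamma_i: latent embeddings of length `factors`
    theta_u:          user visual factors of length `factors_d`
    E:                `factors_d` x len(f_i) projection matrix
    f_i:              raw image features of item i
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    projected = [dot(row, f_i) for row in E]  # E @ f_i, length factors_d
    return dot(gamma_u, gamma_i) + dot(theta_u, projected)

# Toy dimensions: factors = 2, factors_d = 1, image features of length 2.
score = vbpr_score([1.0, 0.0], [0.5, 2.0], [1.0], [[0.5, 0.5]], [2.0, 4.0])
print(score)  # -> 3.5
```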
VNPR

class elliot.recommender.visual_recommenders.VNPR.VNPR.VNPR(data, config, params, *args, **kwargs)

Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel

Visual Neural Personalized Ranking for Image Recommendation

For further details, please refer to the paper.

Parameters:
lr – Learning rate
epochs – Number of epochs
mf_factors – Number of latent factors for matrix factorization
mlp_hidden_size – Tuple with number of units for each multi-layer perceptron layer
prob_keep_dropout – Dropout rate for the multi-layer perceptron
batch_size – Batch size
batch_eval – Batch size for evaluation
l_w – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
  models:
    VNPR:
      meta:
        save_recs: True
      lr: 0.001
      epochs: 50
      mf_factors: 10
      mlp_hidden_size: (32, 1)
      prob_keep_dropout: 0.2
      batch_size: 64
      batch_eval: 64
      l_w: 0.001
AMR

class elliot.recommender.adversarial.AMR.AMR(data, config, params, *args, **kwargs)

Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel

Adversarial Multimedia Recommender

For further details, please refer to the paper.

The model supports two adversarial perturbation methods:
- FGSM-based, presented by X. He et al. in <https://arxiv.org/pdf/1809.07062.pdf>
- MSAP, presented by Anelli et al. in <https://journals.flvc.org/FLAIRS/article/view/128443>
Parameters:
eval_perturbations (under meta) – If True, Elliot evaluates the effects of both FGSM and MSAP perturbations at each validation epoch
factors – Number of latent factors
factors_d – Image-feature dimensionality
lr – Learning rate
l_w – Regularization coefficient
l_b – Regularization coefficient of bias
l_e – Regularization coefficient of image matrix embedding
eps – Perturbation Budget
l_adv – Adversarial regularization coefficient
adversarial_epochs – Adversarial epochs
eps_iter – Size of perturbations in MSAP perturbations
nb_iter – Number of Iterations in MSAP perturbations
To include the recommendation model, add it to the config file adopting the following pattern:
  models:
    AMR:
      meta:
        save_recs: True
        eval_perturbations: True
      epochs: 10
      batch_size: 512
      factors: 200
      factors_d: 20
      lr: 0.001
      l_w: 0.1
      l_b: 0.001
      l_e: 0.1
      eps: 0.1
      l_adv: 0.001
      adversarial_epochs: 5
      nb_iter: 20
      eps_iter: 0.00001 # If not specified, defaults to 2.5*eps/nb_iter
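As a rough illustration of the FGSM-style attack referenced above: the perturbation added to the image-feature embeddings is the sign of the loss gradient scaled by the budget eps, while MSAP instead iterates smaller steps of size eps_iter for nb_iter rounds. The snippet below is a dependency-free toy of the single-step case, not Elliot's implementation; the embedding and gradient values are hypothetical.

```python
def fgsm_perturb(embedding, grad, eps):
    """Single-step FGSM: add eps * sign(gradient), element-wise."""
    sign = lambda g: (g > 0) - (g < 0)  # 1, -1, or 0
    return [e + eps * sign(g) for e, g in zip(embedding, grad)]

# Hypothetical embedding and gradient values, eps = 0.25.
perturbed = fgsm_perturb([0.5, -0.25, 0.1], [0.3, -0.7, 0.0], 0.25)
print(perturbed)  # -> [0.75, -0.5, 0.1]
```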