elliot.recommender.adversarial.AMR package¶
Submodules¶
elliot.recommender.adversarial.AMR.AMR module¶
Module description:
class elliot.recommender.adversarial.AMR.AMR.AMR(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel

Adversarial Multimedia Recommender
For further details, please refer to the papers below. The model supports two adversarial perturbation methods:
- FGSM-based, presented by X. He et al. in the paper <https://arxiv.org/pdf/1809.07062.pdf>
- MSAP, presented by Anelli et al. in the paper <https://journals.flvc.org/FLAIRS/article/view/128443>
- Parameters
meta – eval_perturbations: if True, Elliot evaluates the effects of both FGSM and MSAP perturbations at each validation epoch
factors – Number of latent factors
factors_d – Image-feature dimensionality
lr – Learning rate
l_w – Regularization coefficient
l_b – Regularization coefficient of the bias
l_e – Regularization coefficient of the image embedding matrix
eps – Perturbation budget
l_adv – Adversarial regularization coefficient
adversarial_epochs – Number of adversarial epochs
eps_iter – Size of the perturbation steps in MSAP
nb_iter – Number of iterations in MSAP
To include the recommendation model, add it to the config file adopting the following pattern:

models:
  AMR:
    meta:
      save_recs: True
      eval_perturbations: True
    epochs: 10
    batch_size: 512
    factors: 200
    factors_d: 20
    lr: 0.001
    l_w: 0.1
    l_b: 0.001
    l_e: 0.1
    eps: 0.1
    l_adv: 0.001
    adversarial_epochs: 5
    nb_iter: 20
    eps_iter: 0.00001  # If not specified = 2.5*eps/nb_iter
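The comment on eps_iter above notes that, when it is not specified, the step size defaults to 2.5*eps/nb_iter. A quick check of that relation under the example values (this is only the formula from the config comment, not Elliot code):

```python
# Default MSAP step size when eps_iter is omitted, per the config comment.
eps, nb_iter = 0.1, 20
eps_iter_default = 2.5 * eps / nb_iter
print(eps_iter_default)  # 0.0125
```

Note that the explicit value in the example (0.00001) is much smaller than this default, i.e. the example deliberately uses finer perturbation steps.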
property name¶
elliot.recommender.adversarial.AMR.AMR_model module¶
Module description:
class elliot.recommender.adversarial.AMR.AMR_model.AMR_model(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.training.Model
build_msap_perturbation(batch, eps_iter, nb_iter, delta_f=None)[source]¶
Evaluate the adversarial perturbation with MSAP: https://journals.flvc.org/FLAIRS/article/view/128443
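MSAP builds the perturbation iteratively rather than in a single gradient step. A minimal NumPy sketch of the idea, assuming normalized-gradient steps of size eps_iter with the accumulated perturbation projected back onto the eps ball (grad_fn and x are illustrative stand-ins, not Elliot's internals; the real model computes gradients with TensorFlow):

```python
import numpy as np

def msap_perturbation(grad_fn, x, eps, eps_iter, nb_iter):
    """Iterated normalized-gradient perturbation, clipped to the eps ball."""
    delta = np.zeros_like(x)
    for _ in range(nb_iter):
        g = grad_fn(x + delta)                       # gradient at the perturbed point
        delta = delta + eps_iter * g / (np.linalg.norm(g) + 1e-12)
        norm = np.linalg.norm(delta)
        if norm > eps:                               # project back onto the eps-ball
            delta = delta * (eps / norm)
    return delta

# Toy loss 0.5*||x||^2, whose gradient is x itself.
delta = msap_perturbation(lambda v: v, np.array([3.0, 4.0]),
                          eps=0.1, eps_iter=0.01, nb_iter=20)
```

With enough iterations the perturbation saturates the budget, so its L2 norm ends up at eps.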
build_perturbation(batch, delta_f=None)[source]¶
Evaluate the adversarial perturbation with an FGSM-like approach
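The FGSM-like approach takes a single step in the direction of the loss gradient, scaled to the budget eps. A minimal NumPy sketch under that assumption (grad stands in for the gradient of the pairwise loss w.r.t. the image feature embeddings, which the real model would obtain via automatic differentiation):

```python
import numpy as np

def fgsm_perturbation(grad, eps):
    """Single-step perturbation: eps times the L2-normalized gradient."""
    norm = np.linalg.norm(grad) + 1e-12   # avoid division by zero
    return eps * grad / norm

delta = fgsm_perturbation(np.array([3.0, 4.0]), eps=0.1)
# By construction the perturbation has L2 norm eps.
```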
call(inputs, adversarial=False, training=None)[source]¶
Calls the model on new inputs. In this case call just reapplies all ops in the graph to the new inputs (e.g. builds a new computational graph from the provided inputs).

- Parameters
inputs – A tensor or list of tensors.
training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask – A mask or list of masks. A mask can be either a tensor or None (no mask).
- Returns
A tensor if there is a single output, or a list of tensors if there is more than one output.
get_config()[source]¶
Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
train_step(batch, use_adv_train=False)[source]¶
The logic for one training step.
This method can be overridden to support custom training logic. This method is called by Model.make_train_function.
This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.
Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.
- Parameters
data – A nested structure of `Tensor`s.
- Returns
A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
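When adversarial training is enabled, the training objective typically augments the pairwise loss with a copy of that loss computed on the perturbed embeddings, weighted by l_adv. A hedged NumPy sketch of that objective, assuming a BPR pairwise loss (all function and argument names here are illustrative, not Elliot's actual internals):

```python
import numpy as np

def bpr_loss(pos_score, neg_score):
    """-log(sigmoid(pos - neg)), written via log1p/exp for stability."""
    return np.log1p(np.exp(-(pos_score - neg_score)))

def adversarial_objective(pos, neg, pos_adv, neg_adv, l_adv):
    """Clean BPR loss plus the adversarially perturbed loss, weighted by l_adv."""
    return bpr_loss(pos, neg) + l_adv * bpr_loss(pos_adv, neg_adv)

total = adversarial_objective(0.0, 0.0, 0.0, 0.0, l_adv=0.5)
```

The regularizer pushes the model to keep its ranking stable even when the image features are perturbed within the eps budget.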
Module contents¶
Module description: