elliot.recommender.autoencoders.dae package

Submodules

elliot.recommender.autoencoders.dae.multi_dae module

Module description:

class elliot.recommender.autoencoders.dae.multi_dae.MultiDAE(data, config, params, *args, **kwargs)[source]

Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel

Collaborative denoising autoencoder

For further details, please refer to the paper

Parameters
  • intermediate_dim – Dimension of the intermediate (hidden) layer

  • latent_dim – Number of latent factors

  • reg_lambda – Regularization coefficient

  • lr – Learning rate

  • dropout_pkeep – Dropout keep probability

To include the recommendation model, add it to the config file adopting the following pattern:

models:
  MultiDAE:
    meta:
      save_recs: True
    epochs: 10
    batch_size: 512
    intermediate_dim: 600
    latent_dim: 200
    reg_lambda: 0.01
    lr: 0.001
    dropout_pkeep: 1

property name
train()[source]
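
Once the block above is part of an experiment configuration file, the experiment can be launched from Python through Elliot's run_experiment entry point. A minimal sketch, assuming the configuration is saved as config_files/multidae.yml (the path is illustrative):

from elliot.run import run_experiment

# Runs the full pipeline (data loading, training, evaluation) for every model
# declared in the configuration file. The file path below is an assumption.
run_experiment("config_files/multidae.yml")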

elliot.recommender.autoencoders.dae.multi_dae_model module

Module description:

class elliot.recommender.autoencoders.dae.multi_dae_model.Decoder(*args, **kwargs)[source]

Bases: tensorflow.python.keras.engine.base_layer.Layer

Converts z, the encoded vector, back into a user interaction vector.

call(inputs, **kwargs)[source]
class elliot.recommender.autoencoders.dae.multi_dae_model.DenoisingAutoEncoder(*args, **kwargs)[source]

Bases: tensorflow.python.keras.engine.training.Model

Combines the encoder and decoder into an end-to-end model for training.

call(inputs, training=None, **kwargs)[source]
get_config()[source]

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.

get_top_k(preds, train_mask, k=100)[source]
predict(inputs, training=False, **kwargs)[source]

Get full predictions on the whole user-item matrix.

Returns

The matrix of predicted values.

train_step(batch)[source]
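
predict scores every item for every user, and get_top_k then removes items already seen in training and keeps the k highest-scoring ones per user. A minimal NumPy sketch of that masking-and-ranking step, assuming train_mask is True where an item is still recommendable (this mirrors the intended behaviour rather than the exact Elliot code, which operates on TensorFlow tensors):

import numpy as np

def get_top_k(preds, train_mask, k=100):
    # preds: (num_users, num_items) scores; train_mask: True where the item
    # was NOT seen in training and may therefore be recommended (assumption).
    masked = np.where(train_mask, preds, -np.inf)
    top_k_idx = np.argpartition(-masked, k - 1, axis=1)[:, :k]      # unordered top-k
    order = np.argsort(-np.take_along_axis(masked, top_k_idx, axis=1), axis=1)
    top_k_idx = np.take_along_axis(top_k_idx, order, axis=1)        # sorted by score
    return np.take_along_axis(masked, top_k_idx, axis=1), top_k_idx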
class elliot.recommender.autoencoders.dae.multi_dae_model.Encoder(*args, **kwargs)[source]

Bases: tensorflow.python.keras.engine.base_layer.Layer

Maps user-item interactions to a latent representation z.

call(inputs, training=None)[source]
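
The three classes above fit together in the usual Keras pattern: the encoder corrupts and compresses a user's interaction vector, the decoder reconstructs item scores, and the model trains them end-to-end with a multinomial reconstruction loss plus L2 regularization, following the standard Mult-DAE objective. The sketch below is a simplified stand-in, not the Elliot source; the catalogue size, tanh activations, input L2 normalization and the exact loss form are assumed rather than taken from the module.

import numpy as np
import tensorflow as tf

num_items = 1000                         # illustrative catalogue size
intermediate_dim, latent_dim = 600, 200
reg_lambda, dropout_pkeep = 0.01, 1.0

class Encoder(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Input corruption: dropout rate is 1 - keep probability.
        self.dropout = tf.keras.layers.Dropout(1.0 - dropout_pkeep)
        self.hidden = tf.keras.layers.Dense(intermediate_dim, activation="tanh")
        self.latent = tf.keras.layers.Dense(latent_dim, activation="tanh")

    def call(self, inputs, training=None):
        x = tf.nn.l2_normalize(inputs, axis=1)
        x = self.dropout(x, training=training)
        return self.latent(self.hidden(x))

class Decoder(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.hidden = tf.keras.layers.Dense(intermediate_dim, activation="tanh")
        self.logits = tf.keras.layers.Dense(num_items)

    def call(self, inputs, **kwargs):
        return self.logits(self.hidden(inputs))

class DenoisingAutoEncoder(tf.keras.Model):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.encoder = Encoder()
        self.decoder = Decoder()

    def call(self, inputs, training=None, **kwargs):
        return self.decoder(self.encoder(inputs, training=training))

    def train_step(self, batch):
        with tf.GradientTape() as tape:
            logits = self(batch, training=True)
            # Multinomial log-likelihood of the observed interactions.
            neg_ll = -tf.reduce_mean(
                tf.reduce_sum(tf.nn.log_softmax(logits) * batch, axis=1))
            reg = reg_lambda * tf.add_n(
                [tf.nn.l2_loss(w) for w in self.trainable_weights])
            loss = neg_ll + reg
        grads = tape.gradient(loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
        return {"loss": loss}

# Illustrative usage on random binary interactions.
model = DenoisingAutoEncoder()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001))
interactions = np.random.binomial(1, 0.05, (64, num_items)).astype(np.float32)
model.fit(interactions, epochs=1, batch_size=32)
scores = model.predict(interactions)     # one score per user-item pair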

Module contents

Module description: