elliot.recommender.gan.CFGAN package

Submodules

elliot.recommender.gan.CFGAN.cfgan module

Module description:

class elliot.recommender.gan.CFGAN.cfgan.CFGAN(data, config, params, *args, **kwargs)[source]

Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel

CFGAN: A Generic Collaborative Filtering Framework based on Generative Adversarial Networks

For further details, please refer to the paper (Chae et al., CIKM 2018).

Parameters
  • factors – Number of latent factors

  • lr – Learning rate

  • l_w – Regularization coefficient

  • l_b – Regularization coefficient of bias

  • l_gan – Adversarial regularization coefficient

  • g_epochs – Number of epochs to train the generator for each CFGAN step

  • d_epochs – Number of epochs to train the discriminator for each CFGAN step

  • s_zr – Sampling parameter of zero-reconstruction

  • s_pm – Sampling parameter of partial-masking; both sampling schemes are illustrated in the sketch after this list
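
Zero-reconstruction and partial-masking both draw random subsets of a user's unobserved items: the s_zr subset has its generated scores regularized toward zero, while the s_pm subset is added to the purchase mask shown to the discriminator. A minimal NumPy sketch of the sampling step, assuming s_zr and s_pm are fractions of the negative items (the helper name and return convention are illustrative, not the Elliot implementation):

import numpy as np

def sample_zr_pm(c_u, s_zr, s_pm, rng=None):
    # c_u: binary interaction vector for one user (1 = observed item).
    # Returns two binary vectors over the unobserved items:
    #   zr - items whose generated scores are pushed toward zero,
    #   pm - items added to the mask the discriminator conditions on.
    rng = rng or np.random.default_rng()
    negatives = np.flatnonzero(c_u == 0)
    zr = np.zeros_like(c_u)
    pm = np.zeros_like(c_u)
    zr[rng.choice(negatives, size=int(round(s_zr * negatives.size)), replace=False)] = 1
    pm[rng.choice(negatives, size=int(round(s_pm * negatives.size)), replace=False)] = 1
    return zr, pm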

To include the recommendation model, add it to the config file adopting the following pattern:

models:
  CFGAN:
    meta:
      save_recs: True
    epochs: 10
    batch_size: 512
    factors: 10
    lr: 0.001
    l_w: 0.1
    l_b: 0.001
    l_gan: 0.001
    g_epochs: 5
    d_epochs: 1
    s_zr: 0.001
    s_pm: 0.001
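
Such a configuration file is then passed to Elliot's experiment runner; a minimal sketch (the file path below is a placeholder):

from elliot.run import run_experiment

# Runs the whole pipeline (data splitting, CFGAN training, evaluation)
# as described by the YAML configuration above.
run_experiment("config_files/cfgan_experiment.yml")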
get_recommendations(k: int = 100)[source]
property name
train()[source]

elliot.recommender.gan.CFGAN.cfgan_model module

Module description:

class elliot.recommender.gan.CFGAN.cfgan_model.CFGAN_model(*args, **kwargs)[source]

Bases: tensorflow.python.keras.engine.training.Model

get_config()[source]

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns

Python dictionary.
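
For example, the returned dictionary can be used to rebuild an untrained copy of the model; a sketch that assumes get_config returns the constructor keyword arguments (not guaranteed by this page):

# `model` is an existing CFGAN_model instance.
config = model.get_config()
fresh_model = CFGAN_model(**config)  # same configuration, freshly initialized weights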

get_top_k(predictions, train_mask, k=100)[source]
predict(start, stop, **kwargs)[source]

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for performance on large-scale inputs. For small numbers of inputs that fit in one batch, directly using __call__ is recommended for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. Note also that test loss is not affected by regularization layers like noise and dropout.

Parameters
  • x – Input samples. It could be:

    • A NumPy array (or array-like), or a list of arrays (in case the model has multiple inputs).

    • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

    • A tf.data dataset.

    • A generator or keras.utils.Sequence instance.

    A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

  • batch_size – Integer or None. Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).

  • verbose – Verbosity mode, 0 or 1.

  • steps – Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict will run until the input dataset is exhausted.

  • callbacks – List of keras.callbacks.Callback instances to apply during prediction. See tf.keras.callbacks.

  • max_queue_size – Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

  • workers – Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread.

  • use_multiprocessing – Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can’t be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns

Numpy array(s) of predictions.

Raises
  • RuntimeError – If model.predict is wrapped in tf.function.

  • ValueError – In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
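
Note that the signature shown above, predict(start, stop, **kwargs), overrides the generic Keras interface described in this docstring. A hedged usage sketch, assuming start and stop index a contiguous block of users and the result is a user-by-item score matrix (both assumptions, not stated on this page):

# `model` is a trained CFGAN_model; `train_mask` marks items eligible for recommendation.
scores = model.predict(start=0, stop=512)            # assumed shape: (512, num_items)
top_k = model.get_top_k(scores, train_mask, k=100)   # top-100 items per user in the block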

train_step(batch)[source]

The logic for one training step.

This method can be overridden to support custom training logic. This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings) should be left to Model.make_train_function, which can also be overridden.

Parameters

batch – A nested structure of `Tensor`s.

Returns

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.
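
The structure described above is the standard Keras custom-training-step pattern; a generic sketch (not the CFGAN losses, just forward pass, loss calculation, gradient update, and metric update):

import tensorflow as tf

class ExampleModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(1)

    def call(self, x):
        return self.dense(x)

    def train_step(self, data):
        x, y = data                                    # unpack one batch
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)            # forward pass
            loss = self.compiled_loss(y, y_pred)       # loss calculation
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)  # metric updates
        return {m.name: m.result() for m in self.metrics}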

class elliot.recommender.gan.CFGAN.cfgan_model.Discriminator(*args, **kwargs)[source]

Bases: tensorflow.python.keras.engine.training.Model

discriminate_fake_data(X)[source]
train_step(batch)[source]

The logic for one training step.

This method can be overridden to support custom training logic. This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings) should be left to Model.make_train_function, which can also be overridden.

Parameters

batch – A nested structure of `Tensor`s.

Returns

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

class elliot.recommender.gan.CFGAN.cfgan_model.Generator(*args, **kwargs)[source]

Bases: tensorflow.python.keras.engine.training.Model

generate_fake_data(mask, C_u)[source]
infer(C_u)[source]
train_step(batch)[source]

The logic for one training step.

This method can be overridden to support custom training logic. This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings) should be left to Model.make_train_function, which can also be overridden.

Parameters

batch – A nested structure of `Tensor`s.

Returns

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

Module contents