Welcome to Elliot’s documentation!¶
Elliot is a comprehensive recommendation framework that analyzes the recommendation problem from the researcher’s perspective. It conducts a whole experiment, from dataset loading to results gathering. The core idea is to feed the system with a simple and straightforward configuration file that drives the framework through the experimental setting choices. Elliot untangles the complexity of combining splitting strategies, hyperparameter model optimization, model training, and the generation of reports of the experimental results.

[Figure: system schema]
The framework loads, filters, and splits the data considering a vast set of strategies (splitting methods and filtering approaches, from temporal training-test splitting to nested K-folds Cross-Validation). Elliot optimizes hyperparameters for several recommendation algorithms, selects the best models, compares them with the baselines providing intra-model statistics, computes metrics spanning from accuracy to beyond-accuracy, bias, and fairness, and conducts statistical analysis (Wilcoxon and Paired t-test).
Elliot aims to keep the entire experiment reproducible and put the user in control of the framework.
Introduction¶
For all the details about Elliot, please refer to the paper and cite [Elliot]
- Elliot
Vito Walter Anelli and Alejandro Bellogín and Antonio Ferrara and Daniele Malitesta and Felice Antonio Merra and Claudio Pomo and Francesco Maria Donini and Tommaso Di Noia. 2021. Elliot: a Comprehensive and Rigorous Framework for Reproducible Recommender Systems Evaluation. Under review. arXiv:2103.02590 [cs.IR].
Install Elliot¶
Elliot works with the following operating systems:
Linux
Windows 10
macOS X
Elliot requires Python version 3.6 or later.
Elliot requires TensorFlow version 2.3.2 or later. If you want to use Elliot with GPU support, please ensure that the CUDA (cudatoolkit) version is 10.1 or later and that cuDNN 7.6 or later is installed, together with a compatible NVIDIA driver (for Linux and Windows 10).
Please refer to this document for further working configurations.
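For instance (versions here are indicative and depend on your driver; check the TensorFlow compatibility matrix for your setup), a conda-based GPU environment could be created as follows:
conda create --name elliot_env python=3.8 cudatoolkit=10.1 cudnn=7.6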
Install from source¶
CONDA¶
git clone https://github.com/sisinflab/elliot.git && cd elliot
conda create --name elliot_env python=3.8
conda activate elliot_env
pip install --upgrade pip
pip install -e . --verbose
VIRTUALENV¶
git clone https://github.com/sisinflab/elliot.git && cd elliot
virtualenv -p /usr/bin/python3.6 venv # your python location and version
source venv/bin/activate
pip install --upgrade pip
pip install -e . --verbose
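As a quick smoke test of the installation (assuming the previous steps completed without errors), verify that the entry point can be imported:
python -c "from elliot.run import run_experiment; print('Elliot is ready')"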
Quick Start¶
Hello World Configuration¶
Elliot’s entry point is the function run_experiment, which accepts a configuration file that drives the whole experiment. In the following, a sample configuration file is shown to demonstrate how a simple and explicit structure can generate a rigorous experiment.
from elliot.run import run_experiment
run_experiment("configuration/file/path")
The following file is a simple configuration for an experimental setup. It contains all the instructions to load the MovieLens-1M dataset from a specific path and perform a random train/test split with a test ratio of 20%.
This experiment provides hyperparameter optimization with a grid-search strategy for an Item-KNN model. Note that the possible values of neighbors are enclosed in square brackets: two models, each equipped with a different number of neighbors, will be trained and compared to select the best configuration. Moreover, this configuration instructs Elliot to save the recommendation lists with at most 10 items per user, as specified by the top_k property.
In this basic experiment, only a single simple metric is considered in the final evaluation study. The candidate metric is nDCG, computed with a cutoff equal to top_k since no explicit cutoff is provided.
experiment:
  dataset: movielens_1m
  data_config:
    strategy: dataset
    dataset_path: ../data/movielens_1m/dataset.tsv
  splitting:
    test_splitting:
      strategy: random_subsampling
      test_ratio: 0.2
  models:
    ItemKNN:
      meta:
        hyper_opt_alg: grid
        save_recs: True
      neighbors: [50, 100]
      similarity: cosine
  evaluation:
    simple_metrics: [nDCG]
  top_k: 10
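Assuming the configuration above is saved, for example, as config_files/hello_world.yml (the file name and location are arbitrary), the whole experiment then runs with the two lines shown earlier:

from elliot.run import run_experiment
run_experiment("config_files/hello_world.yml")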
Basic Configuration¶
In the first scenario, the experiments require comparing a group of RSs whose parameters are optimized via a grid-search.
The configuration specifies the data loading information, i.e., semantic features source files, in addition to the filtering and splitting strategies.
In particular, the latter supplies an entirely automated way of preprocessing the dataset, which is often a time-consuming and hard-to-reproduce phase.
The simple_metrics field allows computing accuracy and beyond-accuracy metrics, with two top-k cut-off values (5 and 10), by merely inserting the list of desired measures, e.g., [Precision, nDCG, …]. The knowledge-aware recommendation model AttributeItemKNN is compared against two baselines, Random and ItemKNN, along with a user-implemented external model, external.MostPop.
The configuration makes use of Elliot’s ability to conduct a grid search-based hyperparameter optimization strategy by merely passing a list of possible hyperparameter values, e.g., neighbors: [50, 70, 100].
The reported models are selected according to nDCG@10.
To see the full configuration file, please visit the following link_basic.
To run the experiment, use the following script_basic.
Advanced Configuration¶
The second scenario depicts a more complex experimental setting. In the configuration, the user specifies an elaborate data splitting strategy, i.e., random_subsampling (for test splitting) and random_cross_validation (for model selection), by setting a few splitting configuration fields.
The configuration does not provide a cut-off value, and thus the top_k field value of 50 is assumed as the cut-off.
Moreover, the evaluation section includes the UserMADrating metric. Elliot considers it a complex metric since it requires additional arguments.
The user also wants to implement a more advanced hyperparameter tuning optimization. For instance, regarding NeuMF, Bayesian optimization using Tree of Parzen Estimators is required (i.e., hyper_opt_alg: tpe) with a logarithmic uniform sampling for the learning rate search space.
Moreover, Elliot allows considering complex neural architecture search spaces by inserting lists of tuples. For instance, (32, 16, 8) indicates that the neural network consists of three hidden layers with 32, 16, and 8 units, respectively.
To see the full configuration file, please visit the following link_advanced.
To run the experiment, use the following script_advanced.
Configuration file¶
Input Data Configuration¶
The first key component of the config file is the data_config section.
experiment:
  data_config:
    strategy: dataset|fixed|hierarchy
    dataloader: KnowledgeChainsLoader|DataSetLoader
    dataset_path: this/is/the/path.tsv
    root_folder: this/is/the/path
    train_path: this/is/the/path.tsv
    validation_path: this/is/the/path.tsv
    test_path: this/is/the/path.tsv
    side_information:
      feature_data: this/is/the/path.tsv
      map: this/is/the/path.tsv
      features: this/is/the/path.tsv
      properties: this/is/the/path.conf
In this section, we can define which input files to use and how they should be loaded.
In the following, we will consider as datasets tab-separated-value files that contain one interaction per row, in the format:

UserID  ItemID  Rating  [TimeStamp]

where TimeStamp is optional.
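To make the expected format concrete, here is a minimal sketch (pandas is assumed to be installed; the path and column names are illustrative, not fixed by Elliot) that loads such a file:

import pandas as pd

# One interaction per row: user, item, rating, and an optional timestamp.
columns = ["UserID", "ItemID", "Rating", "TimeStamp"]
dataset = pd.read_csv("data/movielens_1m/dataset.tsv", sep="\t", header=None, names=columns)
print(dataset.head())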
Strategies¶
According to the kind of data we have, we can choose among three different loading strategies: dataset
, fixed
, hierarchy
.
dataset
assumes that the input data is NOT previously split in training, validation, and test set.
For this reason, ONLY if we adopt a dataset strategy we can later perform prefiltering and splitting operations.
dataset
takes just ONE default parameter: dataset_path
, which points to the stored dataset.
experiment:
  data_config:
    strategy: dataset
    dataset_path: this/is/the/path.tsv
The fixed strategy assumes that our data has been previously split into training/validation/test sets or training/test sets.
Since the data is supposed to be already split, no further prefiltering and splitting operation is contemplated.
fixed takes two mandatory parameters, train_path and test_path, and one optional parameter, validation_path.
experiment:
  data_config:
    strategy: fixed
    train_path: this/is/the/path.tsv
    validation_path: this/is/the/path.tsv
    test_path: this/is/the/path.tsv
The last strategy is hierarchy.
hierarchy is designed to load a dataset that has been previously split and filtered with Elliot.
Here, the data is assumed to be already split, and no further prefiltering and splitting operations are needed.
hierarchy takes one mandatory parameter, root_folder, which points to the folder where we previously stored the split files.
experiment:
  data_config:
    strategy: hierarchy
    root_folder: this/is/the/path
Data Loaders¶
Within the data_config section, we can also enable data-specific Data Loaders.
Each Data Loader is designed to handle a specific kind of additional data.
It is possible to enable a Data Loader by inserting the dataloader field and passing the corresponding name.
For instance, the Visual Data Loader lets the user consider precomputed visual feature vectors or (inclusive) images.
To pass the required parameters to the Data Loader, we use a specific subsection named side_information.
There we can enable the fields required by the specific Data Loader and insert the corresponding values.
An example can be:
experiment:
  data_config:
    strategy: fixed
    dataloader: VisualLoader
    train_path: this/is/the/path.tsv
    test_path: this/is/the/path.tsv
    side_information:
      feature_data: this/is/the/path/to/features.npy
For further details regarding the Data Loaders, please refer to the corresponding section.
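Although the exact layout is loader-specific, a precomputed visual feature file such as features.npy is typically a dense matrix with one row per item. A hedged sketch of producing such a file (shapes and values are purely illustrative):

import numpy as np

# Hypothetical: 1000 items, each with a 2048-dimensional visual feature vector
# (e.g., extracted from a pretrained CNN).
features = np.random.rand(1000, 2048).astype(np.float32)
np.save("features.npy", features)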
Data Prefiltering¶
Elliot provides several prefiltering strategies. To enable Prefiltering operations, we can insert the corresponding block into our config file:
experiment:
  prefiltering:
    strategy: global_threshold|user_average|user_k_core|item_k_core|iterative_k_core|n_rounds_k_core|cold_users
    threshold: 3|average
    core: 5
    rounds: 2
In detail, Elliot provides seven main prefiltering approaches: global_threshold, user_average, user_k_core, item_k_core, iterative_k_core, n_rounds_k_core, and cold_users.

global_threshold assumes a single system-wide threshold to filter out irrelevant transactions.
global_threshold takes one mandatory parameter, threshold.
threshold takes, as values, a float (ratings >= threshold will be kept) or the string average. With average, the system computes the global mean of the rating values and filters out all the ratings below it.
experiment:
  prefiltering:
    strategy: global_threshold
    threshold: 3

experiment:
  prefiltering:
    strategy: global_threshold
    threshold: average
user_average has no parameters: the system filters out, for each user, the ratings below that user's mean rating value.
experiment:
  prefiltering:
    strategy: user_average
user_k_core filters out all the users with a number of transactions lower than the given k core.
It takes a parameter, core, where the user passes an int corresponding to the desired value.
experiment:
  prefiltering:
    strategy: user_k_core
    core: 5
item_k_core filters out all the items with a number of transactions lower than the given k core.
It takes a parameter, core, where the user passes an int corresponding to the desired value.
experiment:
  prefiltering:
    strategy: item_k_core
    core: 5
iterative_k_core runs user_k_core and item_k_core iteratively until the dataset is no longer modified.
It takes a parameter, core, where the user passes an int corresponding to the desired value.
experiment:
  prefiltering:
    strategy: iterative_k_core
    core: 5
n_rounds_k_core runs user_k_core and item_k_core iteratively for a specified number of rounds.
It takes two parameters, core and rounds, where the user passes ints corresponding to the desired values.
experiment:
  prefiltering:
    strategy: n_rounds_k_core
    core: 5
    rounds: 2
cold_users filters out all the users with a number of interactions higher than a given threshold, i.e., it retains cold users only.
It takes a parameter, threshold, where the user passes an int corresponding to the desired value.
experiment:
  prefiltering:
    strategy: cold_users
    threshold: 3
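For intuition about what these filters compute, the following pandas sketch (an illustration of the semantics, not Elliot's internal implementation) reproduces global_threshold and user_k_core on a dataframe with UserID, ItemID, and Rating columns:

import pandas as pd

def global_threshold(df: pd.DataFrame, threshold) -> pd.DataFrame:
    # With "average", the global mean rating becomes the threshold.
    t = df["Rating"].mean() if threshold == "average" else threshold
    return df[df["Rating"] >= t]

def user_k_core(df: pd.DataFrame, core: int) -> pd.DataFrame:
    # Keep only the users with at least `core` transactions.
    counts = df.groupby("UserID")["ItemID"].transform("size")
    return df[counts >= core]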
Data Splitting¶
Elliot provides several splitting strategies. To enable the splitting operations, we can insert the corresponding section:
experiment:
  splitting:
    save_on_disk: True
    save_folder: this/is/the/path/
    test_splitting:
      strategy: fixed_timestamp|temporal_hold_out|random_subsampling|random_cross_validation
      timestamp: best|1609786061
      test_ratio: 0.2
      leave_n_out: 1
      folds: 5
    validation_splitting:
      strategy: fixed_timestamp|temporal_hold_out|random_subsampling|random_cross_validation
      timestamp: best|1609786061
      test_ratio: 0.2
      leave_n_out: 1
      folds: 5
Before delving into the splitting configurations, note that we can configure Elliot to save the split files on disk once the splitting operation is completed.
To this end, we can insert two fields into the section: save_on_disk and save_folder.
save_on_disk enables the writing process, and save_folder specifies the system location where to save the split files:
experiment:
  splitting:
    save_on_disk: True
    save_folder: this/is/the/path/
Now, we can insert one (or two) specific subsections to detail the train/test and the train/validation splitting via the corresponding fields: test_splitting and validation_splitting.
test_splitting is mandatory, while validation_splitting is optional.
Since the two subsections follow the same guidelines, here we detail test_splitting without loss of generality.

Elliot enables four splitting families: fixed_timestamp, temporal_hold_out, random_subsampling, and random_cross_validation.
fixed_timestamp assumes that a specific timestamp separates prior interactions (train) from future interactions (test).
It takes the parameter timestamp, which can assume one of two possible kinds of values: a long corresponding to a specific timestamp, or the string best, computed following Anelli et al.
experiment:
  splitting:
    test_splitting:
      strategy: fixed_timestamp
      timestamp: 1609786061

experiment:
  splitting:
    test_splitting:
      strategy: fixed_timestamp
      timestamp: best
temporal_hold_out relies on a temporal split of user transactions. The split can be realized following two different approaches: ratio-based or leave-n-out-based.
If we enable the test_ratio field with a float value, Elliot splits the data retaining the last (100 * test_ratio)% of the user transactions for the test set.
If we enable the leave_n_out field with an int value, Elliot retains the last leave_n_out transactions for the test set.
experiment:
  splitting:
    test_splitting:
      strategy: temporal_hold_out
      test_ratio: 0.2

experiment:
  splitting:
    test_splitting:
      strategy: temporal_hold_out
      leave_n_out: 1
random_subsampling generalizes the random hold-out strategy.
It takes a test_ratio parameter with a float value to define the train/test ratio for user-based hold-out splitting.
Alternatively, it can take leave_n_out with an int value to define the number of transactions retained for the test set.
Moreover, the splitting operation can be repeated by enabling the folds field and passing an int.
In that case, the overall splitting strategy corresponds to a user-based random subsampling strategy.
experiment:
  splitting:
    test_splitting:
      strategy: random_subsampling
      test_ratio: 0.2

experiment:
  splitting:
    test_splitting:
      strategy: random_subsampling
      test_ratio: 0.2
      folds: 5

experiment:
  splitting:
    test_splitting:
      strategy: random_subsampling
      leave_n_out: 1
      folds: 5
random_cross_validation adopts a k-fold cross-validation splitting strategy.
It takes the parameter folds with an int value that defines the overall number of folds to consider.
experiment:
  splitting:
    test_splitting:
      strategy: random_cross_validation
      folds: 5
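For intuition, a ratio-based temporal_hold_out keeps, for each user, the most recent share of transactions for the test set. A rough pandas equivalent (illustrative only, not Elliot's code):

import pandas as pd

def temporal_hold_out(df: pd.DataFrame, test_ratio: float = 0.2):
    df = df.sort_values(["UserID", "TimeStamp"])
    sizes = df.groupby("UserID")["ItemID"].transform("size")
    # Position of each row counted back from the end of the user's history.
    from_end = df.groupby("UserID").cumcount(ascending=False)
    test_mask = from_end < sizes * test_ratio
    return df[~test_mask], df[test_mask]  # train, test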
Dataset Name Configuration¶
Elliot needs a MANDATORY field, dataset, that identifies the name of the dataset used for the experiment. This information is used in the majority of the experimental steps to identify the experiment and save the files correctly:
experiment:
  dataset: dataset_name
Output Configuration¶
Elliot lets the user specify where to store specific output files: the recommendation lists, the model weights, the evaluation results, and the logs:
experiment:
  path_output_rec_result: this/is/the/path/
  path_output_rec_weight: this/is/the/path/
  path_output_rec_performance: this/is/the/path/
  path_log_folder: this/is/the/path/
path_output_rec_result lets the user define the path to the folder to store the recommendation lists.
path_output_rec_weight lets the user define the path to the folder to store the model weights.
path_output_rec_performance lets the user define the path to the folder to store the evaluation results.
path_log_folder lets the user define the path to the folder to store the logs.
If not provided, Elliot creates a results folder in the parent folder of the config file location.
Inside it, Elliot creates an experiment-specific folder with the name of the dataset, and there it creates the recs/, weights/, and performance/ folders, respectively.
Moreover, Elliot creates a log/ folder in the parent folder of the config file location.
Evaluation Configuration¶
Elliot provides several facilities to evaluate recommendation systems. The majority of the evaluation techniques require the computation of user-specific recommendation lists (some techniques use recommendation systems to perform knowledge completion or other tasks).
To define the length of the user recommendation lists, Elliot provides a specific mandatory field, top_k, that takes an int representing the list length.
Beyond this general definition, to specify the evaluation configuration, we can insert a specific section:
experiment:
  top_k: 50
  evaluation:
    cutoffs: [10, 5]
    simple_metrics: [nDCG, Precision, Recall]
    relevance_threshold: 1
    paired_ttest: True
    wilcoxon_test: True
    complex_metrics:
    - metric: DSC
      beta: 2
    - metric: SRecall
      feature_data: this/is/the/path.tsv
In that section, we can detail the main characteristics of our experimental benchmark.
In particular, we can provide Elliot with the information regarding the metrics we want to compute. According to the metrics definition, some of them might require additional parameters or files. To make it easier for the user to pass metrics and optional arguments, Elliot partitions the metrics into simple_metrics and complex_metrics.
simple_metrics can be inserted as a field into the evaluation section, and it takes as a value the list of the metrics we want to compute. In the simple metrics set, we find all the metrics that DO NOT require any other additional parameter or file:
experiment:
  top_k: 50
  evaluation:
    cutoffs: [10, 5]
    simple_metrics: [nDCG, Precision, Recall]
    relevance_threshold: 1
The majority of the evaluation metrics rely on the notions of cut-off and relevance threshold.
The cut-off is the maximum length of the recommendation list considered when computing the metric (it can differ from the top k).
To pass cut-off values, we can enable the cutoffs field and pass a single value or a list of values; Elliot will compute the evaluation results for each considered cut-off.
If the cutoffs field is not provided, the top_k value is assumed as the cut-off.
The relevance threshold is the minimum rating value for a test transaction to be considered relevant in the evaluation process.
We can pass this value to the corresponding relevance_threshold field.
If not given, relevance_threshold is set to 0.
The set of metrics that require additional arguments is referred to as complex_metrics.
The inclusion of the metrics follows the syntax:
experiment:
  evaluation:
    complex_metrics:
    - metric: complex_metric_name_0
      parameter_0: 2
    - metric: complex_metric_name_1
      parameter_1: this/is/the/path.tsv
where parameter_0 and parameter_1 are metric-specific parameters of any kind.
For further details about the available metrics, please see the corresponding section.
Finally, Elliot enables the computation of paired statistical hypothesis tests, namely, Wilcoxon, and Student’s paired t-tests.
To enable them, we can insert the corresponding boolean fields into the evaluation section:
experiment:
  evaluation:
    paired_ttest: True
    wilcoxon_test: True
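For reference, the two tests correspond to the paired t-test and the Wilcoxon signed-rank test applied to per-user scores of two models on the same users; a SciPy sketch (the score arrays are made up, purely illustrative):

import numpy as np
from scipy import stats

# Hypothetical per-user nDCG scores of two models over the same five users.
model_a = np.array([0.31, 0.42, 0.28, 0.55, 0.47])
model_b = np.array([0.29, 0.45, 0.30, 0.50, 0.52])
print(stats.ttest_rel(model_a, model_b))  # paired t-test
print(stats.wilcoxon(model_a, model_b))   # Wilcoxon signed-rank test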
All the evaluation results are available in the performance folder at the end of the experiment.
Print evaluation results as triples¶
It is common in the Recommender Systems community to generate evaluation tables in the format [method, metric, value].
This format makes it easy to compute custom pivot tables on the data, thus enabling several complex analyses. To obtain additional evaluation summaries in this format, insert the following field:
experiment:
  print_results_as_triplets: True
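As an illustration of why the triples format is convenient (the file path here is hypothetical), a pandas pivot table turns the triples into a model-by-metric matrix:

import pandas as pd

triples = pd.read_csv("triples.tsv", sep="\t", names=["method", "metric", "value"])
print(triples.pivot_table(index="method", columns="metric", values="value"))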
Test the config file¶
Since an experiment may take a long time, an error in the configuration of the last model can lead to a severe waste of time. To avoid common mistakes in config file creation, Elliot provides a specific field that tests the configuration file before the actual run of the experiment. The feature can be activated as follows:
experiment:
  config_test: True
GPU Acceleration¶
Elliot lets the user enable GPU acceleration with TensorFlow. To select the GPU on which to run the experiments, use the following syntax:
experiment:
  gpu: 1
If a negative value is passed, or the field is missing, the computation will take place on the CPU.
Please note that configuring TensorFlow to work with GPUs is not covered by this guide. Please refer to the TensorFlow documentation for that.
Recommendation Model Configuration¶
To include the recommendation models, Elliot provides a straightforward syntax.
First, we can create a new section in the experiment, named models:
experiment:
  models:
Then, we can insert a list of recommendation models in which each model respects the following syntax:
experiment:
  models:
    model_0:
      meta:
        meta_parameter_0: something
      model_parameter_0: something
      model_parameter_1: something
      model_parameter_2: something
    model_1:
      meta:
        meta_parameter_0: something
      model_parameter_0: something
      model_parameter_1: something
      model_parameter_2: something
meta is a mandatory field that lets the user define parameters shared by all recommendation models, though each model can handle them differently.
The decision to save model weights and recommendations, the choice of the validation metric and cut-off, the chosen hyperparameter tuning strategy, the verbosity, and the frequency of evaluation during training all belong to this category.
In detail, use:

verbose: boolean field to enable verbose logs
save_recs: boolean field to enable recommendation lists storage
save_weights: boolean field to enable model weights storage
validation_metric: mixed field (string @ int) to define the simple metric and the cut-off used for model selection; if not provided, it defaults to the first provided simple metric and the first cut-off
validation_rate: int field; where applicable, it defines the iteration interval for the validation and test evaluation
hyper_opt_alg: string field; it defines the hyperparameter tuning strategy
hyper_max_evals: int field; where applicable, it defines the number of samples to consider for hyperparameter evaluation
To fully understand how to conduct hyperparameter optimization in Elliot, please refer to the corresponding section.
Finally, model_parameter_0, model_parameter_1, and model_parameter_2 represent the model-specific parameters.
For further details on model-specific parameters see the corresponding section.
Example:
experiment:
  models:
    KaHFMEmbeddings:
      meta:
        hyper_max_evals: 20
        hyper_opt_alg: tpe
        validation_rate: 1
        verbose: True
        save_weights: True
        save_recs: True
        validation_metric: nDCG@10
      epochs: 100
      batch_size: -1
      lr: 0.0001
      l_w: 0.005
      l_b: 0
Data Preparation¶
Loading Data¶
RSs experiments could require different data sources, such as user-item feedback or additional side information, e.g., the visual features of item images. To fulfill these requirements, Elliot comes with different implementations of the Loading module. Additionally, the user can design computationally expensive prefiltering and splitting procedures whose outcomes can be stored and loaded to save future computation. Data-driven extensions can handle additional data like visual features, and semantic features extracted from knowledge graphs. Once a side-information-aware Loading module is chosen, it filters out the items devoid of the required information to grant a fair comparison.
Data Loaders¶
Work in progress
Prefiltering Data¶
After data loading, Elliot provides data filtering operations through two possible strategies. The first strategy implemented in the Prefiltering module is Filter-by-rating, which drops a user-item interaction if the preference score is smaller than a given threshold. The threshold can be (i) a numerical value, e.g., 3.5, (ii) a distributional detail, e.g., the global rating average value, or (iii) a user-based distributional (User Dist.) value, e.g., the user's average rating value. The second prefiltering strategy, k-core, filters out users, items, or both, with less than k recorded interactions. The k-core strategy can proceed iteratively (Iterative k-core) on both users and items until the k-core filtering condition is met, i.e., all the users and items have at least k recorded interactions. Since reaching such a condition might be intractable, Elliot allows specifying the maximum number of iterations (Iter-n-rounds). Finally, the Cold-Users filtering feature allows retaining cold users only.
Splitting Data¶
If needed, the data is served to the Splitting module. In detail, Elliot provides (i) Temporal, (ii) Random, and (iii) Fix strategies. The Temporal strategy splits the user-item interactions based on the transaction timestamp, i.e., fixing the timestamp, finding the optimal one, or adopting a hold-out (HO) mechanism. The Random strategy includes hold-out (HO), K-repeated hold-out (K-HO), and cross-validation (CV). Finally, the Fix strategy exploits a precomputed splitting.
Summary of the Recommendation Algorithms¶
Elliot integrates, to date, 50 recommendation models partitioned into two sets. The first set includes 38 popular models implemented in at least two of the frameworks reviewed in this work (i.e., adopting a framework-wise popularity notion).
All the recommendation models inherit from a common abstract class, and the majority of the recommendation models use a Mixin.
Adversarial Learning
- Adversarial Matrix Factorization
- Adversarial Multimedia Recommender

Algebric
- Slope One Predictors for Online Rating-Based Collaborative Filtering

Autoencoders
- Variational Autoencoders for Collaborative Filtering (MultiDAE)
- Variational Autoencoders for Collaborative Filtering (MultiVAE)

Content-Based
- Vector Space Model

Generative Adversarial Networks (GANs)
- IRGAN: A Minimax Game for Unifying Generative and Discriminative Information Retrieval Models
- CFGAN: A Generic Collaborative Filtering Framework based on Generative Adversarial Networks

Graph-based
- LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation
- Neural Graph Collaborative Filtering

Knowledge-aware
- Knowledge-aware Hybrid Factorization Machines
- Knowledge-aware Hybrid Factorization Machines (Tensorflow Batch Variant)
- Knowledge-aware Hybrid Factorization Machines (Tensorflow Embedding-based Variant)

Latent Factor Models
- Bayesian Personalized Ranking with Matrix Factorization
- Batch Bayesian Personalized Ranking with Matrix Factorization
- BPR Sparse Linear Methods
- Collaborative Metric Learning
- Field-aware Factorization Machines
- FISM: Factored Item Similarity Models
- Factorization Machines
- Logistic Matrix Factorization
- Matrix Factorization
- Non-Negative Matrix Factorization
- Probabilistic Matrix Factorization
- Sparse Linear Methods
- SVD++
- Weighted XXX Matrix Factorization

Artificial Neural Networks
- Convolutional Matrix Factorization for Document Context-Aware Recommendation
- Outer Product-based Neural Collaborative Filtering
- DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
- Deep Matrix Factorization Models for Recommender Systems
- Neural Collaborative Filtering
- AutoRec: Autoencoders Meet Collaborative Filtering (Item-based)
- NAIS: Neural Attentive Item Similarity Model for Recommendation
- Neural Collaborative Filtering
- Neural Factorization Machines for Sparse Predictive Analytics
- Neural Personalized Ranking for Image Recommendation (Model without visual features)
- AutoRec: Autoencoders Meet Collaborative Filtering (User-based)
- Wide & Deep Learning for Recommender Systems

Neighborhood-based Models
- Amazon.com recommendations: item-to-item collaborative filtering
- GroupLens: An Open Architecture for Collaborative Filtering of Netnews
- Attribute Item-kNN proposed in MyMediaLite Recommender System Library
- Attribute User-kNN proposed in MyMediaLite Recommender System Library

Unpersonalized Recommenders

Visual Models
- Attentive Collaborative Filtering: Multimedia Recommendation with Item- and Component-Level Attention
- DeepStyle: Learning User Preferences for Visual Recommendation
- Visually-Aware Fashion Recommendation and Design with Generative Image Models
- VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback
- Visual Neural Personalized Ranking for Image Recommendation
Hyperparameter Optimization¶
Elliot provides hyperparameter tuning optimization integrating the functionalities of the HyperOpt library and extending it with exhaustive grid search.
Before continuing, let us recall how to include a recommendation system into an experiment:
experiment:
  models:
    PMF:
      meta:
        hyper_max_evals: 20
        hyper_opt_alg: tpe
        validation_rate: 1
        verbose: True
        save_weights: True
        save_recs: True
        validation_metric: nDCG@10
      lr: 0.0025
      epochs: 2
      factors: 50
      batch_size: 512
      reg: 0.0025
      reg_b: 0
      gaussian_variance: 0.1
As we can observe, the meta section contains two fields related to hyperparameter optimization: hyper_max_evals and hyper_opt_alg.

hyper_opt_alg is a string field that defines the hyperparameter tuning strategy. It can assume one of the following values: grid, tpe, atpe, rand, mix, and anneal.
grid corresponds to exhaustive grid search
tpe stands for Tree of Parzen Estimators, a type of Bayesian Optimization, see the paper
atpe stands for Adaptive Tree of Parzen Estimators
rand stands for random sampling in the search space
mix stands for mixture of search algorithms
anneal stands for simulated annealing
hyper_max_evals is an int field that, where applicable (all strategies but grid), defines the number of samples to consider for hyperparameter evaluation.
Once we choose the search strategy, we need to define the search space. To this end, Elliot provides two alternatives: a value list, and a function-parameters pair.
In the former case, we just need to provide a list of values to the parameter we want to optimize:
experiment:
  models:
    PMF:
      meta:
        hyper_max_evals: 20
        hyper_opt_alg: tpe
      lr: 0.0025
      epochs: 2
      factors: 50
      batch_size: 512
      reg: [0.0025, 0.005, 0.01]
      reg_b: 0
      gaussian_variance: 0.1
In the latter case, we can choose among the search space functions provided by HyperOpt: choice, randint, uniform, quniform, loguniform, qloguniform, normal, qnormal, lognormal, and qlognormal. Each function and its parameters are documented in the HyperOpt documentation, in the section Parameter Expressions.
Note that the label argument is internal, and we DO NOT have to provide it.
Instructing Elliot to sample from any of these search spaces is straightforward: we pass to the parameter a list in which the first element is the function name, and the others are the parameter values.
An example of the syntax to define a search with loguniform for the learning rate parameter (lr) is:
experiment:
  models:
    PMF:
      meta:
        hyper_max_evals: 20
        hyper_opt_alg: tpe
      lr: [loguniform, -10, -1]
      epochs: 2
      factors: 50
      batch_size: 512
      reg: [0.0025, 0.005, 0.01]
      reg_b: 0
      gaussian_variance: 0.1
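For intuition, loguniform with bounds -10 and -1 draws values whose natural logarithm is uniform in [-10, -1], i.e., lr roughly between e^-10 ≈ 4.5e-5 and e^-1 ≈ 0.37. A standalone HyperOpt sketch of an equivalent search space (illustrative only; Elliot builds this internally):

from hyperopt import fmin, hp, tpe

# Same search space as `lr: [loguniform, -10, -1]` in the config above.
space = hp.loguniform("lr", -10, -1)
# Dummy objective: treat the sampled learning rate itself as the loss.
best = fmin(fn=lambda lr: lr, space=space, algo=tpe.suggest, max_evals=20)
print(best)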
Finally, Elliot provides a shortcut to perform an exhaustive grid search.
We can omit the hyper_opt_alg and hyper_max_evals fields and directly insert the lists of possible values for the parameters to optimize:
experiment:
  models:
    PMF:
      meta:
        validation_rate: 1
        verbose: True
        save_weights: True
        save_recs: True
        validation_metric: nDCG@10
      lr: [0.0025, 0.005, 0.01]
      epochs: 50
      factors: [10, 50, 100]
      batch_size: 512
      reg: [0.0025, 0.005, 0.01]
      reg_b: 0
      gaussian_variance: 0.1
In this case, Elliot recognizes that hyperparameter optimization is needed and automatically performs the grid search.
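For the example above, the grid enumerates every combination of the listed values: 3 learning rates × 3 factor sizes × 3 regularization values = 27 configurations in total.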
Create a new Recommendation Model¶
Work in progress
Recommendation Models¶
Adversarial Learning¶
Elliot integrates, to date, 50 recommendation models partitioned into two sets. The first set includes 38 popular models implemented in at least two of the frameworks reviewed in this work (i.e., adopting a framework-wise popularity notion).
Summary¶
- Adversarial Matrix Factorization (AMF)
- Adversarial Multimedia Recommender (AMR)
AMF¶
class elliot.recommender.adversarial.AMF.AMF.AMF(data, config, params, *args, **kwargs)
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Adversarial Matrix Factorization
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
l_w – Regularization coefficient
l_b – Regularization coefficient of bias
eps – Perturbation Budget
l_adv – Adversarial regularization coefficient
adversarial_epochs – Adversarial epochs
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  AMF:
    meta:
      save_recs: True
    epochs: 10
    factors: 200
    lr: 0.001
    l_w: 0.1
    l_b: 0.001
    eps: 0.1
    l_adv: 0.001
    adversarial_epochs: 10
AMR¶
class elliot.recommender.adversarial.AMR.AMR.AMR(data, config, params, *args, **kwargs)
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Adversarial Multimedia Recommender
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
factors_d – Image-feature dimensionality
lr – Learning rate
l_w – Regularization coefficient
l_b – Regularization coefficient of bias
l_e – Regularization coefficient of image matrix embedding
eps – Perturbation Budget
l_adv – Adversarial regularization coefficient
adversarial_epochs – Adversarial epochs
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  AMR:
    meta:
      save_recs: True
    epochs: 10
    factors: 200
    factors_d: 20
    lr: 0.001
    l_w: 0.1
    l_b: 0.001
    l_e: 0.1
    eps: 0.1
    l_adv: 0.001
    adversarial_epochs: 5
Algebric¶
Elliot integrates, to date, 50 recommendation models partitioned into two sets. The first set includes 38 popular models implemented in at least two of the frameworks reviewed in this work (i.e., adopting a framework-wise popularity notion).
Summary¶
- Slope One Predictors for Online Rating-Based Collaborative Filtering (SlopeOne)
SlopeOne¶
class elliot.recommender.algebric.slope_one.slope_one.SlopeOne(data, config, params, *args, **kwargs)
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Slope One Predictors for Online Rating-Based Collaborative Filtering
For further details, please refer to the paper
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  SlopeOne:
    meta:
      save_recs: True
Autoencoders¶
Elliot integrates, to date, 50 recommendation models partitioned into two sets. The first set includes 38 popular models implemented in at least two of the frameworks reviewed in this work (i.e., adopting a framework-wise popularity notion).
Summary¶
- Variational Autoencoders for Collaborative Filtering (MultiDAE)
- Variational Autoencoders for Collaborative Filtering (MultiVAE)
MultiDAE¶
class elliot.recommender.autoencoders.dae.multi_dae.MultiDAE(data, config, params, *args, **kwargs)
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Variational Autoencoders for Collaborative Filtering
For further details, please refer to the paper
- Parameters
intermediate_dim – Number of intermediate dimension
latent_dim – Number of latent factors
reg_lambda – Regularization coefficient
lr – Learning rate
dropout_pkeep – Dropout probability
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  MultiDAE:
    meta:
      save_recs: True
    epochs: 10
    intermediate_dim: 600
    latent_dim: 200
    reg_lambda: 0.01
    lr: 0.001
    dropout_pkeep: 1
MultiVAE¶
class elliot.recommender.autoencoders.vae.multi_vae.MultiVAE(data, config, params, *args, **kwargs)
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Variational Autoencoders for Collaborative Filtering
For further details, please refer to the paper
- Parameters
intermediate_dim – Number of intermediate dimension
latent_dim – Number of latent factors
reg_lambda – Regularization coefficient
lr – Learning rate
dropout_pkeep – Dropout probability
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  MultiVAE:
    meta:
      save_recs: True
    epochs: 10
    intermediate_dim: 600
    latent_dim: 200
    reg_lambda: 0.01
    lr: 0.001
    dropout_pkeep: 1
Content-Based¶
Elliot integrates, to date, 50 recommendation models partitioned into two sets. The first set includes 38 popular models implemented in at least two of the frameworks reviewed in this work (i.e., adopting a framework-wise popularity notion).
Summary¶
- Vector Space Model (VSM)
VSM¶
class elliot.recommender.content_based.VSM.vector_space_model.VSM(data, config, params, *args, **kwargs)
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Vector Space Model
For further details, please refer to the papers
- Parameters
similarity – Similarity metric
user_profile –
item_profile –
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  VSM:
    meta:
      save_recs: True
    similarity: cosine
    user_profile: binary
    item_profile: binary
Generative Adversarial Networks (GANs)¶
Elliot integrates, to date, 50 recommendation models partitioned into two sets. The first set includes 38 popular models implemented in at least two of the frameworks reviewed in this work (i.e., adopting a framework-wise popularity notion).
Summary¶
- IRGAN: A Minimax Game for Unifying Generative and Discriminative Information Retrieval Models
- CFGAN: A Generic Collaborative Filtering Framework based on Generative Adversarial Networks
IRGAN¶
class elliot.recommender.gan.IRGAN.irgan.IRGAN(data, config, params, *args, **kwargs)
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
IRGAN: A Minimax Game for Unifying Generative and Discriminative Information Retrieval Models
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
l_w – Regularization coefficient
l_b – Regularization coefficient of bias
l_gan – Adversarial regularization coefficient
predict_model – Specification of the model to generate the recommendation (Generator/ Discriminator)
g_epochs – Number of epochs to train the generator for each IRGAN step
d_epochs – Number of epochs to train the discriminator for each IRGAN step
g_pretrain_epochs – Number of epochs to pre-train the generator
d_pretrain_epochs – Number of epochs to pre-train the discriminator
sample_lambda – Temperature parameter
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  IRGAN:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    l_w: 0.1
    l_b: 0.001
    l_gan: 0.001
    predict_model: generator
    g_epochs: 5
    d_epochs: 1
    g_pretrain_epochs: 10
    d_pretrain_epochs: 10
    sample_lambda: 0.2
CFGAN¶
class elliot.recommender.gan.CFGAN.cfgan.CFGAN(data, config, params, *args, **kwargs)
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
CFGAN: A Generic Collaborative Filtering Framework based on Generative Adversarial Networks
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
l_w – Regularization coefficient
l_b – Regularization coefficient of bias
l_gan – Adversarial regularization coefficient
g_epochs – Number of epochs to train the generator for each CFGAN step
d_epochs – Number of epochs to train the discriminator for each CFGAN step
s_zr – Sampling parameter of zero-reconstruction
s_pm – Sampling parameter of partial-masking
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  CFGAN:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    l_w: 0.1
    l_b: 0.001
    l_gan: 0.001
    g_epochs: 5
    d_epochs: 1
    s_zr: 0.001
    s_pm: 0.001
Graph-based¶
Elliot integrates, to date, 50 recommendation models partitioned into two sets. The first set includes 38 popular models implemented in at least two of the frameworks reviewed in this work (i.e., adopting a framework-wise popularity notion).
Summary¶
- LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation
- Neural Graph Collaborative Filtering (NGCF)
LightGCN¶
class elliot.recommender.graph_based.lightgcn.LightGCN.LightGCN(data, config, params, *args, **kwargs)
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation
For further details, please refer to the paper
- Parameters
lr – Learning rate
epochs – Number of epochs
factors – Number of latent factors
batch_size – Batch size
l_w – Regularization coefficient
n_layers – Number of embedding propagation layers
n_fold – Number of folds to split the adjacency matrix into sub-matrices and ease the computation
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  LightGCN:
    meta:
      save_recs: True
    lr: 0.0005
    epochs: 50
    factors: 64
    batch_size: 256
    l_w: 0.1
    n_layers: 1
    n_fold: 5
NGCF¶
class elliot.recommender.graph_based.ngcf.NGCF.NGCF(data, config, params, *args, **kwargs)
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Neural Graph Collaborative Filtering
For further details, please refer to the paper
- Parameters
lr – Learning rate
epochs – Number of epochs
factors – Number of latent factors
batch_size – Batch size
l_w – Regularization coefficient
weight_size – Tuple with number of units for each embedding propagation layer
node_dropout – Tuple with dropout rate for each node
message_dropout – Tuple with dropout rate for each embedding propagation layer
n_fold – Number of folds to split the adjacency matrix into sub-matrices and ease the computation
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  NGCF:
    meta:
      save_recs: True
    lr: 0.0005
    epochs: 50
    factors: 64
    batch_size: 256
    l_w: 0.1
    weight_size: (64,)
    node_dropout: ()
    message_dropout: (0.1,)
    n_fold: 5
Knowledge-aware¶
Elliot integrates, to date, 50 recommendation models partitioned into two sets. The first set includes 38 popular models implemented in at least two of the frameworks reviewed in this work (i.e., adopting a framework-wise popularity notion).
Summary¶
- Knowledge-aware Hybrid Factorization Machines (KaHFM)
- Knowledge-aware Hybrid Factorization Machines (Tensorflow Batch Variant)
- Knowledge-aware Hybrid Factorization Machines (Tensorflow Embedding-based Variant)
KaHFM¶
class elliot.recommender.knowledge_aware.kaHFM.ka_hfm.KaHFM(data, config, params, *args, **kwargs)
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Knowledge-aware Hybrid Factorization Machines
Vito Walter Anelli and Tommaso Di Noia and Eugenio Di Sciascio and Azzurra Ragone and Joseph Trotta “How to Make Latent Factors Interpretable by Feeding Factorization Machines with Knowledge Graphs”, ISWC 2019 Best student Research Paper For further details, please refer to the paper
Vito Walter Anelli and Tommaso Di Noia and Eugenio Di Sciascio and Azzurra Ragone and Joseph Trotta “Semantic Interpretation of Top-N Recommendations”, IEEE TKDE 2020 For further details, please refer to the paper
- Parameters
lr – learning rate (default: 0.05)
bias_regularization – Bias regularization (default: 0)
user_regularization – User regularization (default: 0.0025)
positive_item_regularization – regularization for positive (experienced) items (default: 0.0025)
negative_item_regularization – regularization for unknown items (default: 0.00025)
update_negative_item_factors – Boolean to update negative item factors (default: True)
update_users – Boolean to update user factors (default: True)
update_items – Boolean to update item factors (default: True)
update_bias – Boolean to update bias value (default: True)
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  KaHFM:
    meta:
      hyper_max_evals: 20
      hyper_opt_alg: tpe
      validation_rate: 1
      verbose: True
      save_weights: True
      save_recs: True
      validation_metric: nDCG@10
    epochs: 100
    batch_size: -1
    lr: 0.05
    bias_regularization: 0
    user_regularization: 0.0025
    positive_item_regularization: 0.0025
    negative_item_regularization: 0.00025
    update_negative_item_factors: True
    update_users: True
    update_items: True
    update_bias: True
KaHFM Batch¶
class elliot.recommender.knowledge_aware.kaHFM_batch.kahfm_batch.KaHFMBatch(data, config, params, *args, **kwargs)
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Knowledge-aware Hybrid Factorization Machines (Tensorflow Batch Variant)
Vito Walter Anelli and Tommaso Di Noia and Eugenio Di Sciascio and Azzurra Ragone and Joseph Trotta “How to Make Latent Factors Interpretable by Feeding Factorization Machines with Knowledge Graphs”, ISWC 2019 Best student Research Paper For further details, please refer to the paper
Vito Walter Anelli and Tommaso Di Noia and Eugenio Di Sciascio and Azzurra Ragone and Joseph Trotta “Semantic Interpretation of Top-N Recommendations”, IEEE TKDE 2020 For further details, please refer to the paper
- Parameters
lr – learning rate (default: 0.0001)
l_w – Weight regularization (default: 0.005)
l_b – Bias regularization (default: 0)
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  KaHFMBatch:
    meta:
      hyper_max_evals: 20
      hyper_opt_alg: tpe
      validation_rate: 1
      verbose: True
      save_weights: True
      save_recs: True
      validation_metric: nDCG@10
    epochs: 100
    batch_size: -1
    lr: 0.0001
    l_w: 0.005
    l_b: 0
KaHFM Embeddings¶
class elliot.recommender.knowledge_aware.kahfm_embeddings.kahfm_embeddings.KaHFMEmbeddings(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Knowledge-aware Hybrid Factorization Machines (Tensorflow Embedding-based Variant)
Vito Walter Anelli and Tommaso Di Noia and Eugenio Di Sciascio and Azzurra Ragone and Joseph Trotta “How to Make Latent Factors Interpretable by Feeding Factorization Machines with Knowledge Graphs”, ISWC 2019 Best student Research Paper For further details, please refer to the paper
Vito Walter Anelli and Tommaso Di Noia and Eugenio Di Sciascio and Azzurra Ragone and Joseph Trotta “Semantic Interpretation of Top-N Recommendations”, IEEE TKDE 2020 For further details, please refer to the paper
- Parameters
lr – learning rate (default: 0.0001)
l_w – Weight regularization (default: 0.005)
l_b – Bias regularization (default: 0)
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  KaHFMEmbeddings:
    meta:
      hyper_max_evals: 20
      hyper_opt_alg: tpe
      validation_rate: 1
      verbose: True
      save_weights: True
      save_recs: True
      validation_metric: nDCG@10
    epochs: 100
    batch_size: -1
    lr: 0.0001
    l_w: 0.005
    l_b: 0
Latent Factor Models¶
Elliot integrates, to date, 50 recommendation models partitioned into two sets. The first set includes 38 popular models implemented in at least two of the frameworks reviewed in this work (i.e., adopting a framework-wise popularity notion).
Summary¶
BPRMF: Bayesian Personalized Ranking with Matrix Factorization
BPRMF_batch: Batch Bayesian Personalized Ranking with Matrix Factorization
BPRSlim: BPR Sparse Linear Methods
CML: Collaborative Metric Learning
FFM: Field-aware Factorization Machines
FISM: Factored Item Similarity Models
FM: Factorization Machines
FunkSVD
LogisticMatrixFactorization: Logistic Matrix Factorization
MF: Matrix Factorization
NonNegMF: Non-Negative Matrix Factorization
PMF: Probabilistic Matrix Factorization
PureSVD
Slim: Sparse Linear Methods
SVDpp: SVD++
WRMF: Weighted Regularized Matrix Factorization
BPRMF¶
class elliot.recommender.latent_factor_models.BPRMF.BPRMF.BPRMF(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Bayesian Personalized Ranking with Matrix Factorization
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
bias_regularization – Regularization coefficient for the bias
user_regularization – Regularization coefficient for user latent factors
positive_item_regularization – Regularization coefficient for positive item latent factors
negative_item_regularization – Regularization coefficient for negative item latent factors
update_negative_item_factors –
update_users –
update_items –
update_bias –
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  BPRMF:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    bias_regularization: 0
    user_regularization: 0.0025
    positive_item_regularization: 0.0025
    negative_item_regularization: 0.0025
    update_negative_item_factors: True
    update_users: True
    update_items: True
    update_bias: True
BPRMF_batch¶
class elliot.recommender.latent_factor_models.BPRMF_batch.BPRMF_batch.BPRMF_batch(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Batch Bayesian Personalized Ranking with Matrix Factorization
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
l_w – Regularization coefficient for latent factors
l_b – Regularization coefficient for bias
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  BPRMF_batch:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    l_w: 0.1
    l_b: 0.001
BPRSlim¶
class elliot.recommender.latent_factor_models.BPRSlim.bprslim.BPRSlim(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
BPR Sparse Linear Methods
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
lj_reg – Regularization coefficient for positive items
li_reg – Regularization coefficient for negative items
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  BPRSlim:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    lj_reg: 0.001
    li_reg: 0.1
CML¶
class elliot.recommender.latent_factor_models.CML.CML.CML(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Collaborative Metric Learning
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
l_w – Regularization coefficient for latent factors
l_b – Regularization coefficient for bias
margin – Safety margin size
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  CML:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    l_w: 0.001
    l_b: 0.001
    margin: 0.5
FFM¶
class elliot.recommender.latent_factor_models.FFM.field_aware_factorization_machine.FFM(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Field-aware Factorization Machines
For further details, please refer to the paper
- Parameters
factors – Number of factors of feature embeddings
lr – Learning rate
reg – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  FFM:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    reg: 0.1
FISM¶
class elliot.recommender.latent_factor_models.FISM.FISM.FISM(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
FISM: Factored Item Similarity Models
For further details, please refer to the paper
- Parameters
factors – Number of factors of feature embeddings
lr – Learning rate
l_w – Regularization coefficient for latent factors
l_b – Regularization coefficient for bias
alpha – Alpha parameter (a value between 0 and 1)
neg_ratio –
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  FISM:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    l_w: 0.001
    l_b: 0.001
    alpha: 0.5
    neg_ratio:
FM¶
class elliot.recommender.latent_factor_models.FM.factorization_machine.FM(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Factorization Machines
For further details, please refer to the paper
- Parameters
factors – Number of factors of feature embeddings
lr – Learning rate
reg – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  FM:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    reg: 0.1
FunkSVD¶
class elliot.recommender.latent_factor_models.FunkSVD.funk_svd.FunkSVD(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
For further details, please refer to the paper
- Parameters
factors – Number of factors of feature embeddings
lr – Learning rate
reg_w – Regularization coefficient for latent factors
reg_b – Regularization coefficient for bias
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  FunkSVD:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    reg_w: 0.1
    reg_b: 0.001
LogisticMatrixFactorization¶
class elliot.recommender.latent_factor_models.LogisticMF.logistic_matrix_factorization.LogisticMatrixFactorization(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Logistic Matrix Factorization
For further details, please refer to the paper
- Parameters
factors – Number of factors of feature embeddings
lr – Learning rate
reg – Regularization coefficient
alpha – Parameter for confidence estimation
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  LogisticMatrixFactorization:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    reg: 0.1
    alpha: 0.5
NonNegMF¶
class elliot.recommender.latent_factor_models.NonNegMF.non_negative_matrix_factorization.NonNegMF(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Non-Negative Matrix Factorization
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
reg – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  NonNegMF:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    reg: 0.1
PMF¶
class elliot.recommender.latent_factor_models.PMF.probabilistic_matrix_factorization.PMF(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Probabilistic Matrix Factorization
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
reg – Regularization coefficient
gaussian_variance – Variance of the Gaussian distribution
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  PMF:
    meta:
      save_recs: True
    epochs: 10
    factors: 50
    lr: 0.001
    reg: 0.0025
    gaussian_variance: 0.1
PureSVD¶
class elliot.recommender.latent_factor_models.PureSVD.pure_svd.PureSVD(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
seed – Random seed
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  PureSVD:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    seed: 42
Slim¶
class elliot.recommender.latent_factor_models.Slim.slim.Slim(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Sparse Linear Methods
For further details, please refer to the paper
- Parameters
l1_ratio –
alpha –
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  Slim:
    meta:
      save_recs: True
    epochs: 10
    l1_ratio: 0.001
    alpha: 0.001
SVDpp¶
class elliot.recommender.latent_factor_models.SVDpp.svdpp.SVDpp(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
SVD++
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
reg_w – Regularization coefficient for latent factors
reg_b – Regularization coefficient for bias
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  SVDpp:
    meta:
      save_recs: True
    epochs: 10
    factors: 50
    lr: 0.001
    reg_w: 0.1
    reg_b: 0.001
WRMF¶
class elliot.recommender.latent_factor_models.WRMF.wrmf.WRMF(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Weighted Regularized Matrix Factorization
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
alpha –
reg – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  WRMF:
    meta:
      save_recs: True
    epochs: 10
    factors: 50
    alpha: 1
    reg: 0.1
Artificial Neural Networks¶
Elliot integrates, to date, 50 recommendation models partitioned into two sets. The first set includes 38 popular models implemented in at least two of the frameworks reviewed in this work (i.e., adopting a framework-wise popularity notion).
Summary¶
ConvMF: Convolutional Matrix Factorization for Document Context-Aware Recommendation
ConvNeuMF: Outer Product-based Neural Collaborative Filtering
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
DMF: Deep Matrix Factorization Models for Recommender Systems
GMF: Neural Collaborative Filtering
ItemAutoRec: AutoRec: Autoencoders Meet Collaborative Filtering (Item-based)
NAIS: Neural Attentive Item Similarity Model for Recommendation
NeuMF: Neural Collaborative Filtering
NFM: Neural Factorization Machines for Sparse Predictive Analytics
NPR: Neural Personalized Ranking for Image Recommendation (Model without visual features)
UserAutoRec: AutoRec: Autoencoders Meet Collaborative Filtering (User-based)
WideAndDeep: Wide & Deep Learning for Recommender Systems
ConvMF¶
class elliot.recommender.neural.ConvMF.convolutional_matrix_factorization.ConvMF(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Convolutional Matrix Factorization for Document Context-Aware Recommendation
For further details, please refer to the paper
- Parameters
embedding_size – Embedding dimension
lr – Learning rate
l_w – Regularization coefficient
l_b – Regularization coefficient of bias
cnn_channels – List of channels
cnn_kernels – List of kernels
cnn_strides – List of strides
dropout_prob – Dropout probability applied on the convolutional layers
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  ConvMF:
    meta:
      save_recs: True
    epochs: 10
    embedding_size: 100
    lr: 0.001
    l_w: 0.005
    l_b: 0.0005
    cnn_channels: (1, 32, 32)
    cnn_kernels: (2,2)
    cnn_strides: (2,2)
    dropout_prob: 0
ConvNeuMF¶
class elliot.recommender.neural.ConvNeuMF.convolutional_neural_matrix_factorization.ConvNeuMF(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Outer Product-based Neural Collaborative Filtering
For further details, please refer to the paper
- Parameters
embedding_size – Embedding dimension
lr – Learning rate
l_w – Regularization coefficient
l_b – Regularization coefficient of bias
cnn_channels – List of channels
cnn_kernels – List of kernels
cnn_strides – List of strides
dropout_prob – Dropout probability applied on the convolutional layers
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  ConvNeuMF:
    meta:
      save_recs: True
    epochs: 10
    embedding_size: 100
    lr: 0.001
    l_w: 0.005
    l_b: 0.0005
    cnn_channels: (1, 32, 32)
    cnn_kernels: (2,2)
    cnn_strides: (2,2)
    dropout_prob: 0
DeepFM¶
class elliot.recommender.neural.DeepFM.deep_fm.DeepFM(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
For further details, please refer to the paper
- Parameters
factors – Number of factors dimension
lr – Learning rate
l_w – Regularization coefficient
hidden_neurons – List of units for each layer
hidden_activations – List of activation functions
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  DeepFM:
    meta:
      save_recs: True
    epochs: 10
    factors: 100
    lr: 0.001
    l_w: 0.0001
    hidden_neurons: (64,32)
    hidden_activations: ('relu','relu')
DMF¶
class elliot.recommender.neural.DMF.deep_matrix_factorization.DMF(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Deep Matrix Factorization Models for Recommender Systems.
For further details, please refer to the paper
- Parameters
lr – Learning rate
reg – Regularization coefficient
user_mlp – List of units for each layer of the user MLP
item_mlp – List of units for each layer of the item MLP
similarity – Similarity function (e.g., cosine)
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  DMF:
    meta:
      save_recs: True
    epochs: 10
    lr: 0.0001
    reg: 0.001
    user_mlp: (64,32)
    item_mlp: (64,32)
    similarity: cosine
GMF¶
class elliot.recommender.neural.GeneralizedMF.generalized_matrix_factorization.GMF(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Neural Collaborative Filtering
For further details, please refer to the paper
- Parameters
mf_factors – Number of latent factors
lr – Learning rate
is_edge_weight_train – Whether the training uses edge weighting
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  GMF:
    meta:
      save_recs: True
    epochs: 10
    mf_factors: 10
    lr: 0.001
    is_edge_weight_train: True
ItemAutoRec¶
class elliot.recommender.neural.ItemAutoRec.itemautorec.ItemAutoRec(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
AutoRec: Autoencoders Meet Collaborative Filtering (Item-based)
For further details, please refer to the paper
- Parameters
hidden_neuron – List of units for each layer
lr – Learning rate
l_w – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  ItemAutoRec:
    meta:
      save_recs: True
    epochs: 10
    hidden_neuron: 500
    lr: 0.0001
    l_w: 0.001
NAIS¶
class elliot.recommender.neural.NAIS.nais.NAIS(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
NAIS: Neural Attentive Item Similarity Model for Recommendation
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
algorithm – Type of user-item factor operation (‘product’, ‘concat’)
weight_size – List of units for each layer
lr – Learning rate
l_w – Regularization coefficient
l_b – Bias regularization coefficient
alpha – Attention factor
beta – Smoothing exponent
neg_ratio – Ratio of negative sampled items, e.g., 0 = no items, 1 = all un-rated items
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  NAIS:
    meta:
      save_recs: True
    factors: 100
    algorithm: concat
    weight_size: 32
    lr: 0.001
    l_w: 0.001
    l_b: 0.001
    alpha: 0.5
    beta: 0.5
    neg_ratio: 0.5
NeuMF¶
class elliot.recommender.neural.NeuMF.neural_matrix_factorization.NeuMF(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Neural Collaborative Filtering
For further details, please refer to the paper
- Parameters
mf_factors – Number of MF latent factors
mlp_factors – Number of MLP latent factors
mlp_hidden_size – List of units for each layer
lr – Learning rate
dropout – Dropout rate
is_mf_train – Whether to train the MF embeddings
is_mlp_train – Whether to train the MLP layers
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  NeuMF:
    meta:
      save_recs: True
    epochs: 10
    mf_factors: 10
    mlp_factors: 10
    mlp_hidden_size: (64,32)
    lr: 0.001
    dropout: 0.0
    is_mf_train: True
    is_mlp_train: True
NFM¶
class elliot.recommender.neural.NFM.neural_fm.NFM(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Neural Factorization Machines for Sparse Predictive Analytics
For further details, please refer to the paper
- Parameters
factors – Number of factors dimension
lr – Learning rate
l_w – Regularization coefficient
hidden_neurons – List of units for each layer
hidden_activations – List of activation functions
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  NFM:
    meta:
      save_recs: True
    epochs: 10
    factors: 100
    lr: 0.001
    l_w: 0.0001
    hidden_neurons: (64,32)
    hidden_activations: ('relu','relu')
NPR¶
class elliot.recommender.neural.NPR.neural_personalized_ranking.NPR(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Neural Personalized Ranking for Image Recommendation (Model without visual features)
For further details, please refer to the paper
- Parameters
mf_factors – Number of MF latent factors
mlp_hidden_size – List of units for each layer
lr – Learning rate
l_w – Regularization coefficient
dropout – Dropout rate
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  NPR:
    meta:
      save_recs: True
    epochs: 10
    mf_factors: 100
    mlp_hidden_size: (64,32)
    lr: 0.001
    l_w: 0.001
    dropout: 0.45
UserAutoRec¶
class elliot.recommender.neural.UserAutoRec.userautorec.UserAutoRec(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
AutoRec: Autoencoders Meet Collaborative Filtering (User-based)
For further details, please refer to the paper
- Parameters
hidden_neuron – List of units for each layer
lr – Learning rate
l_w – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  UserAutoRec:
    meta:
      save_recs: True
    epochs: 10
    hidden_neuron: 500
    lr: 0.0001
    l_w: 0.001
WideAndDeep¶
class elliot.recommender.neural.WideAndDeep.wide_and_deep.WideAndDeep(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Wide & Deep Learning for Recommender Systems
(For now, available with knowledge-aware features)
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
mlp_hidden_size – List of units for each layer
lr – Learning rate
l_w – Regularization coefficient
l_b – Bias Regularization Coefficient
dropout_prob – Dropout rate
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  WideAndDeep:
    meta:
      save_recs: True
    epochs: 10
    factors: 50
    mlp_hidden_size: (32, 32, 1)
    lr: 0.001
    l_w: 0.005
    l_b: 0.0005
    dropout_prob: 0.0
Neighborhood-based Models¶
Elliot integrates, to date, 50 recommendation models partitioned into two sets. The first set includes 38 popular models implemented in at least two of the frameworks reviewed in this work (i.e., adopting a framework-wise popularity notion).
Summary¶
ItemKNN: Amazon.com recommendations: item-to-item collaborative filtering
UserKNN: GroupLens: An Open Architecture for Collaborative Filtering of Netnews
AttributeItemKNN: Attribute Item-kNN proposed in MyMediaLite Recommender System Library
AttributeUserKNN: Attribute User-kNN proposed in MyMediaLite Recommender System Library
ItemKNN¶
class elliot.recommender.NN.item_knn.item_knn.ItemKNN(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Amazon.com recommendations: item-to-item collaborative filtering
For further details, please refer to the paper
- Parameters
neighbors – Number of item neighbors
similarity – Similarity function
implementation – Implementation type (‘aiolli’, ‘classical’)
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  ItemKNN:
    meta:
      save_recs: True
    neighbors: 40
    similarity: cosine
    implementation: aiolli
UserKNN¶
class elliot.recommender.NN.user_knn.user_knn.UserKNN(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
GroupLens: An Open Architecture for Collaborative Filtering of Netnews
For further details, please refer to the paper
- Parameters
neighbors – Number of user neighbors
similarity – Similarity function
implementation – Implementation type (‘aiolli’, ‘classical’)
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  UserKNN:
    meta:
      save_recs: True
    neighbors: 40
    similarity: cosine
    implementation: aiolli
AttributeItemKNN¶
class elliot.recommender.NN.attribute_item_knn.attribute_item_knn.AttributeItemKNN(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Attribute Item-kNN proposed in MyMediaLite Recommender System Library
For further details, please refer to the paper
- Parameters
neighbors – Number of item neighbors
similarity – Similarity function
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  AttributeItemKNN:
    meta:
      save_recs: True
    neighbors: 40
    similarity: cosine
AttributeUserKNN¶
class elliot.recommender.NN.attribute_user_knn.attribute_user_knn.AttributeUserKNN(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Attribute User-kNN proposed in MyMediaLite Recommender System Library
For further details, please refer to the paper
- Parameters
neighbors – Number of user neighbors
similarity – Similarity function
profile – Profile type (‘binary’, ‘tfidf’)
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  AttributeUserKNN:
    meta:
      save_recs: True
    neighbors: 40
    similarity: cosine
    profile: binary
Unpersonalized Recommenders¶
Elliot integrates two unpersonalized recommenders, Most Popular and Random. These models do not exploit individual user preferences and are typically adopted as baselines to contextualize the performance of the personalized models.
Summary¶
MostPop: Most Popular recommender
Random: Random recommender
Most Popular¶
The Most Popular recommender provides every user with the same recommendation list, ranking items by the number of interactions they received in the training set. It is a common non-personalized baseline.
class elliot.recommender.unpersonalized.most_popular.most_popular.MostPop(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  MostPop:
    meta:
      save_recs: True
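For intuition, here is a minimal standalone Python sketch of the most-popular heuristic; the function name and data layout are illustrative assumptions and do not mirror Elliot's internal implementation (a complete implementation would typically also filter each user's already-seen items).

from collections import Counter

def most_pop(train_interactions, users, k=10):
    # train_interactions: iterable of (user, item) pairs from the training set
    popularity = Counter(item for _, item in train_interactions)
    # every user receives the same list of the k most interacted items
    top_items = [item for item, _ in popularity.most_common(k)]
    return {user: top_items for user in users}

train = [(1, 10), (2, 10), (3, 10), (1, 20), (2, 20), (3, 30)]
print(most_pop(train, users=[1, 2, 3], k=2))
# {1: [10, 20], 2: [10, 20], 3: [10, 20]}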
Random Recommender¶
The Random recommender builds recommendation lists by sampling items uniformly at random. It is mainly useful as a lower-bound baseline for the personalized models.
class elliot.recommender.unpersonalized.random_recommender.Random.Random(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  Random:
    meta:
      save_recs: True
    random_seed: 42
Visual Models¶
Elliot integrates, to date, 50 recommendation models partitioned into two sets. The first set includes 38 popular models implemented in at least two of the frameworks reviewed in this work (i.e., adopting a framework-wise popularity notion).
Summary¶
ACF: Attentive Collaborative Filtering: Multimedia Recommendation with Item- and Component-Level Attention
DeepStyle: DeepStyle: Learning User Preferences for Visual Recommendation
DVBPR: Visually-Aware Fashion Recommendation and Design with Generative Image Models
VBPR: VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback
VNPR: Visual Neural Personalized Ranking for Image Recommendation
AMR: Adversarial Multimedia Recommender
ACF¶
class elliot.recommender.visual_recommenders.ACF.ACF.ACF(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Attentive Collaborative Filtering: Multimedia Recommendation with Item- and Component-Level Attention
For further details, please refer to the paper
- Parameters
lr – Learning rate
epochs – Number of epochs
factors – Number of latent factors
batch_size – Batch size
l_w – Regularization coefficient
layers_component – Tuple with number of units for each attentive layer (component-level)
layers_item – Tuple with number of units for each attentive layer (item-level)
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  ACF:
    meta:
      save_recs: True
    lr: 0.0005
    epochs: 50
    factors: 100
    batch_size: 128
    l_w: 0.000025
    layers_component: (64, 1)
    layers_item: (64, 1)
DeepStyle¶
class elliot.recommender.visual_recommenders.DeepStyle.DeepStyle.DeepStyle(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
DeepStyle: Learning User Preferences for Visual Recommendation
For further details, please refer to the paper
- Parameters
lr – Learning rate
epochs – Number of epochs
factors – Number of latent factors
batch_size – Batch size
l_w – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  DeepStyle:
    meta:
      save_recs: True
    lr: 0.0005
    epochs: 50
    factors: 100
    batch_size: 128
    l_w: 0.000025
DVBPR¶
class elliot.recommender.visual_recommenders.DVBPR.DVBPR.DVBPR(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Visually-Aware Fashion Recommendation and Design with Generative Image Models
For further details, please refer to the paper
- Parameters
lr – Learning rate
epochs – Number of epochs
factors – Number of latent factors
batch_size – Batch size
lambda_1 – Regularization coefficient
lambda_2 – CNN regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  DVBPR:
    meta:
      save_recs: True
    lr: 0.0001
    epochs: 50
    factors: 100
    batch_size: 128
    lambda_1: 0.0001
    lambda_2: 1.0
VBPR¶
class elliot.recommender.visual_recommenders.VBPR.VBPR.VBPR(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback
For further details, please refer to the paper
- Parameters
lr – Learning rate
epochs – Number of epochs
factors – Number of latent factors
factors_d – Dimension of visual factors
batch_size – Batch size
l_w – Regularization coefficient
l_b – Regularization coefficient of bias
l_e – Regularization coefficient of projection matrix
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  VBPR:
    meta:
      save_recs: True
    lr: 0.0005
    epochs: 50
    factors: 100
    factors_d: 20
    batch_size: 128
    l_w: 0.000025
    l_b: 0
    l_e: 0.002
VNPR¶
class elliot.recommender.visual_recommenders.VNPR.visual_neural_personalized_ranking.VNPR(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Visual Neural Personalized Ranking for Image Recommendation
For further details, please refer to the paper
- Parameters
lr – Learning rate
epochs – Number of epochs
mf_factors – Number of latent factors for the matrix factorization component
mlp_hidden_size – Tuple with number of units for each multi-layer perceptron layer
prob_keep_dropout – Dropout rate for multi-layer perceptron
batch_size – Batch size
l_w – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  VNPR:
    meta:
      save_recs: True
    lr: 0.001
    epochs: 50
    mf_factors: 10
    mlp_hidden_size: (32, 1)
    prob_keep_dropout: 0.2
    batch_size: 64
    l_w: 0.001
AMR¶
class elliot.recommender.adversarial.AMR.AMR(data, config, params, *args, **kwargs)[source]
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Adversarial Multimedia Recommender
For further details, please refer to the paper
- Parameters
factors – Number of latent factor
factors_d – Image-feature dimensionality
lr – Learning rate
l_w – Regularization coefficient
l_b – Regularization coefficient of bias
l_e – Regularization coefficient of image matrix embedding
eps – Perturbation Budget
l_adv – Adversarial regularization coefficient
adversarial_epochs – Adversarial epochs
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  AMR:
    meta:
      save_recs: True
    epochs: 10
    factors: 200
    factors_d: 20
    lr: 0.001
    l_w: 0.1
    l_b: 0.001
    l_e: 0.1
    eps: 0.1
    l_adv: 0.001
    adversarial_epochs: 5
Metrics¶
Elliot provides 36 evaluation metrics, partitioned into seven families: Accuracy, Rating-Error, Coverage, Novelty, Diversity, Bias, and Fairness. It is worth mentioning that Elliot exposes the largest number of metrics among the reviewed frameworks and is the only one considering bias and fairness measures. Moreover, the user can choose any metric to drive the model selection and the tuning.
All the metrics inherit from a common abstract class:
elliot.evaluation.metrics.base_metric.BaseMetric
Accuracy
AUC: Area Under the Curve
GAUC: Group Area Under the Curve
LAUC: Limited Area Under the Curve
DSC: Sørensen–Dice coefficient
F1: F-Measure
ExtendedF1: Extended F-Measure
HR: Hit Rate
MAP: Mean Average Precision
MAR: Mean Average Recall
MRR: Mean Reciprocal Rank
nDCG: normalized Discounted Cumulative Gain
Precision: Precision-measure
Recall: Recall-measure
Bias
ACLT: Average coverage of long tail items
APLT: Average percentage of long tail items
ARP: Average Recommendation Popularity
PopREO: Popularity-based Ranking-based Equal Opportunity
Extended Popularity-based Ranking-based Equal Opportunity
Popularity-based Ranking-based Statistical Parity
Extended Popularity-based Ranking-based Statistical Parity
Coverage
ItemCoverage: Item Coverage
NumRetrieved: Number of Recommendations Retrieved
UserCoverage: User Coverage
UserCoverageAtN: User Coverage on Top-N recommendation lists
Diversity
Gini: Gini Index
SEntropy: Shannon Entropy
SRecall: Subtopic Recall
Fairness
Bias Disparity - Standard
Bias Disparity - Bias Recommendations
Bias Disparity - Bias Source
Item MAD Ranking-based
Item MAD Rating-based
User MAD Ranking-based
User MAD Rating-based
Ranking-based Equal Opportunity
Ranking-based Statistical Parity
Novelty
EFD: Expected Free Discovery (EFD)
ExtendedEFD: Extended EFD
EPC: Expected Popularity Complement (EPC)
ExtendedEPC: Extended EPC
Rating
MAE: Mean Absolute Error
MSE: Mean Squared Error
RMSE: Root Mean Squared Error
Metrics Summary¶
Accuracy¶
Elliot integrates the following accuracy metrics.
Summary¶
AUC: Area Under the Curve
GAUC: Group Area Under the Curve
LAUC: Limited Area Under the Curve
DSC: Sørensen–Dice coefficient
F1: F-Measure
ExtendedF1: Extended F-Measure
HR: Hit Rate
MAP: Mean Average Precision
MAR: Mean Average Recall
MRR: Mean Reciprocal Rank
nDCG: normalized Discounted Cumulative Gain
Precision: Precision-measure
Recall: Recall-measure
AUC¶
class elliot.evaluation.metrics.accuracy.AUC.auc.AUC(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Area Under the Curve
This class represents the implementation of the global AUC recommendation metric. Passing ‘AUC’ to the metrics list will enable the computation of the metric.
For further details, please refer to the AUC
Note
This metric does not calculate the group-based AUC, which considers the AUC scores averaged across users. It is also not limited to k. Instead, it calculates the score on the entire prediction results, regardless of the users.
\[\mathrm {AUC} = \frac{\sum\limits_{i=1}^M rank_{i} - \frac {{M} \times {(M+1)}}{2}} {{{M} \times {N}}}\]
\(M\) is the number of positive samples.
\(N\) is the number of negative samples.
\(rank_i\) is the ascending rank of the ith positive sample.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [AUC]
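For intuition, here is a self-contained Python sketch of the closed form above; it is illustrative only (ties are not handled) and is not Elliot's internal implementation.

import numpy as np

def global_auc(scores_pos, scores_neg):
    # ascending ranks (starting at 1) of all scored samples
    scores = np.concatenate([scores_pos, scores_neg])
    ranks = scores.argsort().argsort() + 1
    m, n = len(scores_pos), len(scores_neg)
    # AUC = (sum of positive ranks - M(M+1)/2) / (M * N)
    return (ranks[:m].sum() - m * (m + 1) / 2) / (m * n)

print(global_auc(np.array([0.9, 0.7]), np.array([0.2, 0.8])))  # 0.75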
GAUC¶
class elliot.evaluation.metrics.accuracy.AUC.gauc.GAUC(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Group Area Under the Curve
This class represents the implementation of the GroupAUC recommendation metric. Passing ‘GAUC’ to the metrics list will enable the computation of the metric.
“Deep Interest Network for Click-Through Rate Prediction” KDD ‘18 by Zhou, et al.
For further details, please refer to the paper
Note
It calculates the AUC score for each user and then obtains GAUC as a weighted average of the per-user AUC scores. It is also not limited to k. Because scores_tensor is padded with -np.inf in RankEvaluator, the padding value would influence the ranks of the original items; therefore, a descending sort is used together with an identity transformation of the AUC formula, shown in the auc_ function. For readability, the code is not simplified.
\[\mathrm {GAUC} = \frac {{{M} \times {(M+N+1)} - \frac{M \times (M+1)}{2}} - \sum\limits_{i=1}^M rank_{i}} {{M} \times {N}}\]
\(M\) is the number of positive samples.
\(N\) is the number of negative samples.
\(rank_i\) is the descending rank of the ith positive sample.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [GAUC]
LAUC¶
class elliot.evaluation.metrics.accuracy.AUC.lauc.LAUC(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Limited Area Under the Curve
This class represents the implementation of the Limited AUC recommendation metric. Passing ‘LAUC’ to the metrics list will enable the computation of the metric.
“Setting Goals and Choosing Metrics for Recommender System Evaluations” by Gunnar Schröder, et al.
For further details, please refer to the paper
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [LAUC]
DSC¶
class elliot.evaluation.metrics.accuracy.DSC.dsc.DSC(recommendations, config, params, eval_objects, additional_data)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Sørensen–Dice coefficient
This class represents the implementation of the Sørensen–Dice coefficient recommendation metric. Passing ‘DSC’ to the metrics list will enable the computation of the metric.
For further details, please refer to the page
\[\mathrm {DSC@K} = \frac{1+\beta^{2}}{\frac{1}{\text { metric_0@k }}+\frac{\beta^{2}}{\text { metric_1@k }}}\]
- Parameters
beta – the beta coefficient (default: 1)
metric_0 – First considered metric (default: Precision)
metric_1 – Second considered metric (default: Recall)
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: DSC
    beta: 1
    metric_0: Precision
    metric_1: Recall
F1¶
class elliot.evaluation.metrics.accuracy.f1.f1.F1(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
F-Measure
This class represents the implementation of the F-score recommendation metric. Passing ‘F1’ to the metrics list will enable the computation of the metric.
For further details, please refer to the paper
\[\mathrm {F1@K} = \frac{1+\beta^{2}}{\frac{1}{\text { precision@k }}+\frac{\beta^{2}}{\text { recall@k }}}\]
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [F1]
Extended F1¶
class elliot.evaluation.metrics.accuracy.f1.extended_f1.ExtendedF1(recommendations, config, params, eval_objects, additional_data)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Extended F-Measure
This class represents the implementation of the Extended F-score recommendation metric. Passing ‘ExtendedF1’ to the metrics list will enable the computation of the metric.
“Evaluating Recommender Systems” Gunawardana, Asela and Shani, Guy, In Recommender systems handbook pages 265–308, 2015
For further details, please refer to the paper
\[\mathrm {ExtendedF1@K} =\frac{2}{\frac{1}{\text { metric_0@k }}+\frac{1}{\text { metric_1@k }}}\]
- Parameters
metric_0 – First considered metric (default: Precision)
metric_1 – Second considered metric (default: Recall)
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: ExtendedF1
    metric_0: Precision
    metric_1: Recall
HR¶
class elliot.evaluation.metrics.accuracy.hit_rate.hit_rate.HR(recommendations: Dict[int, List[Tuple[int, float]]], config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Hit Rate
This class represents the implementation of the Hit Rate recommendation metric. Passing ‘HR’ to the metrics list will enable the computation of the metric.
For further details, please refer to the link
\[\mathrm {HR@K} =\frac{Number \space of \space Hits @K}{|GT|}\]
\(HR\) is the number of users with a positive sample in the recommendation list.
\(GT\) is the total number of samples in the test set.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [HR]
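A minimal Python sketch of the computation, assuming per-user ranked lists and test-set ground truth (illustrative only, not Elliot's code):

def hit_rate_at_k(recs, ground_truth, k=10):
    # recs: {user: ranked list of items}; ground_truth: {user: set of test items}
    hits = sum(len(set(items[:k]) & ground_truth[u]) for u, items in recs.items())
    total = sum(len(gt) for gt in ground_truth.values())  # |GT|
    return hits / total

print(hit_rate_at_k({1: [10, 20, 30], 2: [40, 50, 60]}, {1: {20}, 2: {70}}, k=3))  # 0.5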
MAP¶
class elliot.evaluation.metrics.accuracy.map.map.MAP(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Mean Average Precision
This class represents the implementation of the Mean Average Precision recommendation metric. Passing ‘MAP’ to the metrics list will enable the computation of the metric.
For further details, please refer to the link
Note
In this case the normalization factor used is \(\frac{1}{\min (m,N)}\), which prevents the AP score from being unfairly suppressed when the number of recommendations cannot possibly capture all the correct ones.
\[\begin{split}\begin{align*} \mathrm{AP@N} &= \frac{1}{\mathrm{min}(m,N)}\sum_{k=1}^N P(k) \cdot rel(k) \\ \mathrm{MAP@N}& = \frac{1}{|U|}\sum_{u=1}^{|U|}(\mathrm{AP@N})_u \end{align*}\end{split}\]
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [MAP]
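A worked Python sketch of AP@N with the min(m, N) normalization discussed in the note (illustrative only; it assumes every user has at least one relevant item):

def average_precision(ranked, relevant, n):
    hits, score = 0, 0.0
    for k, item in enumerate(ranked[:n], start=1):
        if item in relevant:
            hits += 1
            score += hits / k          # P(k) * rel(k)
    return score / min(len(relevant), n)

def map_at_n(recs, ground_truth, n=10):
    aps = [average_precision(items, ground_truth[u], n) for u, items in recs.items()]
    return sum(aps) / len(aps)

print(map_at_n({1: [1, 2, 3]}, {1: {1, 3}}, n=3))  # (1/1 + 2/3) / 2 ≈ 0.833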
MAR¶
class elliot.evaluation.metrics.accuracy.mar.mar.MAR(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Mean Average Recall
This class represents the implementation of the Mean Average Recall recommendation metric. Passing ‘MAR’ to the metrics list will enable the computation of the metric.
For further details, please refer to the link
\[\begin{split}\begin{align*} \mathrm{Recall@N} &= \frac{1}{\mathrm{min}(m,|rel(k)|)}\sum_{k=1}^N P(k) \cdot rel(k) \\ \mathrm{MAR@N}& = \frac{1}{|U|}\sum_{u=1}^{|U|}(\mathrm{Recall@N})_u \end{align*}\end{split}\]
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [MAR]
MRR¶
class elliot.evaluation.metrics.accuracy.mrr.mrr.MRR(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Mean Reciprocal Rank
This class represents the implementation of the Mean Reciprocal Rank recommendation metric. Passing ‘MRR’ to the metrics list will enable the computation of the metric.
For further details, please refer to the link
\[\mathrm {MRR} = \frac{1}{|{U}|} \sum_{i=1}^{|{U}|} \frac{1}{rank_i}\]
\(U\) is the number of users, \(rank_i\) is the rank of the first item in the recommendation list in the test set results for user \(i\).
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [MRR]
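A minimal Python sketch (illustrative only): the reciprocal rank of the first relevant item, averaged over users.

def mrr(recs, ground_truth):
    total = 0.0
    for u, items in recs.items():
        for rank, item in enumerate(items, start=1):
            if item in ground_truth[u]:
                total += 1.0 / rank    # first relevant item only
                break
    return total / len(recs)

print(mrr({1: [5, 7, 9], 2: [3, 4]}, {1: {7}, 2: {8}}))  # (1/2 + 0) / 2 = 0.25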
nDCG¶
class elliot.evaluation.metrics.accuracy.ndcg.ndcg.NDCG(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
normalized Discounted Cumulative Gain
This class represents the implementation of the nDCG recommendation metric.
For further details, please refer to the link
\[\begin{split}\begin{gather} \mathrm {DCG@K}=\sum_{i=1}^{K} \frac{2^{rel_i}-1}{\log_{2}{(i+1)}}\\ \mathrm {IDCG@K}=\sum_{i=1}^{K}\frac{1}{\log_{2}{(i+1)}}\\ \mathrm {NDCG_u@K}=\frac{DCG_u@K}{IDCG_u@K}\\ \mathrm {NDCG@K}=\frac{\sum\nolimits_{u \in U^{te}} NDCG_u@K}{|U^{te}|} \end{gather}\end{split}\]
\(K\) stands for recommending \(K\) items.
And the \(rel_i\) is the relevance of the item in position \(i\) in the recommendation list.
\(2^{rel_i}-1\) equals 1 if the item is a hit, and 0 otherwise.
\(U^{te}\) is the set of all users in the test set.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [nDCG]
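For intuition, a self-contained Python sketch with the binary gains above; unlike the literal IDCG@K formula, the ideal DCG here is computed over the user's relevant items, as is conventional (illustrative only, not Elliot's implementation):

import math

def ndcg_at_k(recs, ground_truth, k=10):
    total = 0.0
    for u, items in recs.items():
        dcg = sum((2 ** (i in ground_truth[u]) - 1) / math.log2(pos + 1)
                  for pos, i in enumerate(items[:k], start=1))
        ideal_hits = min(len(ground_truth[u]), k)
        idcg = sum(1 / math.log2(pos + 1) for pos in range(1, ideal_hits + 1))
        total += dcg / idcg if idcg > 0 else 0.0
    return total / len(recs)

print(ndcg_at_k({1: [10, 20, 30]}, {1: {20, 30}}, k=3))  # ≈ 0.693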
Precision¶
class elliot.evaluation.metrics.accuracy.precision.precision.Precision(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Precision-measure
This class represents the implementation of the Precision recommendation metric.
For further details, please refer to the link
\[\mathrm {Precision@K} = \frac{|Rel_u \cap Rec_u|}{|Rec_u|}\]
\(Rel_u\) is the set of items relevant to user \(u\),
\(Rec_u\) is the set of top-K items recommended to user \(u\).
We obtain the result by averaging \(Precision@K\) over all users.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [Precision]
Recall¶
class elliot.evaluation.metrics.accuracy.recall.recall.Recall(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Recall-measure
This class represents the implementation of the Recall recommendation metric.
For further details, please refer to the link
\[\mathrm {Recall@K} = \frac{|Rel_u\cap Rec_u|}{|Rel_u|}\]
\(Rel_u\) is the set of items relevant to user \(u\),
\(Rec_u\) is the set of top-K items recommended to user \(u\).
We obtain the result by averaging \(Recall@K\) over all users.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [Recall]
Rating¶
Elliot integrates the following ratings-based error metrics.
Summary¶
MAE: Mean Absolute Error
MSE: Mean Squared Error
RMSE: Root Mean Squared Error
MAE¶
class elliot.evaluation.metrics.rating.mae.mae.MAE(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Mean Absolute Error
This class represents the implementation of the Mean Absolute Error recommendation metric.
For further details, please refer to the link
\[\mathrm{MAE}=\frac{1}{|{T}|} \sum_{(u, i) \in {T}}\left|\hat{r}_{u i}-r_{u i}\right|\]
\(T\) is the test set, \(\hat{r}_{u i}\) is the score predicted by the model,
\(r_{u i}\) the actual score of the test set.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [MAE]
MSE¶
class elliot.evaluation.metrics.rating.mse.mse.MSE(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Mean Squared Error
This class represents the implementation of the Mean Squared Error recommendation metric.
For further details, please refer to the link
\[\mathrm{MSE} = \frac{1}{|{T}|} \sum_{(u, i) \in {T}}(\hat{r}_{u i}-r_{u i})^{2}\]
\(T\) is the test set, \(\hat{r}_{u i}\) is the score predicted by the model,
\(r_{u i}\) the actual score of the test set.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [MSE]
RMSE¶
class elliot.evaluation.metrics.rating.rmse.rmse.RMSE(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Root Mean Squared Error
This class represents the implementation of the Root Mean Squared Error recommendation metric.
For further details, please refer to the link
\[\mathrm{RMSE} = \sqrt{\frac{1}{|{T}|} \sum_{(u, i) \in {T}}(\hat{r}_{u i}-r_{u i})^{2}}\]
\(T\) is the test set, \(\hat{r}_{u i}\) is the score predicted by the model,
\(r_{u i}\) the actual score of the test set.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [RMSE]
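The three rating-error metrics share the same structure; a compact Python sketch over a test set of (user, item, rating) triples and a dictionary of predicted scores (illustrative only):

import math

def rating_errors(test_set, predictions):
    errors = [predictions[(u, i)] - r for u, i, r in test_set]
    mae = sum(abs(e) for e in errors) / len(errors)
    mse = sum(e * e for e in errors) / len(errors)
    return mae, mse, math.sqrt(mse)  # MAE, MSE, RMSE

test = [(1, 10, 4.0), (1, 20, 2.0)]
preds = {(1, 10): 3.5, (1, 20): 3.0}
print(rating_errors(test, preds))  # (0.75, 0.625, ≈0.79)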
Coverage¶
Elliot integrates the following coverage metrics.
Summary¶
ItemCoverage: Item Coverage
NumRetrieved: Number of Recommendations Retrieved
UserCoverage: User Coverage
UserCoverageAtN: User Coverage on Top-N recommendation lists
Item Coverage¶
class elliot.evaluation.metrics.coverage.item_coverage.item_coverage.ItemCoverage(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Item Coverage
This class represents the implementation of the Item Coverage recommendation metric.
For further details, please refer to the book
Note
The simplest measure of catalog coverage is the percentage of all items that can ever be recommended. This measure can be computed in many cases directly given the algorithm and the input data set.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [ItemCoverage]
Number of Recommendations Retrieved¶
class elliot.evaluation.metrics.coverage.num_retrieved.num_retrieved.NumRetrieved(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Number of Recommendations Retrieved
This class represents the implementation of the Number of Recommendations Retrieved recommendation metric.
For further details, please refer to the link
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [NumRetrieved]
User Coverage¶
class elliot.evaluation.metrics.coverage.user_coverage.user_coverage.UserCoverage(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
User Coverage
This class represents the implementation of the User Coverage recommendation metric.
For further details, please refer to the book
Note
The proportion of users or user interactions for which the system can recommend items. In many applications the recommender may not provide recommendations for some users due to, e.g. low confidence in the accuracy of predictions for that user.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [UserCoverage]
User Coverage At N¶
class elliot.evaluation.metrics.coverage.user_coverage.user_coverage_at_n.UserCoverageAtN(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
User Coverage on Top-N rec. Lists
This class represents the implementation of the User Coverage recommendation metric.
For further details, please refer to the book
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [UserCoverageAtN]
Novelty¶
Elliot integrates the following novelty metrics.
Summary¶
EFD: Expected Free Discovery (EFD)
ExtendedEFD: Extended EFD
EPC: Expected Popularity Complement (EPC)
ExtendedEPC: Extended EPC
EFD¶
class elliot.evaluation.metrics.novelty.EFD.efd.EFD(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Expected Free Discovery (EFD)
This class represents the implementation of the Expected Free Discovery recommendation metric.
For further details, please refer to the paper
Note
EFD can be read as the expected ICF of seen recommended items
\[\mathrm {EFD}=C \sum_{i_{k} \in R} {disc}(k) p({rel} \mid i_{k}, u)( -\log _{2} p(i \mid {seen}, \theta))\]
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [EFD]
Extended EFD¶
class elliot.evaluation.metrics.novelty.EFD.extended_efd.ExtendedEFD(recommendations, config, params, eval_objects, additional_data)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Extended EFD
This class represents the implementation of the Extended Expected Free Discovery recommendation metric.
For further details, please refer to the paper
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: ExtendedEFD
EPC¶
class elliot.evaluation.metrics.novelty.EPC.epc.EPC(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Expected Popularity Complement (EPC)
This class represents the implementation of the Expected Popularity Complement recommendation metric.
For further details, please refer to the paper
Note
EPC can be read as the expected number of relevant recommended items not previously seen by the user
\[\mathrm{EPC}=C \sum_{i_{k} \in R} \operatorname{disc}(k) p\left(rel \mid i_{k}, u\right)\left(1-p\left(\operatorname{seen} \mid i_{k}\right)\right)\]
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [EPC]
Extended EPC¶
class elliot.evaluation.metrics.novelty.EPC.extended_epc.ExtendedEPC(recommendations, config, params, eval_objects, additional_data)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Extended EPC
This class represents the implementation of the Extended EPC recommendation metric.
For further details, please refer to the paper
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: ExtendedEPC
Diversity¶
Elliot integrates the following diversity metrics.
Summary¶
Gini: Gini Index
SEntropy: Shannon Entropy
SRecall: Subtopic Recall
Gini Index¶
class elliot.evaluation.metrics.diversity.gini_index.gini_index.GiniIndex(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Gini Index
This class represents the implementation of the Gini Index recommendation metric.
For further details, please refer to the book
\[\mathrm {GiniIndex}=\frac{1}{n-1} \sum_{j=1}^{n}(2 j-n-1) p\left(i_{j}\right)\]
\(i_{j}\) is the list of items ordered according to increasing \(p(i)\)
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [Gini]
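A worked Python sketch of the formula (illustrative only, not Elliot's code): items are sorted by increasing recommendation probability, and catalog items that were never recommended enter with p = 0.

def gini_index(item_counts, num_items=None):
    n = num_items or len(item_counts)
    total = sum(item_counts)
    p = sorted(c / total for c in item_counts)
    p = [0.0] * (n - len(p)) + p   # never-recommended items have p = 0
    return sum((2 * j - n - 1) * pj for j, pj in enumerate(p, start=1)) / (n - 1)

# two items recommended equally often out of a 4-item catalog
print(gini_index([5, 5], num_items=4))  # ≈ 0.667 (0.0 would be perfect equality)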
Shannon Entropy¶
class elliot.evaluation.metrics.diversity.shannon_entropy.shannon_entropy.ShannonEntropy(recommendations, config, params, eval_objects)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Shannon Entropy
This class represents the implementation of the Shannon Entropy recommendation metric.
For further details, please refer to the book
\[\mathrm {ShannonEntropy}=-\sum_{i=1}^{n} p(i) \log p(i)\]
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [SEntropy]
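A minimal Python sketch (illustrative only), where p(i) is the share of all recommendations received by item i; higher entropy means recommendations are spread more evenly across the catalog.

import math

def shannon_entropy(item_counts):
    total = sum(item_counts)
    return -sum((c / total) * math.log2(c / total) for c in item_counts if c > 0)

print(shannon_entropy([10, 10, 10, 10]))  # 2.0, uniform over 4 items
print(shannon_entropy([37, 1, 1, 1]))     # ≈ 0.50, concentrated on one item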
SRecall¶
class elliot.evaluation.metrics.diversity.SRecall.srecall.SRecall(recommendations, config, params, eval_objects, additional_data)[source]
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Subtopic Recall
This class represents the implementation of the Subtopic Recall (S-Recall) recommendation metric.
For further details, please refer to the paper
\[\mathrm {SRecall}=\frac{\left|\cup_{i=1}^{K} {subtopics}\left(d_{i}\right)\right|}{n_{A}}\]
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [SRecall]
Bias¶
Elliot integrates the following bias metrics.
Summary¶
ACLT: Average coverage of long tail items
APLT: Average percentage of long tail items
ARP: Average Recommendation Popularity
PopREO: Popularity-based Ranking-based Equal Opportunity
ExtendedPopREO: Extended Popularity-based Ranking-based Equal Opportunity
PopRSP: Popularity-based Ranking-based Statistical Parity
ExtendedPopRSP: Extended Popularity-based Ranking-based Statistical Parity
ACLT¶
class elliot.evaluation.metrics.bias.aclt.aclt.ACLT(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Average coverage of long tail items
This class represents the implementation of the Average coverage of long tail items recommendation metric.
For further details, please refer to the paper
\[\mathrm {ACLT}=\frac{1}{\left|U_{t}\right|} \sum_{u \in U_{t}} \sum_{i \in L_{u}} 1(i \in \Gamma)\]
\(U_{t}\) is the set of users in the test set.
\(L_{u}\) is the recommended list of items for user u.
\(1(i \in \Gamma)\) is an indicator function that equals 1 when i belongs to the long-tail set \(\Gamma\).
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [ACLT]
APLT¶
class elliot.evaluation.metrics.bias.aplt.aplt.APLT(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Average percentage of long tail items
This class represents the implementation of the Average percentage of long tail items recommendation metric.
For further details, please refer to the paper
\[\mathrm {APLT}=\frac{1}{\left|U_{t}\right|} \sum_{u \in U_{t}} \frac{|\{i \mid i \in(L_{u} \cap \sim \Phi)\}|}{|L_{u}|}\]
\(U_{t}\) is the set of users in the test set.
\(L_{u}\) is the recommended list of items for user u.
\(\sim \Phi\) is the set of long-tail items.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [APLT]
ARP¶
class elliot.evaluation.metrics.bias.arp.arp.ARP(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Average Recommendation Popularity
This class represents the implementation of the Average Recommendation Popularity recommendation metric.
For further details, please refer to the paper
\[\mathrm {ARP}=\frac{1}{\left|U_{t}\right|} \sum_{u \in U_{t}} \frac{\sum_{i \in L_{u}} \phi(i)}{\left|L_{u}\right|}\]
\(U_{t}\) is the set of users in the test set.
\(L_{u}\) is the recommended list of items for user u.
\(\phi(i)\) is the popularity (number of training interactions) of item i.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [ARP]
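To make the notation concrete, here is a small illustrative sketch (not the framework's code) that mirrors the ARP formula; recs and train_popularity are hypothetical structures holding the per-user recommendation lists and the training popularity phi(i):

def average_recommendation_popularity(recs, train_popularity):
    # recs: dict user -> list of recommended item ids
    # train_popularity: dict item -> number of training interactions, phi(i)
    per_user = [
        sum(train_popularity.get(i, 0) for i in items) / len(items)
        for items in recs.values()
    ]
    return sum(per_user) / len(per_user)

recs = {"u1": ["i1", "i2"], "u2": ["i1", "i3"]}
print(average_recommendation_popularity(recs, {"i1": 100, "i2": 10, "i3": 1}))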
PopREO¶
class elliot.evaluation.metrics.bias.pop_reo.pop_reo.PopREO(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Popularity-based Ranking-based Equal Opportunity
This class represents the implementation of the Popularity-based Ranking-based Equal Opportunity (REO) recommendation metric.
For further details, please refer to the paper
\[\mathrm {REO}=\frac{{std}\left(P\left(R @ k \mid g=g_{1}, y=1\right), \ldots, P\left(R @ k \mid g=g_{A}, y=1\right)\right)} {{mean}\left(P\left(R @ k \mid g=g_{1}, y=1\right), \ldots, P\left(R @ k \mid g=g_{A}, y=1\right)\right)}\]
\(P\left(R @ k \mid g=g_{a}, y=1\right) = \frac{\sum_{u=1}^{N} \sum_{i=1}^{k} G_{g_{a}}\left(R_{u, i}\right) Y\left(u, R_{u, i}\right)} {\sum_{u=1}^{N} \sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i) Y(u, i)}\)
\(Y\left(u, R_{u, i}\right)\) is the ground-truth label of the user-item pair \((u, R_{u, i})\): it returns 1 if item \(R_{u, i}\) is liked by user u, 0 otherwise.
\(\sum_{i=1}^{k} G_{g_{a}}\left(R_{u, i}\right) Y\left(u, R_{u, i}\right)\) counts how many items in the test set from group \(g_a\) are ranked in the top-k for user u.
\(\sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i) Y(u, i)\) counts the total number of items from group \(g_a\) in the test set for user u.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [PopREO]
Extended PopREO¶
class elliot.evaluation.metrics.bias.pop_reo.extended_pop_reo.ExtendedPopREO(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Extended Popularity-based Ranking-based Equal Opportunity
This class represents the implementation of the Extended Popularity-based Ranking-based Equal Opportunity (REO) recommendation metric.
For further details, please refer to the paper
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: ExtendedPopREO
PopRSP¶
class elliot.evaluation.metrics.bias.pop_rsp.pop_rsp.PopRSP(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Popularity-based Ranking-based Statistical Parity
This class represents the implementation of the Popularity-based Ranking-based Statistical Parity (RSP) recommendation metric.
For further details, please refer to the paper
\[\mathrm {RSP}=\frac{{std}\left(P\left(R @ k \mid g=g_{1}\right), \ldots, P\left(R @ k \mid g=g_{A}\right)\right)} {{mean}\left(P\left(R @ k \mid g=g_{1}\right), \ldots, P\left(R @ k \mid g=g_{A}\right)\right)}\]
\(P(R @ k \mid g=g_{a}) = \frac{\sum_{u=1}^{N} \sum_{i=1}^{k} G_{g_{a}}(R_{u, i})} {\sum_{u=1}^{N} \sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i)}\)
\(\sum_{i=1}^{k} G_{g_{a}}(R_{u, i})\) calculates how many un-interacted items from group \(g_a\) are ranked in the top-k for user u.
\(\sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i)\) calculates how many un-interacted items belong to group \(g_a\) for user u.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [PopRSP]
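Once the per-group probabilities P(R@k | g) have been estimated, RSP is simply their standard deviation divided by their mean. A toy sketch with assumed probabilities for two popularity groups (illustrative values, not Elliot's code):

import numpy as np

def rsp(per_group_probs):
    # RSP = std(P(R@k|g_1), ..., P(R@k|g_A)) / mean(...); 0 means parity
    p = np.asarray(per_group_probs, dtype=float)
    return p.std() / p.mean()

print(rsp([0.30, 0.05]))  # popular vs. long-tail group (toy values)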
Extended PopRSP¶
class elliot.evaluation.metrics.bias.pop_rsp.extended_pop_rsp.ExtendedPopRSP(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Extended Popularity-based Ranking-based Statistical Parity
This class represents the implementation of the Extended Popularity-based Ranking-based Statistical Parity (RSP) recommendation metric.
For further details, please refer to the paper
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: ExtendedPopRSP
Fairness¶
Elliot integrates the following fairness metrics.
Summary¶
BiasDisparityBD: Bias Disparity - Standard
BiasDisparityBR: Bias Disparity - Bias Recommendations
BiasDisparityBS: Bias Disparity - Bias Source
ItemMADranking: Item MAD Ranking-based
ItemMADrating: Item MAD Rating-based
UserMADranking: User MAD Ranking-based
UserMADrating: User MAD Rating-based
REO: Ranking-based Equal Opportunity
RSP: Ranking-based Statistical Parity
BiasDisparity BD¶
class elliot.evaluation.metrics.fairness.BiasDisparity.BiasDisparityBD.BiasDisparityBD(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Bias Disparity - Standard
This class represents the implementation of the Bias Disparity recommendation metric.
For further details, please refer to the paper
\[\mathrm {BD(G, C)}=\frac{B_{R}(G, C)-B_{S}(G, C)}{B_{S}(G, C)}\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: BiasDisparityBD
    user_clustering_name: Happiness
    user_clustering_file: ../data/movielens_1m/u_happy.tsv
    item_clustering_name: ItemPopularity
    item_clustering_file: ../data/movielens_1m/i_pop.tsv
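The clustering files referenced above are plain TSV mappings. Assuming each row is an id followed by a cluster label, separated by a tab (an assumption to verify against your own data files), a loader sketch looks like this:

import csv

def load_clustering(path):
    # Assumed layout: one "id<TAB>cluster" pair per row
    with open(path, newline="") as f:
        return {row[0]: row[1] for row in csv.reader(f, delimiter="\t")}

user_groups = load_clustering("../data/movielens_1m/u_happy.tsv")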
BiasDisparity BR¶
class elliot.evaluation.metrics.fairness.BiasDisparity.BiasDisparityBR.BiasDisparityBR(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Bias Disparity - Bias Recommendations
This class represents the implementation of the Bias Disparity - Bias Recommendations recommendation metric.
For further details, please refer to the paper
\[\mathrm {BD(G, C)}=\frac{B_{R}(G, C)-B_{S}(G, C)}{B_{S}(G, C)}\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: BiasDisparityBR
    user_clustering_name: Happiness
    user_clustering_file: ../data/movielens_1m/u_happy.tsv
    item_clustering_name: ItemPopularity
    item_clustering_file: ../data/movielens_1m/i_pop.tsv
BiasDisparity BS¶
class elliot.evaluation.metrics.fairness.BiasDisparity.BiasDisparityBS.BiasDisparityBS(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Bias Disparity - Bias Source
This class represents the implementation of the Bias Disparity - Bias Source recommendation metric.
For further details, please refer to the paper
\[\mathrm {B_{S}(G, C)}=\frac{P R_{S}(G, C)}{P(C)}\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: BiasDisparityBS
    user_clustering_name: Happiness
    user_clustering_file: ../data/movielens_1m/u_happy.tsv
    item_clustering_name: ItemPopularity
    item_clustering_file: ../data/movielens_1m/i_pop.tsv
ItemMADranking¶
class elliot.evaluation.metrics.fairness.MAD.ItemMADranking.ItemMADranking(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Item MAD Ranking-based
This class represents the implementation of the Item MAD ranking recommendation metric.
For further details, please refer to the paper
\[\mathrm {MAD}={avg}_{i, j}({MAD}(R^{(i)}, R^{(j)}))\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: ItemMADranking
    clustering_name: ItemPopularity
    clustering_file: ../data/movielens_1m/i_pop.tsv
ItemMADrating¶
class elliot.evaluation.metrics.fairness.MAD.ItemMADrating.ItemMADrating(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Item MAD Rating-based
This class represents the implementation of the Item MAD rating recommendation metric.
For further details, please refer to the paper
\[\mathrm {MAD}={avg}_{i, j}({MAD}(R^{(i)}, R^{(j)}))\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: ItemMADrating
    clustering_name: ItemPopularity
    clustering_file: ../data/movielens_1m/i_pop.tsv
UserMADranking¶
class elliot.evaluation.metrics.fairness.MAD.UserMADranking.UserMADranking(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
User MAD Ranking-based
This class represents the implementation of the User MAD ranking recommendation metric.
For further details, please refer to the paper
\[\mathrm {MAD}={avg}_{i, j}({MAD}(R^{(i)}, R^{(j)}))\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: UserMADranking
    clustering_name: Happiness
    clustering_file: ../data/movielens_1m/u_happy.tsv
UserMADrating¶
class elliot.evaluation.metrics.fairness.MAD.UserMADrating.UserMADrating(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
User MAD Rating-based
This class represents the implementation of the User MAD rating recommendation metric.
For further details, please refer to the paper
\[\mathrm {MAD}={avg}_{i, j}({MAD}(R^{(i)}, R^{(j)}))\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: UserMADrating
    clustering_name: Happiness
    clustering_file: ../data/movielens_1m/u_happy.tsv
REO¶
class elliot.evaluation.metrics.fairness.reo.reo.REO(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Ranking-based Equal Opportunity
This class represents the implementation of the Ranking-based Equal Opportunity (REO) recommendation metric.
For further details, please refer to the paper
\[\mathrm {REO}=\frac{{std}\left(P\left(R @ k \mid g=g_{1}, y=1\right), \ldots, P\left(R @ k \mid g=g_{A}, y=1\right)\right)} {{mean}\left(P\left(R @ k \mid g=g_{1}, y=1\right), \ldots, P\left(R @ k \mid g=g_{A}, y=1\right)\right)}\]
\(P\left(R @ k \mid g=g_{a}, y=1\right) = \frac{\sum_{u=1}^{N} \sum_{i=1}^{k} G_{g_{a}}\left(R_{u, i}\right) Y\left(u, R_{u, i}\right)} {\sum_{u=1}^{N} \sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i) Y(u, i)}\)
\(Y\left(u, R_{u, i}\right)\) is the ground-truth label of the user-item pair \((u, R_{u, i})\): it returns 1 if item \(R_{u, i}\) is liked by user u, 0 otherwise.
\(\sum_{i=1}^{k} G_{g_{a}}\left(R_{u, i}\right) Y\left(u, R_{u, i}\right)\) counts how many items in the test set from group \(g_a\) are ranked in the top-k for user u.
\(\sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i) Y(u, i)\) counts the total number of items from group \(g_a\) in the test set for user u.
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: REO
    clustering_name: ItemPopularity
    clustering_file: ../data/movielens_1m/i_pop.tsv
RSP¶
class elliot.evaluation.metrics.fairness.rsp.rsp.RSP(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Ranking-based Statistical Parity
This class represents the implementation of the Ranking-based Statistical Parity (RSP) recommendation metric.
For further details, please refer to the paper
\[\mathrm {RSP}=\frac{{std}(P(R @ k \mid g=g_{1}), \ldots, P(R @ k \mid g=g_{A}))} {{mean}(P(R @ k \mid g=g_{1}), \ldots, P(R @ k \mid g=g_{A}))}\]
\(P(R @ k \mid g=g_{a}) = \frac{\sum_{u=1}^{N} \sum_{i=1}^{k} G_{g_{a}}(R_{u, i})} {\sum_{u=1}^{N} \sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i)}\)
\(\sum_{i=1}^{k} G_{g_{a}}(R_{u, i})\) calculates how many un-interacted items from group \(g_a\) are ranked in the top-k for user u.
\(\sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i)\) calculates how many un-interacted items belong to group \(g_a\) for user u.
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: RSP
    clustering_name: ItemPopularity
    clustering_file: ../data/movielens_1m/i_pop.tsv
elliot package¶
Subpackages¶
elliot.dataset package¶
Subpackages¶
elliot.dataset.dataloader package¶
Module description:
class elliot.dataset.dataloader.knowledge_aware_chains.KnowledgeChainsDataObject(config, data_tuple, side_information_data, *args, **kwargs)[source]¶
Bases: object
Load train and test dataset
class elliot.dataset.dataloader.knowledge_aware_chains.KnowledgeChainsLoader(config, *args, **kwargs)[source]¶
Bases: object
Load train and test dataset
load_dataset_dataframe(file_ratings, separator='\t', attribute_file=None, feature_file=None, properties_file=None, column_names=['userId', 'itemId', 'rating', 'timestamp'], additive=True, threshold=10)[source]¶
load_dataset_dict(file_ratings, separator='\t', attribute_file=None, feature_file=None, properties_file=None, additive=True, threshold=10)[source]¶
Module description:
class elliot.dataset.dataloader.visual_dataloader.VisualDataObject(config, data_tuple, side_information_data, *args, **kwargs)[source]¶
Bases: object
Load train and test dataset
elliot.dataset.samplers package¶
Submodules¶
elliot.dataset.abstract_dataset module¶
class elliot.dataset.abstract_dataset.AbstractDataset(*args, **kwargs)[source]¶
Bases: object
required_attributes = ['config', 'args', 'kwargs', 'users', 'items', 'num_users', 'num_items', 'private_users', 'public_users', 'private_items', 'public_items', 'transactions', 'train_dict', 'i_train_dict', 'sp_i_train', 'test_dict']¶
elliot.dataset.dataset module¶
Module description:
class elliot.dataset.dataset.DataSet(*args, **kwargs)[source]¶
Bases: elliot.dataset.abstract_dataset.AbstractDataset
Load train and test dataset
Module contents¶
Module description:
elliot.evaluation package¶
Subpackages¶
elliot.evaluation.metrics package¶
This is the implementation of the global AUC metric. It proceeds from a system-wise computation.
class elliot.evaluation.metrics.accuracy.AUC.auc.AUC(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Area Under the Curve
This class represents the implementation of the global AUC recommendation metric. Passing ‘AUC’ to the metrics list will enable the computation of the metric.
For further details, please refer to the AUC page
Note
This metric does not calculate group-based AUC, which considers the AUC scores averaged across users. It is also not limited to k. Instead, it calculates the scores on the entire prediction results regardless of the users.
\[\mathrm {AUC} = \frac{\sum\limits_{i=1}^M rank_{i} - \frac {{M} \times {(M+1)}}{2}} {{{M} \times {N}}}\]
\(M\) is the number of positive samples.
\(N\) is the number of negative samples.
\(rank_i\) is the ascending rank of the ith positive sample.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [AUC]
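The rank formula above can be implemented directly. A minimal NumPy sketch (illustrative only; ties between scores are not handled):

import numpy as np

def global_auc(scores, labels):
    order = np.argsort(scores)                    # ascending by score
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(scores) + 1)  # 1-based ascending ranks
    m = labels.sum()                              # number of positives
    n = len(labels) - m                           # number of negatives
    return (ranks[labels == 1].sum() - m * (m + 1) / 2) / (m * n)

scores = np.array([0.9, 0.3, 0.8, 0.1])
labels = np.array([1, 0, 1, 0])
print(global_auc(scores, labels))  # 1.0: every positive outranks every negative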
This is the implementation of the GroupAUC metric. It proceeds from a user-wise computation and averages the AUC values over the users.
class elliot.evaluation.metrics.accuracy.AUC.gauc.GAUC(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Group Area Under the Curve
This class represents the implementation of the GroupAUC recommendation metric. Passing ‘GAUC’ to the metrics list will enable the computation of the metric.
“Deep Interest Network for Click-Through Rate Prediction” KDD ‘18 by Zhou, et al.
For further details, please refer to the paper
Note
It calculates the AUC score of each user, and finally obtains GAUC by weighting the per-user AUC scores. It is also not limited to k. Due to our padding for scores_tensor in RankEvaluator with -np.inf, the padding value would influence the ranks of the original items. Therefore, we use a descending sort here and apply an identity transformation to the AUC formula, which is shown in the auc_ function. For readability, we did not simplify the code.
\[\mathrm {GAUC} = \frac {{{M} \times {(M+N+1)} - \frac{M \times (M+1)}{2}} - \sum\limits_{i=1}^M rank_{i}} {{M} \times {N}}\]
\(M\) is the number of positive samples.
\(N\) is the number of negative samples.
\(rank_i\) is the descending rank of the ith positive sample.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [GAUC]
This is the implementation of the Limited AUC metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.accuracy.AUC.lauc.LAUC(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Limited Area Under the Curve
This class represents the implementation of the Limited AUC recommendation metric. Passing ‘LAUC’ to the metrics list will enable the computation of the metric.
“Setting Goals and Choosing Metrics for Recommender System Evaluations” by Gunnar Schröder, et al.
For further details, please refer to the paper
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [LAUC]
This is the implementation of the Sørensen–Dice coefficient metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.accuracy.DSC.dsc.DSC(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Sørensen–Dice coefficient
This class represents the implementation of the Sørensen–Dice coefficient recommendation metric. Passing ‘DSC’ to the metrics list will enable the computation of the metric.
For further details, please refer to the page
\[\mathrm {DSC@K} = \frac{1+\beta^{2}}{\frac{1}{\text { metric_0@k }}+\frac{\beta^{2}}{\text { metric_1@k }}}\]
Parameters
beta – the beta coefficient (default: 1)
metric_0 – First considered metric (default: Precision)
metric_1 – Second considered metric (default: Recall)
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: DSC
    beta: 1
    metric_0: Precision
    metric_1: Recall
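Given two already-computed metric values at cutoff k, the DSC combination is a one-liner; an illustrative sketch with the default Precision/Recall pairing (beta=1 recovers the usual F1):

def dsc_at_k(metric_0, metric_1, beta=1.0):
    # Sorensen-Dice combination of two metric values at the same cutoff
    if metric_0 == 0 or metric_1 == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) / (1 / metric_0 + b2 / metric_1)

print(dsc_at_k(0.5, 0.25))  # 0.333..., the F1 of Precision=0.5, Recall=0.25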
This is the implementation of the F-score metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.accuracy.f1.extended_f1.ExtendedF1(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Extended F-Measure
This class represents the implementation of the F-score recommendation metric. Passing ‘ExtendedF1’ to the metrics list will enable the computation of the metric.
“Evaluating Recommender Systems” Gunawardana, Asela and Shani, Guy, In Recommender systems handbook pages 265–308, 2015
For further details, please refer to the paper
\[\mathrm {ExtendedF1@K} =\frac{2}{\frac{1}{\text { metric_0@k }}+\frac{1}{\text { metric_1@k }}}\]
Parameters
metric_0 – First considered metric (default: Precision)
metric_1 – Second considered metric (default: Recall)
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: ExtendedF1
    metric_0: Precision
    metric_1: Recall
This is the implementation of the F-score metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.accuracy.f1.f1.F1(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
F-Measure
This class represents the implementation of the F-score recommendation metric. Passing ‘F1’ to the metrics list will enable the computation of the metric.
For further details, please refer to the paper
\[\mathrm {F1@K} = \frac{1+\beta^{2}}{\frac{1}{\text { precision@k }}+\frac{\beta^{2}}{\text { recall@k }}}\]
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [F1]
This is the implementation of the Hit Rate metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.accuracy.hit_rate.hit_rate.HR(recommendations: Dict[int, List[Tuple[int, float]]], config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Hit Rate
This class represents the implementation of the Hit Rate recommendation metric. Passing ‘HR’ to the metrics list will enable the computation of the metric.
For further details, please refer to the link
\[\mathrm {HR@K} =\frac{Number \space of \space Hits @K}{|GT|}\]
\(HR\) is the number of users with a positive sample in the recommendation list.
\(GT\) is the total number of samples in the test set.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [HR]
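A minimal sketch matching the formula, where the hypothetical structures recs and test_items hold per-user top lists and held-out ground truth:

def hit_rate_at_k(recs, test_items, k):
    # recs: dict user -> ranked list of item ids
    # test_items: dict user -> set of relevant held-out items (GT)
    hits = sum(len(set(items[:k]) & test_items[u]) for u, items in recs.items())
    gt = sum(len(v) for v in test_items.values())
    return hits / gt

recs = {"u1": ["i1", "i2", "i3"], "u2": ["i4", "i5", "i6"]}
test = {"u1": {"i2"}, "u2": {"i9"}}
print(hit_rate_at_k(recs, test, 3))  # 0.5: one of two test items is hit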
This is the implementation of the Mean Average Precision metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.accuracy.map.map.MAP(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Mean Average Precision
This class represents the implementation of the Mean Average Precision recommendation metric. Passing ‘MAP’ to the metrics list will enable the computation of the metric.
For further details, please refer to the link
Note
In this case the normalization factor used is \(\frac{1}{\min (m,N)}\), which prevents your AP score from being unfairly suppressed when your number of recommendations couldn’t possibly capture all the correct ones.
\[\begin{split}\begin{align*} \mathrm{AP@N} &= \frac{1}{\mathrm{min}(m,N)}\sum_{k=1}^N P(k) \cdot rel(k) \\ \mathrm{MAP@N}& = \frac{1}{|U|}\sum_{u=1}^{|U|}(\mathrm{AP@N})_u \end{align*}\end{split}\]
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [MAP]
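A per-user AP@N sketch with the min(m, N) normalizer mentioned in the note, averaged into MAP@N; the recs and test_items structures are hypothetical, as in the earlier sketches:

def average_precision_at_n(ranked, relevant, n):
    hits, score = 0, 0.0
    for k, item in enumerate(ranked[:n], start=1):
        if item in relevant:
            hits += 1
            score += hits / k          # P(k) * rel(k)
    return score / min(len(relevant), n) if relevant else 0.0

def map_at_n(recs, test_items, n):
    # MAP@N: mean of the per-user AP@N values
    aps = [average_precision_at_n(items, test_items[u], n) for u, items in recs.items()]
    return sum(aps) / len(aps)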
This is the implementation of the Mean Average Recall metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.accuracy.mar.mar.MAR(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Mean Average Recall
This class represents the implementation of the Mean Average Recall recommendation metric. Passing ‘MAR’ to the metrics list will enable the computation of the metric.
For further details, please refer to the link
\[\begin{split}\begin{align*} \mathrm{Recall@N} &= \frac{1}{\mathrm{min}(m,|rel(k)|)}\sum_{k=1}^N P(k) \cdot rel(k) \\ \mathrm{MAR@N}& = \frac{1}{|U|}\sum_{u=1}^{|U|}(\mathrm{Recall@N})_u \end{align*}\end{split}\]
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [MAR]
This is the implementation of the Mean Reciprocal Rank metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.accuracy.mrr.mrr.MRR(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Mean Reciprocal Rank
This class represents the implementation of the Mean Reciprocal Rank recommendation metric. Passing ‘MRR’ to the metrics list will enable the computation of the metric.
For further details, please refer to the link
\[\mathrm {MRR} = \frac{1}{|{U}|} \sum_{i=1}^{|{U}|} \frac{1}{rank_i}\]
\(U\) is the number of users, \(rank_i\) is the rank of the first item in the recommendation list in the test set results for user \(i\).
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [MRR]
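A direct sketch of the definition: the reciprocal rank of the first relevant item, averaged over users (0 when a user has no hit in the list); the structures are hypothetical, as above:

def mrr(recs, test_items):
    reciprocal_ranks = []
    for u, items in recs.items():
        rank = next((pos for pos, item in enumerate(items, start=1)
                     if item in test_items[u]), None)
        reciprocal_ranks.append(1.0 / rank if rank else 0.0)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)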
This is the implementation of the normalized Discounted Cumulative Gain metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.accuracy.ndcg.ndcg.NDCG(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
normalized Discounted Cumulative Gain
This class represents the implementation of the nDCG recommendation metric.
For further details, please refer to the link
\[\begin{split}\begin{gather} \mathrm {DCG@K}=\sum_{i=1}^{K} \frac{2^{rel_i}-1}{\log_{2}{(i+1)}}\\ \mathrm {IDCG@K}=\sum_{i=1}^{K}\frac{1}{\log_{2}{(i+1)}}\\ \mathrm {NDCG_u@K}=\frac{DCG_u@K}{IDCG_u@K}\\ \mathrm {NDCG@K}=\frac{\sum \nolimits_{u \in U^{te}} \mathrm{NDCG_u@K}}{|U^{te}|} \end{gather}\end{split}\]
\(K\) stands for recommending \(K\) items.
\(rel_i\) is the relevance of the item in position \(i\) of the recommendation list.
\(2^{rel_i}-1\) equals 1 if the item is a hit, 0 otherwise.
\(U^{te}\) is the set of all users in the test set.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [nDCG]
compute_idcg(user, cutoff: int) → float[source]¶
Method to compute the Ideal Discounted Cumulative Gain.
compute_user_ndcg(user_recommendations: List, user, cutoff: int) → float[source]¶
Method to compute the normalized Discounted Cumulative Gain.
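With binary relevance, the DCG/IDCG definitions above reduce to the following illustrative sketch (Elliot also supports graded relevance through its relevance package, documented later on this page):

import math

def ndcg_at_k(ranked, relevant, k):
    dcg = sum(1.0 / math.log2(pos + 1)        # (2^rel - 1) is 1 on a hit
              for pos, item in enumerate(ranked[:k], start=1) if item in relevant)
    ideal_hits = min(len(relevant), k)
    idcg = sum(1.0 / math.log2(pos + 1) for pos in range(1, ideal_hits + 1))
    return dcg / idcg if idcg > 0 else 0.0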
This is the nDCG metric module.
This module contains and exposes the recommendation metric.
This is the implementation of the Precision metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.accuracy.precision.precision.Precision(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Precision-measure
This class represents the implementation of the Precision recommendation metric.
For further details, please refer to the link
\[\mathrm {Precision@K} = \frac{|Rel_u \cap Rec_u|}{|Rec_u|}\]
\(Rel_u\) is the set of items relevant to user \(u\),
\(Rec_u\) is the top-K items recommended to user \(u\).
We obtain the result by calculating the average \(Precision@K\) over all users.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [Precision]
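For a single user, both Precision@K and its companion Recall@K (described next) are small set operations; a sketch, assuming the recommender returns exactly K items per user:

def precision_recall_at_k(ranked, relevant, k):
    top_k = set(ranked[:k])
    hits = len(top_k & relevant)
    return hits / k, hits / len(relevant)

p, r = precision_recall_at_k(["i1", "i2", "i3"], {"i2", "i9"}, k=3)
print(p, r)  # 0.333..., 0.5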
This is the Precision metric module.
This module contains and exposes the recommendation metric.
This is the implementation of the Recall metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.accuracy.recall.recall.Recall(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Recall-measure
This class represents the implementation of the Recall recommendation metric.
For further details, please refer to the link
\[\mathrm {Recall@K} = \frac{|Rel_u\cap Rec_u|}{|Rel_u|}\]
\(Rel_u\) is the set of items relevant to user \(u\),
\(Rec_u\) is the top-K items recommended to user \(u\).
We obtain the result by calculating the average \(Recall@K\) over all users.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [Recall]
This is the Recall metric implementation.
This module contains and exposes the recommendation metric.
This is the implementation of the Average coverage of long tail items metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.bias.aclt.aclt.ACLT(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Average coverage of long tail items
This class represents the implementation of the Average coverage of long tail items recommendation metric.
For further details, please refer to the paper
\[\mathrm {ACLT}=\frac{1}{\left|U_{t}\right|} \sum_{u \in U_{t}} \sum_{i \in L_{u}} 1(i \in \Gamma)\]
\(U_{t}\) is the set of users in the test set.
\(L_{u}\) is the recommended list of items for user u.
\(1(i \in \Gamma)\) is an indicator function that equals 1 when i belongs to the long-tail set \(\Gamma\).
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [ACLT]
This is the implementation of the Average percentage of long tail items metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.bias.aplt.aplt.APLT(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Average percentage of long tail items
This class represents the implementation of the Average percentage of long tail items recommendation metric.
For further details, please refer to the paper
\[\mathrm {APLT}=\frac{1}{\left|U_{t}\right|} \sum_{u \in U_{t}} \frac{|\{i \mid i \in(L_{u} \cap \sim \Phi)\}|}{|L_{u}|}\]
\(U_{t}\) is the set of users in the test set.
\(L_{u}\) is the recommended list of items for user u.
\(\sim \Phi\) is the set of long-tail items.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [APLT]
This is the implementation of the Average Recommendation Popularity metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.bias.arp.arp.ARP(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Average Recommendation Popularity
This class represents the implementation of the Average Recommendation Popularity recommendation metric.
For further details, please refer to the paper
\[\mathrm {ARP}=\frac{1}{\left|U_{t}\right|} \sum_{u \in U_{t}} \frac{\sum_{i \in L_{u}} \phi(i)}{\left|L_{u}\right|}\]
\(U_{t}\) is the set of users in the test set.
\(L_{u}\) is the recommended list of items for user u.
\(\phi(i)\) is the popularity (number of training interactions) of item i.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [ARP]
This is the implementation of the Popularity-based Ranking-based Equal Opportunity (REO) metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.bias.pop_reo.extended_pop_reo.ExtendedPopREO(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Extended Popularity-based Ranking-based Equal Opportunity
This class represents the implementation of the Extended Popularity-based Ranking-based Equal Opportunity (REO) recommendation metric.
For further details, please refer to the paper
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: ExtendedPopREO
This is the implementation of the Popularity-based Ranking-based Equal Opportunity (REO) metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.bias.pop_reo.pop_reo.PopREO(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Popularity-based Ranking-based Equal Opportunity
This class represents the implementation of the Popularity-based Ranking-based Equal Opportunity (REO) recommendation metric.
For further details, please refer to the paper
\[\mathrm {REO}=\frac{{std}\left(P\left(R @ k \mid g=g_{1}, y=1\right), \ldots, P\left(R @ k \mid g=g_{A}, y=1\right)\right)} {{mean}\left(P\left(R @ k \mid g=g_{1}, y=1\right), \ldots, P\left(R @ k \mid g=g_{A}, y=1\right)\right)}\]
\(P\left(R @ k \mid g=g_{a}, y=1\right) = \frac{\sum_{u=1}^{N} \sum_{i=1}^{k} G_{g_{a}}\left(R_{u, i}\right) Y\left(u, R_{u, i}\right)} {\sum_{u=1}^{N} \sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i) Y(u, i)}\)
\(Y\left(u, R_{u, i}\right)\) is the ground-truth label of the user-item pair \((u, R_{u, i})\): it returns 1 if item \(R_{u, i}\) is liked by user u, 0 otherwise.
\(\sum_{i=1}^{k} G_{g_{a}}\left(R_{u, i}\right) Y\left(u, R_{u, i}\right)\) counts how many items in the test set from group \(g_a\) are ranked in the top-k for user u.
\(\sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i) Y(u, i)\) counts the total number of items from group \(g_a\) in the test set for user u.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [PopREO]
This is the implementation of the Popularity-based Ranking-based Statistical Parity (RSP) metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.bias.pop_rsp.extended_pop_rsp.ExtendedPopRSP(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Extended Popularity-based Ranking-based Statistical Parity
This class represents the implementation of the Extended Popularity-based Ranking-based Statistical Parity (RSP) recommendation metric.
For further details, please refer to the paper
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: ExtendedPopRSP
This is the implementation of the Popularity-based Ranking-based Statistical Parity (RSP) metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.bias.pop_rsp.pop_rsp.PopRSP(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Popularity-based Ranking-based Statistical Parity
This class represents the implementation of the Popularity-based Ranking-based Statistical Parity (RSP) recommendation metric.
For further details, please refer to the paper
\[\mathrm {RSP}=\frac{{std}\left(P\left(R @ k \mid g=g_{1}\right), \ldots, P\left(R @ k \mid g=g_{A}\right)\right)} {{mean}\left(P\left(R @ k \mid g=g_{1}\right), \ldots, P\left(R @ k \mid g=g_{A}\right)\right)}\]
\(P(R @ k \mid g=g_{a}) = \frac{\sum_{u=1}^{N} \sum_{i=1}^{k} G_{g_{a}}(R_{u, i})} {\sum_{u=1}^{N} \sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i)}\)
\(\sum_{i=1}^{k} G_{g_{a}}\left(R_{u, i}\right)\) calculates how many un-interacted items from group \(g_a\) are ranked in the top-k for user u.
\(\sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i)\) calculates how many un-interacted items belong to group \(g_a\) for user u.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [PopRSP]
This is the implementation of the Item Coverage metric. It directly proceeds from a system-wise computation, and it considers all the users at the same time.
class elliot.evaluation.metrics.coverage.item_coverage.item_coverage.ItemCoverage(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Item Coverage
This class represents the implementation of the Item Coverage recommendation metric.
For further details, please refer to the book
Note
The simplest measure of catalog coverage is the percentage of all items that can ever be recommended. This measure can be computed in many cases directly given the algorithm and the input data set.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [ItemCoverage]
This is the Item Coverage metric module.
This module contains and exposes the recommendation metric.
This is the implementation of the NumRetrieved metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.coverage.num_retrieved.num_retrieved.NumRetrieved(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Number of Recommendations Retrieved
This class represents the implementation of the Number of Recommendations Retrieved recommendation metric.
For further details, please refer to the link
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [NumRetrieved]
This is the implementation of the User Coverage metric. It directly proceeds from a system-wise computation, and it considers all the users at the same time.
class elliot.evaluation.metrics.coverage.user_coverage.user_coverage.UserCoverage(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
User Coverage
This class represents the implementation of the User Coverage recommendation metric.
For further details, please refer to the book
Note
The proportion of users or user interactions for which the system can recommend items. In many applications the recommender may not provide recommendations for some users due to, e.g., low confidence in the accuracy of predictions for that user.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [UserCoverage]
This is the implementation of the User Coverage metric. It directly proceeds from a system-wise computation, and it considers all the users at the same time.
class elliot.evaluation.metrics.coverage.user_coverage.user_coverage_at_n.UserCoverageAtN(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
User Coverage on Top-N rec. Lists
This class represents the implementation of the User Coverage recommendation metric.
For further details, please refer to the book
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [UserCoverageAtN]
This is the implementation of the SRecall metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.diversity.SRecall.srecall.SRecall(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Subtopic Recall
This class represents the implementation of the Subtopic Recall (S-Recall) recommendation metric.
For further details, please refer to the paper
\[\mathrm {SRecall}=\frac{\left|\cup_{i=1}^{K} {subtopics}\left(d_{i}\right)\right|}{n_{A}}\]
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [SRecall]
This is the implementation of the Gini Index metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.diversity.gini_index.gini_index.GiniIndex(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Gini Index
This class represents the implementation of the Gini Index recommendation metric.
For further details, please refer to the book
\[\mathrm {GiniIndex}=\frac{1}{n-1} \sum_{j=1}^{n}(2 j-n-1) p\left(i_{j}\right)\]
\(i_{j}\) is the j-th item when the items are ordered by increasing p(i).
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [Gini]
This is the implementation of the Shannon Entropy metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.diversity.shannon_entropy.shannon_entropy.ShannonEntropy(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Shannon Entropy
This class represents the implementation of the Shannon Entropy recommendation metric.
For further details, please refer to the book
\[\mathrm {ShannonEntropy}=-\sum_{i=1}^{n} p(i) \log p(i)\]
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [SEntropy]
This is the implementation of the Bias Disparity metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.fairness.BiasDisparity.BiasDisparityBD.BiasDisparityBD(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Bias Disparity - Standard
This class represents the implementation of the Bias Disparity recommendation metric.
For further details, please refer to the paper
\[\mathrm {BD(G, C)}=\frac{B_{R}(G, C)-B_{S}(G, C)}{B_{S}(G, C)}\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: BiasDisparityBD
    user_clustering_name: Happiness
    user_clustering_file: ../data/movielens_1m/u_happy.tsv
    item_clustering_name: ItemPopularity
    item_clustering_file: ../data/movielens_1m/i_pop.tsv
This is the implementation of the Bias Disparity - Bias Recommendations metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.fairness.BiasDisparity.BiasDisparityBR.BiasDisparityBR(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Bias Disparity - Bias Recommendations
This class represents the implementation of the Bias Disparity - Bias Recommendations recommendation metric.
For further details, please refer to the paper
\[\mathrm {BD(G, C)}=\frac{B_{R}(G, C)-B_{S}(G, C)}{B_{S}(G, C)}\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: BiasDisparityBR
    user_clustering_name: Happiness
    user_clustering_file: ../data/movielens_1m/u_happy.tsv
    item_clustering_name: ItemPopularity
    item_clustering_file: ../data/movielens_1m/i_pop.tsv
This is the implementation of the Bias Disparity - Bias Source metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.fairness.BiasDisparity.BiasDisparityBS.BiasDisparityBS(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Bias Disparity - Bias Source
This class represents the implementation of the Bias Disparity - Bias Source recommendation metric.
For further details, please refer to the paper
\[\mathrm {B_{S}(G, C)}=\frac{P R_{S}(G, C)}{P(C)}\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: BiasDisparityBS
    user_clustering_name: Happiness
    user_clustering_file: ../data/movielens_1m/u_happy.tsv
    item_clustering_name: ItemPopularity
    item_clustering_file: ../data/movielens_1m/i_pop.tsv
This is the implementation of the Item MAD ranking metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.fairness.MAD.ItemMADranking.ItemMADranking(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Item MAD Ranking-based
This class represents the implementation of the Item MAD ranking recommendation metric.
For further details, please refer to the paper
\[\mathrm {MAD}={avg}_{i, j}({MAD}(R^{(i)}, R^{(j)}))\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: ItemMADranking
    clustering_name: ItemPopularity
    clustering_file: ../data/movielens_1m/i_pop.tsv
This is the implementation of the Item MAD rating metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.fairness.MAD.ItemMADrating.ItemMADrating(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Item MAD Rating-based
This class represents the implementation of the Item MAD rating recommendation metric.
For further details, please refer to the paper
\[\mathrm {MAD}={avg}_{i, j}({MAD}(R^{(i)}, R^{(j)}))\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: ItemMADrating
    clustering_name: ItemPopularity
    clustering_file: ../data/movielens_1m/i_pop.tsv
This is the implementation of the User MAD ranking metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.fairness.MAD.UserMADranking.UserMADranking(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
User MAD Ranking-based
This class represents the implementation of the User MAD ranking recommendation metric.
For further details, please refer to the paper
\[\mathrm {MAD}={avg}_{i, j}({MAD}(R^{(i)}, R^{(j)}))\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: UserMADranking
    clustering_name: Happiness
    clustering_file: ../data/movielens_1m/u_happy.tsv
compute_idcg(user: int, cutoff: int) → float[source]¶
Method to compute the Ideal Discounted Cumulative Gain.
This is the implementation of the User MAD rating metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.fairness.MAD.UserMADrating.UserMADrating(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
User MAD Rating-based
This class represents the implementation of the User MAD rating recommendation metric.
For further details, please refer to the paper
\[\mathrm {MAD}={avg}_{i, j}({MAD}(R^{(i)}, R^{(j)}))\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: UserMADrating
    clustering_name: Happiness
    clustering_file: ../data/movielens_1m/u_happy.tsv
This is the Precision metric module.
This module contains and exposes the recommendation metric.
This is the implementation of the Ranking-based Equal Opportunity (REO) metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.fairness.reo.reo.REO(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Ranking-based Equal Opportunity
This class represents the implementation of the Ranking-based Equal Opportunity (REO) recommendation metric.
For further details, please refer to the paper
\[\mathrm {REO}=\frac{{std}\left(P\left(R @ k \mid g=g_{1}, y=1\right), \ldots, P\left(R @ k \mid g=g_{A}, y=1\right)\right)} {{mean}\left(P\left(R @ k \mid g=g_{1}, y=1\right), \ldots, P\left(R @ k \mid g=g_{A}, y=1\right)\right)}\]
\(P\left(R @ k \mid g=g_{a}, y=1\right) = \frac{\sum_{u=1}^{N} \sum_{i=1}^{k} G_{g_{a}}\left(R_{u, i}\right) Y\left(u, R_{u, i}\right)} {\sum_{u=1}^{N} \sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i) Y(u, i)}\)
\(Y\left(u, R_{u, i}\right)\) is the ground-truth label of the user-item pair \((u, R_{u, i})\): it returns 1 if item \(R_{u, i}\) is liked by user u, 0 otherwise.
\(\sum_{i=1}^{k} G_{g_{a}}\left(R_{u, i}\right) Y\left(u, R_{u, i}\right)\) counts how many items in the test set from group \(g_a\) are ranked in the top-k for user u.
\(\sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i) Y(u, i)\) counts the total number of items from group \(g_a\) in the test set for user u.
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: REO
    clustering_name: ItemPopularity
    clustering_file: ../data/movielens_1m/i_pop.tsv
This is the implementation of the Ranking-based Statistical Parity (RSP) metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.fairness.rsp.rsp.RSP(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Ranking-based Statistical Parity
This class represents the implementation of the Ranking-based Statistical Parity (RSP) recommendation metric.
For further details, please refer to the paper
\[\mathrm {RSP}=\frac{{std}(P(R @ k \mid g=g_{1}), \ldots, P(R @ k \mid g=g_{A}))} {{mean}(P(R @ k \mid g=g_{1}), \ldots, P(R @ k \mid g=g_{A}))}\]
\(P(R @ k \mid g=g_{a}) = \frac{\sum_{u=1}^{N} \sum_{i=1}^{k} G_{g_{a}}(R_{u, i})} {\sum_{u=1}^{N} \sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i)}\)
\(\sum_{i=1}^{k} G_{g_{a}}(R_{u, i})\) calculates how many un-interacted items from group \(g_a\) are ranked in the top-k for user u.
\(\sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i)\) calculates how many un-interacted items belong to group \(g_a\) for user u.
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: RSP
    clustering_name: ItemPopularity
    clustering_file: ../data/movielens_1m/i_pop.tsv
This is the implementation of the Expected Free Discovery metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.novelty.EFD.efd.EFD(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Expected Free Discovery (EFD)
This class represents the implementation of the Expected Free Discovery recommendation metric.
For further details, please refer to the paper
Note
EFD can be read as the expected ICF of seen recommended items
\[\mathrm {EFD}=C \sum_{i_{k} \in R} {disc}(k) p({rel} \mid i_{k}, u)( -\log _{2} p(i \mid {seen}, \theta))\]
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [EFD]
This is the implementation of the Expected Free Discovery metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.novelty.EFD.extended_efd.ExtendedEFD(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Extended EFD
This class represents the implementation of the Extended Expected Free Discovery recommendation metric.
For further details, please refer to the paper
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: ExtendedEFD
This is the implementation of the Expected Popularity Complement metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.novelty.EPC.epc.EPC(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Expected Popularity Complement (EPC)
This class represents the implementation of the Expected Popularity Complement recommendation metric.
For further details, please refer to the paper
Note
EPC can be read as the expected number of relevant recommended items not previously seen by the user
\[\mathrm{EPC}=C \sum_{i_{k} \in R} \operatorname{disc}(k) p\left({rel} \mid i_{k}, u\right)\left(1-p\left(\operatorname{seen} \mid i_{k}\right)\right)\]
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [EPC]
This is the implementation of the Expected Popularity Complement metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.novelty.EPC.extended_epc.ExtendedEPC(recommendations, config, params, eval_objects, additional_data)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Extended EPC
This class represents the implementation of the Extended EPC recommendation metric.
For further details, please refer to the paper
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: ExtendedEPC
This is the implementation of the Mean Absolute Error metric. It proceeds from a system-wise computation.
class elliot.evaluation.metrics.rating.mae.mae.MAE(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Mean Absolute Error
This class represents the implementation of the Mean Absolute Error recommendation metric.
For further details, please refer to the link
\[\mathrm{MAE}=\frac{1}{|{T}|} \sum_{(u, i) \in {T}}\left|\hat{r}_{u i}-r_{u i}\right|\]
\(T\) is the test set, \(\hat{r}_{u i}\) is the score predicted by the model,
\(r_{u i}\) the actual score of the test set.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [MAE]
This is the implementation of the Mean Squared Error metric. It proceeds from a system-wise computation.
class elliot.evaluation.metrics.rating.mse.mse.MSE(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Mean Squared Error
This class represents the implementation of the Mean Squared Error recommendation metric.
For further details, please refer to the link
\[\mathrm{MSE} = \frac{1}{|{T}|} \sum_{(u, i) \in {T}}(\hat{r}_{u i}-r_{u i})^{2}\]
\(T\) is the test set, \(\hat{r}_{u i}\) is the score predicted by the model
\(r_{u i}\) the actual score of the test set.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [MSE]
This is the implementation of the Root Mean Squared Error metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.rating.rmse.rmse.RMSE(recommendations, config, params, eval_objects)[source]¶
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Root Mean Squared Error
This class represents the implementation of the Root Mean Squared Error recommendation metric.
For further details, please refer to the link
\[\mathrm{RMSE} = \sqrt{\frac{1}{|{T}|} \sum_{(u, i) \in {T}}(\hat{r}_{u i}-r_{u i})^{2}}\]
\(T\) is the test set, \(\hat{r}_{u i}\) is the score predicted by the model
\(r_{u i}\) the actual score of the test set.
To compute the metric, add it to the config file adopting the following pattern:
simple_metrics: [RMSE]
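The three rating-prediction metrics above share the same error terms; a compact illustrative sketch over (predicted, actual) pairs from the test set:

import math

def mae_mse_rmse(pairs):
    errors = [pred - actual for pred, actual in pairs]
    mae = sum(abs(e) for e in errors) / len(errors)
    mse = sum(e * e for e in errors) / len(errors)
    return mae, mse, math.sqrt(mse)

print(mae_mse_rmse([(3.5, 4.0), (2.0, 2.0), (5.0, 4.5)]))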
This is the implementation of the Precision metric. It proceeds from a user-wise computation and averages the values over the users.
class elliot.evaluation.metrics.base_metric.BaseMetric(recommendations, config, params, evaluation_objects, additional_data=None)[source]¶
Bases: abc.ABC
This class represents the abstract base class that every recommendation metric extends.
class elliot.evaluation.metrics.metrics_utils.ProxyMetric(name='ProxyMetric', val=0, needs_full_recommendations=False)[source]¶
This is the implementation of the Precision metric. It proceeds from a user-wise computation and averages the values over the users.
This is the metrics’ module.
This module contains and exposes the recommendation metrics. Each metric is encapsulated in a specific package.
See the implementation of Precision metric for creating new per-user metrics. See the implementation of Item Coverage for creating new cross-user metrics.
elliot.evaluation.popularity_utils package¶
Module description: This module provides a popularity class based on the number of users who have experienced an item (user-item repetitions in the dataset are counted once).
elliot.evaluation.relevance package¶
Module description:
class elliot.evaluation.relevance.relevance.BinaryRelevance(test, rel_threshold)[source]¶
Bases: elliot.evaluation.relevance.relevance.AbstractRelevanceSingleton

class elliot.evaluation.relevance.relevance.DiscountedRelevance(test, rel_threshold)[source]¶
Bases: elliot.evaluation.relevance.relevance.AbstractRelevanceSingleton
Module description:
Submodules¶
elliot.evaluation.evaluator module¶
Module description:
class elliot.evaluation.evaluator.Evaluator(data: elliot.dataset.dataset.DataSet, params: types.SimpleNamespace)[source]¶
Bases: object
elliot.evaluation.statistical_significance module¶
Module description:
Module contents¶
Module description:
elliot.hyperoptimization package¶
Submodules¶
elliot.hyperoptimization.model_coordinator module¶
Module description:
class elliot.hyperoptimization.model_coordinator.ModelCoordinator(data_objs, base: types.SimpleNamespace, params, model_class: ClassVar)[source]¶
Bases: object
This class handles the selection of hyperparameters for the hyperparameter tuning realized with HyperOpt.
objective(args)[source]¶ This function respects the signature and the return format required for HyperOpt optimization.
- Parameters
args – a dictionary containing the new hyperparameter values to be used in the current run
- Returns
a dictionary with loss and status (required by HyperOpt), plus params and results (required by the framework)
single()[source]¶ This function respects the return format required for HyperOpt optimization, evaluating a single configuration of hyperparameter values.
- Returns
a dictionary with loss and status (required by HyperOpt), plus params and results (required by the framework)
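To clarify the contract these two methods satisfy, here is a minimal standalone HyperOpt sketch (independent of Elliot) whose objective returns the same four keys, loss and status for HyperOpt plus params and results for the caller:

from hyperopt import STATUS_OK, fmin, hp, tpe

def objective(args):
    # in Elliot the model would be trained and validated here;
    # we fake a validation score that peaks at lr = 0.01
    lr = args["lr"]
    validation_score = 1.0 / (1.0 + abs(lr - 0.01))
    return {
        "loss": -validation_score,  # HyperOpt minimizes, so negate the metric
        "status": STATUS_OK,        # required by HyperOpt
        "params": args,             # consumed by the caller
        "results": {"validation": validation_score},
    }

best = fmin(fn=objective, space={"lr": hp.loguniform("lr", -7, 0)},
            algo=tpe.suggest, max_evals=20)
print(best)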
elliot.namespace package¶
Submodules¶
elliot.namespace.namespace_model module¶
Module description:
elliot.namespace.namespace_model_builder module¶
Module description:
Module contents¶
Module description:
elliot.prefiltering package¶
Submodules¶
elliot.prefiltering.standard_prefilters module¶
Module contents¶
Module description:
elliot.recommender package¶
Subpackages¶
elliot.recommender.NN package¶
Module description:
class elliot.recommender.NN.attribute_item_knn.attribute_item_knn.AttributeItemKNN(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Attribute Item-kNN proposed in MyMediaLite Recommender System Library
For further details, please refer to the paper
- Parameters
neighbors – Number of item neighbors
similarity – Similarity function
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  AttributeItemKNN:
    meta:
      save_recs: True
    neighbors: 40
    similarity: cosine
property name¶
Module description:
class elliot.recommender.NN.attribute_user_knn.attribute_user_knn.AttributeUserKNN(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Attribute User-kNN proposed in MyMediaLite Recommender System Library
For further details, please refer to the paper
- Parameters
neighbors – Number of user neighbors
similarity – Similarity function
profile – Profile type ('binary', 'tfidf')
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  AttributeUserKNN:
    meta:
      save_recs: True
    neighbors: 40
    similarity: cosine
    profile: binary
property name¶
Created on 23/10/17 @author: Maurizio Ferrari Dacrema
class elliot.recommender.NN.item_knn.aiolli_ferrari.AiolliSimilarity(data, maxk=40, shrink=100, similarity='cosine', normalize=True)[source]¶
Bases: object

class elliot.recommender.NN.item_knn.aiolli_ferrari.Compute_Similarity(dataMatrix, topK=100, shrink=0, normalize=True, asymmetric_alpha=0.5, tversky_alpha=1.0, tversky_beta=1.0, similarity='cosine', row_weights=None)[source]¶
Bases: object
applyAdjustedCosine()[source]¶ Removes from every data point the average of the corresponding row.
applyPearsonCorrelation()[source]¶ Removes from every data point the average of the corresponding column.
elliot.recommender.NN.item_knn.aiolli_ferrari.check_matrix(X, format='csc', dtype=<class 'numpy.float32'>)[source]¶
This function takes a matrix as input and transforms it into the specified format. The input matrix can be either sparse or an ndarray. If the input matrix already has the desired format, it is returned as-is; the dtype parameter is always applied, and the default is np.float32.
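For instance, a dense rating matrix is typically converted to a compressed sparse format before the similarity computation; this standalone SciPy sketch mirrors what such a helper does:

import numpy as np
import scipy.sparse as sps

X = np.random.rand(4, 3)                     # dense user-item matrix
X_csc = sps.csc_matrix(X, dtype=np.float32)  # requested format, dtype enforced
print(type(X_csc).__name__, X_csc.dtype)     # csc_matrix float32
# a matrix that is already CSC would only be cast: X_csc.astype(np.float32)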
Module description:
class elliot.recommender.NN.item_knn.item_knn.ItemKNN(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Amazon.com recommendations: item-to-item collaborative filtering
For further details, please refer to the paper
- Parameters
neighbors – Number of item neighbors
similarity – Similarity function
implementation – Implementation type ('aiolli', 'classical')
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  ItemKNN:
    meta:
      save_recs: True
    neighbors: 40
    similarity: cosine
    implementation: aiolli
property name¶
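These per-model snippets live under the models section of a complete configuration file. As a rough sketch (the dataset name, paths, and splitting values below are placeholders; see the configuration documentation for the authoritative schema), a full experiment could look like:

experiment:
  dataset: movielens_1m
  data_config:
    strategy: dataset
    dataset_path: ../data/movielens_1m/dataset.tsv
  splitting:
    test_splitting:
      strategy: random_subsampling
      test_ratio: 0.2
  top_k: 10
  evaluation:
    simple_metrics: [nDCG, MAE]
  models:
    ItemKNN:
      meta:
        save_recs: True
      neighbors: 40
      similarity: cosine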
Created on 23/10/17 @author: Maurizio Ferrari Dacrema
class elliot.recommender.NN.user_knn.aiolli_ferrari.AiolliSimilarity(data, maxk=40, shrink=100, similarity='cosine', normalize=True)[source]¶
Bases: object

class elliot.recommender.NN.user_knn.aiolli_ferrari.Compute_Similarity(dataMatrix, topK=100, shrink=0, normalize=True, asymmetric_alpha=0.5, tversky_alpha=1.0, tversky_beta=1.0, similarity='cosine', row_weights=None)[source]¶
Bases: object
applyAdjustedCosine()[source]¶ Removes from every data point the average of the corresponding row.
applyPearsonCorrelation()[source]¶ Removes from every data point the average of the corresponding column.
elliot.recommender.NN.user_knn.aiolli_ferrari.check_matrix(X, format='csc', dtype=<class 'numpy.float32'>)[source]¶
This function takes a matrix as input and transforms it into the specified format. The input matrix can be either sparse or an ndarray. If the input matrix already has the desired format, it is returned as-is; the dtype parameter is always applied, and the default is np.float32.
Module description:
class elliot.recommender.NN.user_knn.user_knn.UserKNN(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
GroupLens: An Open Architecture for Collaborative Filtering of Netnews
For further details, please refer to the paper
- Parameters
neighbors – Number of user neighbors
similarity – Similarity function
implementation – Implementation type ('aiolli', 'classical')
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  UserKNN:
    meta:
      save_recs: True
    neighbors: 40
    similarity: cosine
    implementation: aiolli
property name¶
elliot.recommender.adversarial package¶
Module description:
class elliot.recommender.adversarial.AMF.AMF.AMF(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Adversarial Matrix Factorization
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
l_w – Regularization coefficient
l_b – Regularization coefficient of bias
eps – Perturbation budget
l_adv – Adversarial regularization coefficient
adversarial_epochs – Adversarial epochs
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  AMF:
    meta:
      save_recs: True
    epochs: 10
    factors: 200
    lr: 0.001
    l_w: 0.1
    l_b: 0.001
    eps: 0.1
    l_adv: 0.001
    adversarial_epochs: 10
property name¶
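The eps and l_adv parameters implement APR-style adversarial regularization: a worst-case perturbation of the embeddings, bounded by the budget eps, is derived from the gradient of the base loss, and the perturbed loss is added with weight l_adv. The TensorFlow sketch below is a simplification under these assumptions, not the actual AMF_model code:

import tensorflow as tf

def apr_loss(bpr_loss_fn, user_e, pos_e, neg_e, eps=0.1, l_adv=0.001):
    # base pairwise loss and its gradient w.r.t. the embeddings
    with tf.GradientTape() as tape:
        tape.watch([user_e, pos_e, neg_e])
        base_loss = bpr_loss_fn(user_e, pos_e, neg_e)
    grads = tape.gradient(base_loss, [user_e, pos_e, neg_e])
    # fast-gradient perturbations, normalized and scaled by the budget eps
    deltas = [eps * tf.nn.l2_normalize(g) for g in grads]
    adv_loss = bpr_loss_fn(user_e + deltas[0], pos_e + deltas[1], neg_e + deltas[2])
    return base_loss + l_adv * adv_loss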
Module description:
class elliot.recommender.adversarial.AMF.AMF_model.AMF_model(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.training.Model
call(inputs, training=None)[source]¶ Calls the model on new inputs.
In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).
- Parameters
inputs – A tensor or list of tensors.
training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask – A mask or list of masks. A mask can be either a tensor or None (no mask).
- Returns
A tensor if there is a single output, or a list of tensors if there are more than one outputs.
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
predict(start, stop, **kwargs)[source]¶ Generates output predictions for the input samples.
Computation is done in batches. This method is designed for performance with large-scale inputs. For small amounts of input that fit in one batch, directly using __call__ is recommended for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. Note also that the test loss is not affected by regularization layers like noise and dropout.
- Parameters
x – Input samples. It could be:
- A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
- A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
- A tf.data dataset.
- A generator or keras.utils.Sequence instance.
A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.
batch_size – Integer or None. Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).
verbose – Verbosity mode, 0 or 1.
steps – Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict will run until the input dataset is exhausted.
callbacks – List of keras.callbacks.Callback instances. List of callbacks to apply during prediction. See [callbacks](/api_docs/python/tf/keras/callbacks).
max_queue_size – Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
workers – Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread.
use_multiprocessing – Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can’t be passed easily to children processes.
See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.
- Returns
Numpy array(s) of predictions.
- Raises
RuntimeError – If model.predict is wrapped in tf.function.
ValueError – In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
train_step(batch, user_adv_train=False)[source]¶ The logic for one training step.
This method can be overridden to support custom training logic. This method is called by Model.make_train_function.
This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.
Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.
- Parameters
data – A nested structure of `Tensor`s.
- Returns
A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
-
Module description:
Module description:
class elliot.recommender.adversarial.AMR.AMR.AMR(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Adversarial Multimedia Recommender
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
factors_d – Image-feature dimensionality
lr – Learning rate
l_w – Regularization coefficient
l_b – Regularization coefficient of bias
l_e – Regularization coefficient of image matrix embedding
eps – Perturbation budget
l_adv – Adversarial regularization coefficient
adversarial_epochs – Adversarial epochs
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  AMR:
    meta:
      save_recs: True
    epochs: 10
    factors: 200
    factors_d: 20
    lr: 0.001
    l_w: 0.1
    l_b: 0.001
    l_e: 0.1
    eps: 0.1
    l_adv: 0.001
    adversarial_epochs: 5
property name¶
Module description:
class elliot.recommender.adversarial.AMR.AMR_model.AMR_model(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.training.Model
call(inputs, training=None, mask=None)[source]¶ Calls the model on new inputs.
In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).
- Parameters
inputs – A tensor or list of tensors.
training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask – A mask or list of masks. A mask can be either a tensor or None (no mask).
- Returns
A tensor if there is a single output, or a list of tensors if there are more than one outputs.
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
predict(start, stop)[source]¶ Generates output predictions for the input samples.
Computation is done in batches. This method is designed for performance with large-scale inputs. For small amounts of input that fit in one batch, directly using __call__ is recommended for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. Note also that the test loss is not affected by regularization layers like noise and dropout.
- Parameters
x – Input samples. It could be:
- A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
- A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
- A tf.data dataset.
- A generator or keras.utils.Sequence instance.
A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.
batch_size – Integer or None. Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).
verbose – Verbosity mode, 0 or 1.
steps – Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict will run until the input dataset is exhausted.
callbacks – List of keras.callbacks.Callback instances. List of callbacks to apply during prediction. See [callbacks](/api_docs/python/tf/keras/callbacks).
max_queue_size – Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
workers – Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread.
use_multiprocessing – Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can’t be passed easily to children processes.
See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.
- Returns
Numpy array(s) of predictions.
- Raises
RuntimeError – If model.predict is wrapped in tf.function.
ValueError – In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
train_step(batch, user_adv_train=False)[source]¶ The logic for one training step.
This method can be overridden to support custom training logic. This method is called by Model.make_train_function.
This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.
Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.
- Parameters
data – A nested structure of `Tensor`s.
- Returns
A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
-
Module description:
elliot.recommender.algebric package¶
Module description: Lemire, Daniel, and Anna Maclachlan. “Slope one predictors for online rating-based collaborative filtering.” Proceedings of the 2005 SIAM International Conference on Data Mining. Society for Industrial and Applied Mathematics
class elliot.recommender.algebric.slope_one.slope_one.SlopeOne(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Slope One Predictors for Online Rating-Based Collaborative Filtering
For further details, please refer to the paper
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  SlopeOne:
    meta:
      save_recs: True
property name¶
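Slope One needs no hyperparameters because prediction reduces to average rating deviations between item pairs. A compact sketch of the core rule (independent of Elliot's implementation):

import numpy as np

R = np.array([[5, 3, 2],
              [3, 4, 0],
              [0, 2, 5]], dtype=float)  # toy ratings, 0 means unrated

def slope_one_predict(R, u, j):
    """Predict user u's rating for item j via average item-pair deviations."""
    preds = []
    for i in range(R.shape[1]):
        if i == j or R[u, i] == 0:
            continue
        both = (R[:, i] > 0) & (R[:, j] > 0)  # users who rated both i and j
        if both.any():
            dev = np.mean(R[both, j] - R[both, i])
            preds.append(R[u, i] + dev)
    return np.mean(preds) if preds else np.nan

print(slope_one_predict(R, u=2, j=0))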
Module description: Lemire, Daniel, and Anna Maclachlan. “Slope one predictors for online rating-based collaborative filtering.” Proceedings of the 2005 SIAM International Conference on Data Mining. Society for Industrial and Applied Mathematics
elliot.recommender.autoencoders package¶
Module description:
class elliot.recommender.autoencoders.dae.multi_dae.MultiDAE(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Variational Autoencoders for Collaborative Filtering
For further details, please refer to the paper
- Parameters
intermediate_dim – Number of intermediate dimensions
latent_dim – Number of latent factors
reg_lambda – Regularization coefficient
lr – Learning rate
dropout_pkeep – Dropout keep probability
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  MultiDAE:
    meta:
      save_recs: True
    epochs: 10
    intermediate_dim: 600
    latent_dim: 200
    reg_lambda: 0.01
    lr: 0.001
    dropout_pkeep: 1
property name¶
Module description:
class elliot.recommender.autoencoders.dae.multi_dae_model.Decoder(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.base_layer.Layer
Converts z, the encoded vector, back into a user interaction vector.
class elliot.recommender.autoencoders.dae.multi_dae_model.DenoisingAutoEncoder(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.training.Model
Combines the encoder and decoder into an end-to-end model for training.
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
-
Module description:
Module description:
class elliot.recommender.autoencoders.vae.multi_vae.MultiVAE(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Variational Autoencoders for Collaborative Filtering
For further details, please refer to the paper
- Parameters
intermediate_dim – Number of intermediate dimensions
latent_dim – Number of latent factors
reg_lambda – Regularization coefficient
lr – Learning rate
dropout_pkeep – Dropout keep probability
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  MultiVAE:
    meta:
      save_recs: True
    epochs: 10
    intermediate_dim: 600
    latent_dim: 200
    reg_lambda: 0.01
    lr: 0.001
    dropout_pkeep: 1
property name¶
Module description:
class elliot.recommender.autoencoders.vae.multi_vae_model.Decoder(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.base_layer.Layer
Converts z, the encoded vector, back into a user interaction vector.
class elliot.recommender.autoencoders.vae.multi_vae_model.Encoder(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.base_layer.Layer
Maps a user interaction vector to a triplet (z_mean, z_log_var, z).
class elliot.recommender.autoencoders.vae.multi_vae_model.Sampling(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.base_layer.Layer
Uses (z_mean, z_log_var) to sample z, the latent encoding vector.
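The Sampling layer is the standard reparameterization trick, which keeps the stochastic sampling step differentiable; a minimal sketch of such a layer:

import tensorflow as tf

class Sampling(tf.keras.layers.Layer):
    """Draws z from N(z_mean, exp(z_log_var)) via reparameterization."""
    def call(self, inputs):
        z_mean, z_log_var = inputs
        # noise is standard normal; gradients flow through mean and variance
        epsilon = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon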
class elliot.recommender.autoencoders.vae.multi_vae_model.VariationalAutoEncoder(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.training.Model
Combines the encoder and decoder into an end-to-end model for training.
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
-
Module description:
Module description:
elliot.recommender.content_based package¶
Module description:
class elliot.recommender.content_based.VSM.vector_space_model.VSM(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Vector Space Model
For further details, please refer to the papers
- Parameters
similarity – Similarity metric
user_profile –
item_profile –
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  VSM:
    meta:
      save_recs: True
    similarity: cosine
    user_profile: binary
    item_profile: binary
property name¶
elliot.recommender.gan package¶
Module description:
class elliot.recommender.gan.CFGAN.cfgan.CFGAN(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
CFGAN: A Generic Collaborative Filtering Framework based on Generative Adversarial Networks
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
l_w – Regularization coefficient
l_b – Regularization coefficient of bias
l_gan – Adversarial regularization coefficient
g_epochs – Number of epochs to train the generator for each CFGAN step
d_epochs – Number of epochs to train the discriminator for each CFGAN step
s_zr – Sampling parameter of zero-reconstruction
s_pm – Sampling parameter of partial-masking
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  CFGAN:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    l_w: 0.1
    l_b: 0.001
    l_gan: 0.001
    g_epochs: 5
    d_epochs: 1
    s_zr: 0.001
    s_pm: 0.001
property name¶
Module description:
class elliot.recommender.gan.CFGAN.cfgan_model.CFGAN_model(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.training.Model
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
predict(start, stop, **kwargs)[source]¶ Generates output predictions for the input samples.
Computation is done in batches. This method is designed for performance with large-scale inputs. For small amounts of input that fit in one batch, directly using __call__ is recommended for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. Note also that the test loss is not affected by regularization layers like noise and dropout.
- Parameters
x – Input samples. It could be:
- A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
- A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
- A tf.data dataset.
- A generator or keras.utils.Sequence instance.
A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.
batch_size – Integer or None. Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).
verbose – Verbosity mode, 0 or 1.
steps – Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict will run until the input dataset is exhausted.
callbacks – List of keras.callbacks.Callback instances. List of callbacks to apply during prediction. See [callbacks](/api_docs/python/tf/keras/callbacks).
max_queue_size – Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
workers – Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread.
use_multiprocessing – Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can’t be passed easily to children processes.
See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.
- Returns
Numpy array(s) of predictions.
- Raises
RuntimeError – If model.predict is wrapped in tf.function.
ValueError – In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
train_step(batch)[source]¶ The logic for one training step.
This method can be overridden to support custom training logic. This method is called by Model.make_train_function.
This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.
Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.
- Parameters
data – A nested structure of `Tensor`s.
- Returns
A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
-
class elliot.recommender.gan.CFGAN.cfgan_model.Discriminator(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.training.Model
train_step(batch)[source]¶ The logic for one training step.
This method can be overridden to support custom training logic. This method is called by Model.make_train_function.
This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.
Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.
- Parameters
data – A nested structure of `Tensor`s.
- Returns
A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
-
class elliot.recommender.gan.CFGAN.cfgan_model.Generator(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.training.Model
train_step(batch)[source]¶ The logic for one training step.
This method can be overridden to support custom training logic. This method is called by Model.make_train_function.
This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.
Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.
- Parameters
data – A nested structure of `Tensor`s.
- Returns
A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
-
Module description:
class elliot.recommender.gan.IRGAN.irgan.IRGAN(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
IRGAN: A Minimax Game for Unifying Generative and Discriminative Information Retrieval Models
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
l_w – Regularization coefficient
l_b – Regularization coefficient of bias
l_gan – Adversarial regularization coefficient
predict_model – Specification of the model used to generate the recommendations (generator / discriminator)
g_epochs – Number of epochs to train the generator for each IRGAN step
d_epochs – Number of epochs to train the discriminator for each IRGAN step
g_pretrain_epochs – Number of epochs to pre-train the generator
d_pretrain_epochs – Number of epochs to pre-train the discriminator
sample_lambda – Temperature parameter
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  IRGAN:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    l_w: 0.1
    l_b: 0.001
    l_gan: 0.001
    predict_model: generator
    g_epochs: 5
    d_epochs: 1
    g_pretrain_epochs: 10
    d_pretrain_epochs: 10
    sample_lambda: 0.2
property name¶
Module description:
class elliot.recommender.gan.IRGAN.irgan_model.Discriminator(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.training.Model
call(inputs, training=None, mask=None)[source]¶ Calls the model on new inputs.
In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).
- Parameters
inputs – A tensor or list of tensors.
training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask – A mask or list of masks. A mask can be either a tensor or None (no mask).
- Returns
A tensor if there is a single output, or a list of tensors if there are more than one outputs.
train_step(batch)[source]¶ The logic for one training step.
This method can be overridden to support custom training logic. This method is called by Model.make_train_function.
This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.
Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.
- Parameters
data – A nested structure of `Tensor`s.
- Returns
A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
-
class elliot.recommender.gan.IRGAN.irgan_model.Generator(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.training.Model
call(inputs, training=None, mask=None)[source]¶ Calls the model on new inputs.
In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).
- Parameters
inputs – A tensor or list of tensors.
training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask – A mask or list of masks. A mask can be either a tensor or None (no mask).
- Returns
A tensor if there is a single output, or a list of tensors if there are more than one outputs.
train_step(batch)[source]¶ The logic for one training step.
This method can be overridden to support custom training logic. This method is called by Model.make_train_function.
This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.
Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.
- Parameters
data – A nested structure of `Tensor`s.
- Returns
A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
-
class elliot.recommender.gan.IRGAN.irgan_model.IRGAN_model(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.training.Model
call(inputs, training=None)[source]¶ Calls the model on new inputs.
In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).
- Parameters
inputs – A tensor or list of tensors.
training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask – A mask or list of masks. A mask can be either a tensor or None (no mask).
- Returns
A tensor if there is a single output, or a list of tensors if there are more than one outputs.
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
predict(start, stop, **kwargs)[source]¶ Generates output predictions for the input samples.
Computation is done in batches. This method is designed for performance with large-scale inputs. For small amounts of input that fit in one batch, directly using __call__ is recommended for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. Note also that the test loss is not affected by regularization layers like noise and dropout.
- Parameters
x – Input samples. It could be:
- A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
- A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
- A tf.data dataset.
- A generator or keras.utils.Sequence instance.
A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.
batch_size – Integer or None. Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).
verbose – Verbosity mode, 0 or 1.
steps – Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict will run until the input dataset is exhausted.
callbacks – List of keras.callbacks.Callback instances. List of callbacks to apply during prediction. See [callbacks](/api_docs/python/tf/keras/callbacks).
max_queue_size – Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
workers – Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread.
use_multiprocessing – Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can’t be passed easily to children processes.
See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.
- Returns
Numpy array(s) of predictions.
- Raises
RuntimeError – If model.predict is wrapped in tf.function.
ValueError – In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
train_step()[source]¶ The logic for one training step.
This method can be overridden to support custom training logic. This method is called by Model.make_train_function.
This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.
Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.
- Parameters
data – A nested structure of `Tensor`s.
- Returns
A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
-
elliot.recommender.graph_based package¶
Module description:
class elliot.recommender.graph_based.lightgcn.LightGCN.LightGCN(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation
For further details, please refer to the paper
- Parameters
lr – Learning rate
epochs – Number of epochs
factors – Number of latent factors
batch_size – Batch size
l_w – Regularization coefficient
n_layers – Number of embedding propagation layers
n_fold – Number of folds to split the adjacency matrix into sub-matrices and ease the computation
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  LightGCN:
    meta:
      save_recs: True
    lr: 0.0005
    epochs: 50
    factors: 64
    batch_size: 256
    l_w: 0.1
    n_layers: 1
    n_fold: 5
property name¶
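The n_layers parameter controls how many times the embeddings are propagated over the normalized user-item adjacency matrix: LightGCN drops feature transformations and nonlinearities, so each layer is a single multiplication and the final embedding is the average over layers. A simplified dense-NumPy sketch:

import numpy as np

def lightgcn_propagate(A_hat, E0, n_layers):
    """A_hat: normalized (users+items) adjacency; E0: initial embeddings."""
    layers = [E0]
    for _ in range(n_layers):
        layers.append(A_hat @ layers[-1])  # E^(k+1) = A_hat @ E^(k)
    return np.mean(layers, axis=0)         # average over all propagation depths

A_hat = np.eye(4)                          # placeholder adjacency matrix
E = lightgcn_propagate(A_hat, np.random.randn(4, 8), n_layers=2)
print(E.shape)  # (4, 8)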
Module description:
class elliot.recommender.graph_based.lightgcn.LightGCN_model.LightGCNModel(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.training.Model
call(inputs, **kwargs)[source]¶ Generates predictions for the passed user and item indices.
- Parameters
inputs – user, item (batch)
training – whether to run the network in training mode or inference mode
- Returns
prediction and extracted model parameters
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
-
Module description:
class elliot.recommender.graph_based.ngcf.NGCF.NGCF(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Neural Graph Collaborative Filtering
For further details, please refer to the paper
- Parameters
lr – Learning rate
epochs – Number of epochs
factors – Number of latent factors
batch_size – Batch size
l_w – Regularization coefficient
weight_size – Tuple with number of units for each embedding propagation layer
node_dropout – Tuple with dropout rate for each node
message_dropout – Tuple with dropout rate for each embedding propagation layer
n_fold – Number of folds to split the adjacency matrix into sub-matrices and ease the computation
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  NGCF:
    meta:
      save_recs: True
    lr: 0.0005
    epochs: 50
    factors: 64
    batch_size: 256
    l_w: 0.1
    weight_size: (64,)
    node_dropout: ()
    message_dropout: (0.1,)
    n_fold: 5
property name¶
Module description:
class elliot.recommender.graph_based.ngcf.NGCF_model.NGCFModel(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.training.Model
call(inputs, **kwargs)[source]¶ Generates predictions for the passed user and item indices.
- Parameters
inputs – user, item (batch)
training – whether to run the network in training mode or inference mode
- Returns
prediction and extracted model parameters
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
-
elliot.recommender.knowledge_aware package¶
class elliot.recommender.knowledge_aware.kaHFM.ka_hfm.KaHFM(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Knowledge-aware Hybrid Factorization Machines
Vito Walter Anelli and Tommaso Di Noia and Eugenio Di Sciascio and Azzurra Ragone and Joseph Trotta. “How to Make Latent Factors Interpretable by Feeding Factorization Machines with Knowledge Graphs”, ISWC 2019 Best Student Research Paper. For further details, please refer to the paper
Vito Walter Anelli and Tommaso Di Noia and Eugenio Di Sciascio and Azzurra Ragone and Joseph Trotta. “Semantic Interpretation of Top-N Recommendations”, IEEE TKDE 2020. For further details, please refer to the paper
- Parameters
lr – learning rate (default: 0.05)
bias_regularization – Bias regularization (default: 0)
user_regularization – User regularization (default: 0.0025)
positive_item_regularization – regularization for positive (experienced) items (default: 0.0025)
negative_item_regularization – regularization for unknown items (default: 0.00025)
update_negative_item_factors – Boolean to update negative item factors (default: True)
update_users – Boolean to update user factors (default: True)
update_items – Boolean to update item factors (default: True)
update_bias – Boolean to update bias value (default: True)
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  KaHFM:
    meta:
      hyper_max_evals: 20
      hyper_opt_alg: tpe
      validation_rate: 1
      verbose: True
      save_weights: True
      save_recs: True
      validation_metric: nDCG@10
    epochs: 100
    batch_size: -1
    lr: 0.05
    bias_regularization: 0
    user_regularization: 0.0025
    positive_item_regularization: 0.0025
    negative_item_regularization: 0.00025
    update_negative_item_factors: True
    update_users: True
    update_items: True
    update_bias: True
property name¶
class elliot.recommender.knowledge_aware.kaHFM.ka_hfm.MF(ratings: Dict, map: Dict, tfidf: Dict, user_profiles: Dict, random: Any, *args)[source]¶
Bases: object
Simple Matrix Factorization class
initialize(loc: float = 0, scale: float = 0.1)[source]¶ This function initializes the data model.
property name¶
Module description:
class elliot.recommender.knowledge_aware.kaHFM_batch.kahfm_batch.KaHFMBatch(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Knowledge-aware Hybrid Factorization Machines (Tensorflow Batch Variant)
Vito Walter Anelli and Tommaso Di Noia and Eugenio Di Sciascio and Azzurra Ragone and Joseph Trotta. “How to Make Latent Factors Interpretable by Feeding Factorization Machines with Knowledge Graphs”, ISWC 2019 Best Student Research Paper. For further details, please refer to the paper
Vito Walter Anelli and Tommaso Di Noia and Eugenio Di Sciascio and Azzurra Ragone and Joseph Trotta. “Semantic Interpretation of Top-N Recommendations”, IEEE TKDE 2020. For further details, please refer to the paper
- Parameters
lr – learning rate (default: 0.0001)
l_w – Weight regularization (default: 0.005)
l_b – Bias regularization (default: 0)
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  KaHFMBatch:
    meta:
      hyper_max_evals: 20
      hyper_opt_alg: tpe
      validation_rate: 1
      verbose: True
      save_weights: True
      save_recs: True
      validation_metric: nDCG@10
    epochs: 100
    batch_size: -1
    lr: 0.0001
    l_w: 0.005
    l_b: 0
property name¶
Module description:
class elliot.recommender.knowledge_aware.kaHFM_batch.kahfm_batch_model.KaHFM_model(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.training.Model
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
-
Module description:
class elliot.recommender.knowledge_aware.kahfm_embeddings.kahfm_embeddings.KaHFMEmbeddings(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Knowledge-aware Hybrid Factorization Machines (Tensorflow Embedding-based Variant)
Vito Walter Anelli and Tommaso Di Noia and Eugenio Di Sciascio and Azzurra Ragone and Joseph Trotta. “How to Make Latent Factors Interpretable by Feeding Factorization Machines with Knowledge Graphs”, ISWC 2019 Best Student Research Paper. For further details, please refer to the paper
Vito Walter Anelli and Tommaso Di Noia and Eugenio Di Sciascio and Azzurra Ragone and Joseph Trotta. “Semantic Interpretation of Top-N Recommendations”, IEEE TKDE 2020. For further details, please refer to the paper
- Parameters
lr – learning rate (default: 0.0001)
l_w – Weight regularization (default: 0.005)
l_b – Bias regularization (default: 0)
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  KaHFMEmbeddings:
    meta:
      hyper_max_evals: 20
      hyper_opt_alg: tpe
      validation_rate: 1
      verbose: True
      save_weights: True
      save_recs: True
      validation_metric: nDCG@10
    epochs: 100
    batch_size: -1
    lr: 0.0001
    l_w: 0.005
    l_b: 0
property name¶
Module description:
class elliot.recommender.knowledge_aware.kahfm_embeddings.kahfm_embeddings_model.KaHFMEmbeddingsModel(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.training.Model
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
-
elliot.recommender.latent_factor_models package¶
Module description:
class elliot.recommender.latent_factor_models.BPRMF.BPRMF.BPRMF(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Bayesian Personalized Ranking with Matrix Factorization
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
bias_regularization – Regularization coefficient for the bias
user_regularization – Regularization coefficient for user latent factors
positive_item_regularization – Regularization coefficient for positive item latent factors
negative_item_regularization – Regularization coefficient for negative item latent factors
update_negative_item_factors –
update_users –
update_items –
update_bias –
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  BPRMF:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    bias_regularization: 0
    user_regularization: 0.0025
    positive_item_regularization: 0.0025
    negative_item_regularization: 0.0025
    update_negative_item_factors: True
    update_users: True
    update_items: True
    update_bias: True
property name¶
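BPR-MF optimizes a pairwise criterion: for each triple (u, i, j) with item i observed and item j unobserved, it pushes the score difference x_ui - x_uj upward through the sigmoid. A minimal NumPy sketch of one SGD step (the per-parameter regularization coefficients listed above are collapsed into a single reg for brevity):

import numpy as np

def bpr_update(P, Q, u, i, j, lr=0.001, reg=0.0025):
    """One SGD step on the BPR criterion for the triple (u, i, j)."""
    p_u, q_i, q_j = P[u].copy(), Q[i].copy(), Q[j].copy()
    x_uij = p_u @ (q_i - q_j)            # score difference x_ui - x_uj
    sig = 1.0 / (1.0 + np.exp(x_uij))    # gradient of -ln(sigmoid(x_uij))
    P[u] += lr * (sig * (q_i - q_j) - reg * p_u)
    Q[i] += lr * (sig * p_u - reg * q_i)
    Q[j] += lr * (-sig * p_u - reg * q_j)

rng = np.random.default_rng(42)
P, Q = rng.normal(size=(5, 10)), rng.normal(size=(8, 10))  # user/item factors
bpr_update(P, Q, u=0, i=2, j=7)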
Module description:
Module description:
class elliot.recommender.latent_factor_models.BPRMF_batch.BPRMF_batch.BPRMF_batch(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Batch Bayesian Personalized Ranking with Matrix Factorization
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
l_w – Regularization coefficient for latent factors
l_b – Regularization coefficient for bias
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  BPRMF_batch:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    l_w: 0.1
    l_b: 0.001
property name¶
Module description:
class elliot.recommender.latent_factor_models.BPRMF_batch.BPRMF_batch_model.BPRMF_batch_model(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.training.Model
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
-
Module description:
Module description:
class elliot.recommender.latent_factor_models.BPRSlim.bprslim.BPRSlim(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
BPR Sparse Linear Methods
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
lj_reg – Regularization coefficient for positive items
li_reg – Regularization coefficient for negative items
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  BPRSlim:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    lj_reg: 0.001
    li_reg: 0.1
property name¶
Module description:
Module description:
Module description:
class elliot.recommender.latent_factor_models.CML.CML.CML(data, config, params, *args, **kwargs)[source]¶
Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Collaborative Metric Learning
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
l_w – Regularization coefficient for latent factors
l_b – Regularization coefficient for bias
margin – Safety margin size
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  CML:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    l_w: 0.001
    l_b: 0.001
    margin: 0.5
property name¶
Module description:
class elliot.recommender.latent_factor_models.CML.CML_model.CML_model(*args, **kwargs)[source]¶
Bases: tensorflow.python.keras.engine.training.Model
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
-
Module description:
Module description:
-
class
elliot.recommender.latent_factor_models.FFM.field_aware_factorization_machine.
FFM
(data, config, params, *args, **kwargs)[source]¶ Bases:
elliot.recommender.recommender_utils_mixin.RecMixin
,elliot.recommender.base_recommender_model.BaseRecommenderModel
Field-aware Factorization Machines
For further details, please refer to the paper
- Parameters
factors – Number of factors of feature embeddings
lr – Learning rate
reg – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models: FFM: meta: save_recs: True epochs: 10 factors: 10 lr: 0.001 reg: 0.1
property name¶
Module description:
class elliot.recommender.latent_factor_models.FFM.field_aware_factorization_machine_model.FieldAwareFactorizationMachineModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_recs(inputs, training=False, **kwargs)[source]¶ Get full predictions on the whole users/items matrix.
- Returns
The matrix of predicted values.
Module description:
class elliot.recommender.latent_factor_models.FISM.FISM.FISM(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
FISM: Factored Item Similarity Models
For further details, please refer to the paper
- Parameters
factors – Number of factors of feature embeddings
lr – Learning rate
l_w – Regularization coefficient for latent factors
l_b – Regularization coefficient for bias
alpha – Alpha parameter (a value between 0 and 1)
neg_ratio – Ratio of negative sampled items
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  FISM:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    l_w: 0.001
    l_b: 0.001
    alpha: 0.5
    neg_ratio: 0.5
property name¶
Module description:
class elliot.recommender.latent_factor_models.FISM.FISM_model.FISM_model(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
Module description:
class elliot.recommender.latent_factor_models.FM.factorization_machine.FM(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Factorization Machines
For further details, please refer to the paper
- Parameters
factors – Number of factors of feature embeddings
lr – Learning rate
reg – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  FM:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    reg: 0.1
property name¶
Module description:
class elliot.recommender.latent_factor_models.FM.factorization_machine_model.FactorizationMachineModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_recs(inputs, training=False, **kwargs)[source]¶ Get full predictions on the whole users/items matrix.
- Returns
The matrix of predicted values.
Module description:
class elliot.recommender.latent_factor_models.FunkSVD.funk_svd.FunkSVD(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
For further details, please refer to the paper
- Parameters
factors – Number of factors of feature embeddings
lr – Learning rate
reg_w – Regularization coefficient for latent factors
reg_b – Regularization coefficient for bias
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  FunkSVD:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    reg_w: 0.1
    reg_b: 0.001
property name¶
Module description:
class elliot.recommender.latent_factor_models.FunkSVD.funk_svd_model.FunkSVDModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_recs(inputs, training=False, **kwargs)[source]¶ Get full predictions on the whole users/items matrix.
- Returns
The matrix of predicted values.
Module description:
class elliot.recommender.latent_factor_models.LogisticMF.logistic_matrix_factorization.LogisticMatrixFactorization(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Logistic Matrix Factorization
For further details, please refer to the paper
- Parameters
factors – Number of factors of feature embeddings
lr – Learning rate
reg – Regularization coefficient
alpha – Parameter for confidence estimation
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  LogisticMatrixFactorization:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    reg: 0.1
    alpha: 0.5
property name¶
Module description:
class elliot.recommender.latent_factor_models.MF.matrix_factorization.MF(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Matrix Factorization
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
reg – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  MF:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    reg: 0.1
property name¶
Module description:
class elliot.recommender.latent_factor_models.MF.matrix_factorization_model.MatrixFactorizationModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_recs(inputs, training=False, **kwargs)[source]¶ Get full predictions on the whole users/items matrix.
- Returns
The matrix of predicted values.
Module description:
class elliot.recommender.latent_factor_models.NonNegMF.non_negative_matrix_factorization.NonNegMF(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Non-Negative Matrix Factorization
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
reg – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  NonNegMF:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    lr: 0.001
    reg: 0.1
property name¶
Module description:
Mnih, Andriy, and Russ R. Salakhutdinov. “Probabilistic matrix factorization.” Advances in neural information processing systems 20 (2007)
class elliot.recommender.latent_factor_models.PMF.probabilistic_matrix_factorization.PMF(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Probabilistic Matrix Factorization
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
reg – Regularization coefficient
gaussian_variance – Variance of the Gaussian distribution
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  PMF:
    meta:
      save_recs: True
    epochs: 10
    factors: 50
    lr: 0.001
    reg: 0.0025
    gaussian_variance: 0.1
property name¶
Module description:
Mnih, Andriy, and Russ R. Salakhutdinov. “Probabilistic matrix factorization.” Advances in neural information processing systems 20 (2007)
class elliot.recommender.latent_factor_models.PMF.probabilistic_matrix_factorization_model.ProbabilisticMatrixFactorizationModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_recs(inputs, training=False, **kwargs)[source]¶ Get full predictions on the whole users/items matrix.
- Returns
The matrix of predicted values.
Module description:
class elliot.recommender.latent_factor_models.PureSVD.pure_svd.PureSVD(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
seed – Random seed
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  PureSVD:
    meta:
      save_recs: True
    epochs: 10
    factors: 10
    seed: 42
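PureSVD scores come from a truncated SVD of the user-item matrix. A minimal sketch of the underlying idea with SciPy (an illustration of the technique, not Elliot's code):

import numpy as np
from scipy.sparse.linalg import svds

R = np.random.rand(100, 50)   # toy user-item interaction matrix
U, S, Vt = svds(R, k=10)      # k = number of latent factors
scores = (U * S) @ Vt         # dense matrix of predicted scores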
property name¶
Module description:
class elliot.recommender.latent_factor_models.SVDpp.svdpp.SVDpp(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
SVD++
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
lr – Learning rate
reg_w – Regularization coefficient for latent factors
reg_b – Regularization coefficient for bias
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  SVDpp:
    meta:
      save_recs: True
    epochs: 10
    factors: 50
    lr: 0.001
    reg_w: 0.1
    reg_b: 0.001
property name¶
Module description:
class elliot.recommender.latent_factor_models.SVDpp.svdpp_model.SVDppModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_recs(inputs, training=False, **kwargs)[source]¶ Get full predictions on the whole users/items matrix.
- Returns
The matrix of predicted values.
Module description:
class elliot.recommender.latent_factor_models.Slim.slim.Slim(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Sparse Linear Methods
For further details, please refer to the paper
- Parameters
l1_ratio – Ratio between the L1 and L2 terms of the ElasticNet regularization
alpha – Constant multiplying the ElasticNet regularization terms
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  Slim:
    meta:
      save_recs: True
    epochs: 10
    l1_ratio: 0.001
    alpha: 0.001
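SLIM learns a sparse item-item weight matrix by fitting one ElasticNet regression per item column, which is where l1_ratio and alpha come in. A minimal sketch of that core step (an illustration of the technique, not Elliot's code):

import numpy as np
from sklearn.linear_model import ElasticNet

X = np.random.binomial(1, 0.1, size=(100, 20)).astype(float)  # toy user-item matrix
model = ElasticNet(alpha=0.001, l1_ratio=0.001, positive=True, fit_intercept=False)

j = 0              # target item column
y = X[:, j].copy()
X[:, j] = 0.0      # mask the target item to avoid the trivial self-similarity solution
model.fit(X, y)    # model.coef_ holds sparse weights relating item j to the other items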
property name¶
Module description:
class elliot.recommender.latent_factor_models.WRMF.wrmf.WRMF(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Weighted Regularized Matrix Factorization
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
alpha – Confidence weight for observed interactions
reg – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  WRMF:
    meta:
      save_recs: True
    epochs: 10
    factors: 50
    alpha: 1
    reg: 0.1
property name¶
Module description:
elliot.recommender.neural package¶
Module description:
class elliot.recommender.neural.ConvMF.convolutional_matrix_factorization.ConvMF(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Convolutional Matrix Factorization for Document Context-Aware Recommendation
For further details, please refer to the paper
- Parameters
embedding_size – Embedding dimension
lr – Learning rate
l_w – Regularization coefficient
l_b – Regularization coefficient of bias
cnn_channels – List of channels
cnn_kernels – List of kernels
cnn_strides – List of strides
dropout_prob – Dropout probability applied on the convolutional layers
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  ConvMF:
    meta:
      save_recs: True
    epochs: 10
    embedding_size: 100
    lr: 0.001
    l_w: 0.005
    l_b: 0.0005
    cnn_channels: (1, 32, 32)
    cnn_kernels: (2,2)
    cnn_strides: (2,2)
    dropout_prob: 0
property name¶
Module description:
class elliot.recommender.neural.ConvMF.convolutional_matrix_factorization_model.ConvMatrixFactorizationModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
call(inputs, training=False, **kwargs)[source]¶ Calls the model on new inputs.
In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).
- Parameters
inputs – A tensor or list of tensors.
training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask – A mask or list of masks. A mask can be either a tensor or None (no mask).
- Returns
A tensor if there is a single output, or a list of tensors if there are more than one outputs.
get_recs(inputs, training=False, **kwargs)[source]¶ Get full predictions on the whole users/items matrix.
- Returns
The matrix of predicted values.
predict(inputs, training=False, **kwargs)[source]¶ Get full predictions on the whole users/items matrix.
- Returns
The matrix of predicted values.
train_step(batch)[source]¶ The logic for one training step.
This method can be overridden to support custom training logic. This method is called by Model.make_train_function.
This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.
Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.
- Parameters
data – A nested structure of `Tensor`s.
- Returns
A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
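For reference, a custom train_step in TensorFlow 2.x typically has the following shape; the toy loss below is illustrative, not the loss used by this model:

import tensorflow as tf

class ToyModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data                                   # unpack one batch
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)           # forward pass
            loss = self.compiled_loss(y, y_pred)      # loss configured in compile()
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}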
class elliot.recommender.neural.ConvMF.convolutional_matrix_factorization_model.ConvolutionalComponent(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
call(inputs, **kwargs)[source]¶ Calls the model on new inputs.
In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).
- Parameters
inputs – A tensor or list of tensors.
training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask – A mask or list of masks. A mask can be either a tensor or None (no mask).
- Returns
A tensor if there is a single output, or a list of tensors if there are more than one outputs.
class elliot.recommender.neural.ConvMF.convolutional_matrix_factorization_model.MLPComponent(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
call(inputs, training=False, **kwargs)[source]¶ Calls the model on new inputs.
In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).
- Parameters
inputs – A tensor or list of tensors.
training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask – A mask or list of masks. A mask can be either a tensor or None (no mask).
- Returns
A tensor if there is a single output, or a list of tensors if there are more than one outputs.
Module description:
class elliot.recommender.neural.ConvNeuMF.convolutional_neural_matrix_factorization.ConvNeuMF(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Outer Product-based Neural Collaborative Filtering
For further details, please refer to the paper
- Parameters
embedding_size – Embedding dimension
lr – Learning rate
l_w – Regularization coefficient
l_b – Regularization coefficient of bias
cnn_channels – List of channels
cnn_kernels – List of kernels
cnn_strides – List of strides
dropout_prob – Dropout probability applied on the convolutional layers
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  ConvNeuMF:
    meta:
      save_recs: True
    epochs: 10
    embedding_size: 100
    lr: 0.001
    l_w: 0.005
    l_b: 0.0005
    cnn_channels: (1, 32, 32)
    cnn_kernels: (2,2)
    cnn_strides: (2,2)
    dropout_prob: 0
property name¶
Module description:
class elliot.recommender.neural.ConvNeuMF.convolutional_neural_matrix_factorization_model.ConvNeuralMatrixFactorizationModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_recs(inputs, training=False, **kwargs)[source]¶ Get full predictions on the whole users/items matrix.
- Returns
The matrix of predicted values.
Module description:
class elliot.recommender.neural.DMF.deep_matrix_factorization.DMF(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Deep Matrix Factorization Models for Recommender Systems.
For further details, please refer to the paper
- Parameters
lr – Learning rate
reg – Regularization coefficient
user_mlp – List of units for each layer of the user MLP
item_mlp – List of units for each layer of the item MLP
similarity – Similarity function between user and item representations (e.g., cosine)
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  DMF:
    meta:
      save_recs: True
    epochs: 10
    lr: 0.0001
    reg: 0.001
    user_mlp: (64,32)
    item_mlp: (64,32)
    similarity: cosine
property name¶
Module description:
class elliot.recommender.neural.DMF.deep_matrix_factorization_model.DeepMatrixFactorizationModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_recs(inputs, training=False, **kwargs)[source]¶ Get full predictions on the whole users/items matrix.
- Returns
The matrix of predicted values.
Module description:
class elliot.recommender.neural.DeepFM.deep_fm.DeepFM(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
For further details, please refer to the paper
- Parameters
factors – Number of factors dimension
lr – Learning rate
l_w – Regularization coefficient
hidden_neurons – List of units for each layer
hidden_activations – List of activation functions
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  DeepFM:
    meta:
      save_recs: True
    epochs: 10
    factors: 100
    lr: 0.001
    l_w: 0.0001
    hidden_neurons: (64,32)
    hidden_activations: ('relu','relu')
property name¶
Module description:
class elliot.recommender.neural.DeepFM.deep_fm_model.DeepFMModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_recs(inputs, training=False, **kwargs)[source]¶ Get full predictions on the whole users/items matrix.
- Returns
The matrix of predicted values.
Module description:
class elliot.recommender.neural.GeneralizedMF.generalized_matrix_factorization.GMF(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Neural Collaborative Filtering
For further details, please refer to the paper
- Parameters
mf_factors – Number of latent factors
lr – Learning rate
is_edge_weight_train – Whether the training uses edge weighting
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  GMF:
    meta:
      save_recs: True
    epochs: 10
    mf_factors: 10
    lr: 0.001
    is_edge_weight_train: True
property name¶
Module description:
class elliot.recommender.neural.GeneralizedMF.generalized_matrix_factorization_model.GeneralizedMatrixFactorizationModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
Module description:
class elliot.recommender.neural.ItemAutoRec.itemautorec.ItemAutoRec(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
AutoRec: Autoencoders Meet Collaborative Filtering (Item-based)
For further details, please refer to the paper
- Parameters
hidden_neuron – Number of units in the hidden layer
lr – Learning rate
l_w – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  ItemAutoRec:
    meta:
      save_recs: True
    epochs: 10
    hidden_neuron: 500
    lr: 0.0001
    l_w: 0.001
property name¶
Module description:
class elliot.recommender.neural.ItemAutoRec.itemautorec_model.Decoder(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.base_layer.Layer
class elliot.recommender.neural.ItemAutoRec.itemautorec_model.Encoder(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.base_layer.Layer
class elliot.recommender.neural.ItemAutoRec.itemautorec_model.ItemAutoRecModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
get_recs(inputs, training=False, **kwargs)[source]¶ Get full predictions on the whole users/items matrix. [Is Inverted]
- Returns
The matrix of predicted values.
Module description:
class elliot.recommender.neural.NAIS.nais.NAIS(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
NAIS: Neural Attentive Item Similarity Model for Recommendation
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
algorithm – Type of user-item factor operation (‘product’, ‘concat’)
weight_size – List of units for each layer
lr – Learning rate
l_w – Regularization coefficient
l_b – Bias regularization coefficient
alpha – Attention factor
beta – Smoothing exponent
neg_ratio – Ratio of negative sampled items, e.g., 0 = no items, 1 = all un-rated items
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  NAIS:
    meta:
      save_recs: True
    factors: 100
    algorithm: concat
    weight_size: 32
    lr: 0.001
    l_w: 0.001
    l_b: 0.001
    alpha: 0.5
    beta: 0.5
    neg_ratio: 0.5
property name¶
Module description:
class elliot.recommender.neural.NAIS.nais_model.LatentFactor(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.layers.embeddings.Embedding
class elliot.recommender.neural.NAIS.nais_model.NAIS_model(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
Module description:
class elliot.recommender.neural.NFM.neural_fm.NFM(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Neural Factorization Machines for Sparse Predictive Analytics
For further details, please refer to the paper
- Parameters
factors – Number of factors dimension
lr – Learning rate
l_w – Regularization coefficient
hidden_neurons – List of units for each layer
hidden_activations – List of activation functions
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  NFM:
    meta:
      save_recs: True
    epochs: 10
    factors: 100
    lr: 0.001
    l_w: 0.0001
    hidden_neurons: (64,32)
    hidden_activations: ('relu','relu')
property name¶
Module description:
class elliot.recommender.neural.NFM.neural_fm_model.NeuralFactorizationMachineModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_recs(inputs, training=False, **kwargs)[source]¶ Get full predictions on the whole users/items matrix.
- Returns
The matrix of predicted values.
Module description:
class elliot.recommender.neural.NPR.neural_personalized_ranking.NPR(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Neural Personalized Ranking for Image Recommendation (Model without visual features)
For further details, please refer to the paper
- Parameters
mf_factors – Number of MF latent factors
mlp_hidden_size – List of units for each layer
lr – Learning rate
l_w – Regularization coefficient
dropout – Dropout rate
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  NPR:
    meta:
      save_recs: True
    epochs: 10
    mf_factors: 100
    mlp_hidden_size: (64,32)
    lr: 0.001
    l_w: 0.001
    dropout: 0.45
property name¶
Module description:
class elliot.recommender.neural.NPR.neural_personalized_ranking_model.NPRModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_recs(inputs, training=False, **kwargs)[source]¶ Get full predictions on the whole users/items matrix.
- Returns
The matrix of predicted values.
predict(inputs, training=False, **kwargs)[source]¶ Get full predictions on the whole users/items matrix.
- Returns
The matrix of predicted values.
train_step(batch)[source]¶ The logic for one training step.
This method can be overridden to support custom training logic. This method is called by Model.make_train_function.
This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.
Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.
- Parameters
data – A nested structure of `Tensor`s.
- Returns
A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
Module description:
class elliot.recommender.neural.NeuMF.neural_matrix_factorization.NeuMF(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Neural Collaborative Filtering
For further details, please refer to the paper
- Parameters
mf_factors – Number of MF latent factors
mlp_factors – Number of MLP latent factors
mlp_hidden_size – List of units for each layer
lr – Learning rate
dropout – Dropout rate
is_mf_train – Whether to train the MF embeddings
is_mlp_train – Whether to train the MLP layers
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  NeuMF:
    meta:
      save_recs: True
    epochs: 10
    mf_factors: 10
    mlp_factors: 10
    mlp_hidden_size: (64,32)
    lr: 0.001
    dropout: 0.0
    is_mf_train: True
    is_mlp_train: True
property name¶
Module description:
class elliot.recommender.neural.NeuMF.neural_matrix_factorization_model.NeuralMatrixFactorizationModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_recs(inputs, training=False, **kwargs)[source]¶ Get full predictions on the whole users/items matrix.
- Returns
The matrix of predicted values.
Module description:
class elliot.recommender.neural.UserAutoRec.userautorec.UserAutoRec(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
AutoRec: Autoencoders Meet Collaborative Filtering (User-based)
For further details, please refer to the paper
- Parameters
hidden_neuron – Number of units in the hidden layer
lr – Learning rate
l_w – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  UserAutoRec:
    meta:
      save_recs: True
    epochs: 10
    hidden_neuron: 500
    lr: 0.0001
    l_w: 0.001
property name¶
Module description:
class elliot.recommender.neural.UserAutoRec.userautorec_model.Decoder(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.base_layer.Layer
class elliot.recommender.neural.UserAutoRec.userautorec_model.Encoder(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.base_layer.Layer
class elliot.recommender.neural.UserAutoRec.userautorec_model.UserAutoRecModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
Module description:
class elliot.recommender.neural.WideAndDeep.wide_and_deep.WideAndDeep(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Wide & Deep Learning for Recommender Systems
(For now, available with knowledge-aware features)
For further details, please refer to the paper
- Parameters
factors – Number of latent factors
mlp_hidden_size – List of units for each layer
lr – Learning rate
l_w – Regularization coefficient
l_b – Bias Regularization Coefficient
dropout_prob – Dropout rate
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  WideAndDeep:
    meta:
      save_recs: True
    epochs: 10
    factors: 50
    mlp_hidden_size: (32, 32, 1)
    lr: 0.001
    l_w: 0.005
    l_b: 0.0005
    dropout_prob: 0.0
property name¶
Module description:
class elliot.recommender.neural.WideAndDeep.wide_and_deep_model.WideAndDeepModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
call(inputs, training=False, **kwargs)[source]¶ Calls the model on new inputs.
In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).
- Parameters
inputs – A tensor or list of tensors.
training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask – A mask or list of masks. A mask can be either a tensor or None (no mask).
- Returns
A tensor if there is a single output, or a list of tensors if there are more than one outputs.
predict(user, **kwargs)[source]¶ Generates output predictions for the input samples.
Computation is done in batches. This method is designed for performance in large scale inputs. For small amount of inputs that fit in one batch, directly using __call__ is recommended for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behaves differently during inference. Also, note the fact that test loss is not affected by regularization layers like noise and dropout.
- Parameters
x – Input samples. It could be:
- A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
- A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
- A tf.data dataset.
- A generator or keras.utils.Sequence instance.
A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.
batch_size – Integer or None. Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).
verbose – Verbosity mode, 0 or 1.
steps – Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict will run until the input dataset is exhausted.
callbacks – List of keras.callbacks.Callback instances. List of callbacks to apply during prediction. See [callbacks](/api_docs/python/tf/keras/callbacks).
max_queue_size – Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
workers – Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. If 0, will execute the generator on the main thread.
use_multiprocessing – Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can’t be passed easily to children processes.
See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.
- Returns
Numpy array(s) of predictions.
- Raises
RuntimeError – If model.predict is wrapped in tf.function.
ValueError – In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
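The docstring above is inherited from Keras Model.predict; a generic usage sketch of that contract, with a toy model and toy inputs:

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(16,))])
x = np.random.rand(4, 16).astype("float32")
preds = model.predict(x, batch_size=2)   # Numpy array of shape (4, 1)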
train_step(batch)[source]¶ The logic for one training step.
This method can be overridden to support custom training logic. This method is called by Model.make_train_function.
This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.
Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.
- Parameters
data – A nested structure of `Tensor`s.
- Returns
A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
elliot.recommender.unpersonalized package¶
Module description:
class elliot.recommender.unpersonalized.most_popular.most_popular.MostPop(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
property name¶
Module description:
class elliot.recommender.unpersonalized.random_recommender.Random.Random(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
property name¶
elliot.recommender.visual_recommenders package¶
Module description:
class elliot.recommender.visual_recommenders.ACF.ACF.ACF(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Attentive Collaborative Filtering: Multimedia Recommendation with Item- and Component-Level Attention
For further details, please refer to the paper
- Parameters
lr – Learning rate
epochs – Number of epochs
factors – Number of latent factors
batch_size – Batch size
l_w – Regularization coefficient
layers_component – Tuple with number of units for each attentive layer (component-level)
layers_item – Tuple with number of units for each attentive layer (item-level)
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  ACF:
    meta:
      save_recs: True
    lr: 0.0005
    epochs: 50
    factors: 100
    batch_size: 128
    l_w: 0.000025
    layers_component: (64, 1)
    layers_item: (64, 1)
property name¶
Module description:
class elliot.recommender.visual_recommenders.ACF.ACF_model.ACF_model(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
Module description:
class elliot.recommender.visual_recommenders.DVBPR.DVBPR.DVBPR(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Visually-Aware Fashion Recommendation and Design with Generative Image Models
For further details, please refer to the paper
- Parameters
lr – Learning rate
epochs – Number of epochs
factors – Number of latent factors
batch_size – Batch size
lambda_1 – Regularization coefficient
lambda_2 – CNN regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  DVBPR:
    meta:
      save_recs: True
    lr: 0.0001
    epochs: 50
    factors: 100
    batch_size: 128
    lambda_1: 0.0001
    lambda_2: 1.0
property name¶
Module description:
class elliot.recommender.visual_recommenders.DVBPR.FeatureExtractor.FeatureExtractor(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model, abc.ABC
call(inputs, training=None, mask=None)[source]¶ Calls the model on new inputs.
In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).
- Parameters
inputs – A tensor or list of tensors.
training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask – A mask or list of masks. A mask can be either a tensor or None (no mask).
- Returns
A tensor if there is a single output, or a list of tensors if there are more than one outputs.
Module description:
class elliot.recommender.visual_recommenders.DeepStyle.DeepStyle.DeepStyle(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
DeepStyle: Learning User Preferences for Visual Recommendation
For further details, please refer to the paper
- Parameters
lr – Learning rate
epochs – Number of epochs
factors – Number of latent factors
batch_size – Batch size
l_w – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  DeepStyle:
    meta:
      save_recs: True
    lr: 0.0005
    epochs: 50
    factors: 100
    batch_size: 128
    l_w: 0.000025
property name¶
Module description:
class elliot.recommender.visual_recommenders.VBPR.VBPR.VBPR(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback
For further details, please refer to the paper
- Parameters
lr – Learning rate
epochs – Number of epochs
factors – Number of latent factors
factors_d – Dimension of visual factors
batch_size – Batch size
l_w – Regularization coefficient
l_b – Regularization coefficient of bias
l_e – Regularization coefficient of projection matrix
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  VBPR:
    meta:
      save_recs: True
    lr: 0.0005
    epochs: 50
    factors: 100
    factors_d: 20
    batch_size: 128
    l_w: 0.000025
    l_b: 0
    l_e: 0.002
property name¶
Module description:
class elliot.recommender.visual_recommenders.VBPR.VBPR_model.VBPR_model(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_config()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).
- Returns
Python dictionary.
Module description:
class elliot.recommender.visual_recommenders.VNPR.visual_neural_personalized_ranking.VNPR(data, config, params, *args, **kwargs)[source]¶ Bases: elliot.recommender.recommender_utils_mixin.RecMixin, elliot.recommender.base_recommender_model.BaseRecommenderModel
Visual Neural Personalized Ranking for Image Recommendation
For further details, please refer to the paper
- Parameters
lr – Learning rate
epochs – Number of epochs
mf_factors – Number of latent factors for matrix factorization
mlp_hidden_size – Tuple with number of units for each multi-layer perceptron layer
prob_keep_dropout – Dropout rate for multi-layer perceptron
batch_size – Batch size
l_w – Regularization coefficient
To include the recommendation model, add it to the config file adopting the following pattern:
models:
  VNPR:
    meta:
      save_recs: True
    lr: 0.001
    epochs: 50
    mf_factors: 10
    mlp_hidden_size: (32, 1)
    prob_keep_dropout: 0.2
    batch_size: 64
    l_w: 0.001
property name¶
Module description:
class elliot.recommender.visual_recommenders.VNPR.visual_neural_personalized_ranking_model.VNPRModel(*args, **kwargs)[source]¶ Bases: tensorflow.python.keras.engine.training.Model
get_recs(inputs, training=False, **kwargs)[source]¶ Get full predictions on the whole users/items matrix.
- Returns
The matrix of predicted values.
Module description:
Submodules¶
elliot.recommender.base_recommender_model module¶
Module description:
class elliot.recommender.base_recommender_model.BaseRecommenderModel(data, config, params, *args, **kwargs)[source]¶ Bases: abc.ABC
autoset_params()[source]¶ Define parameters as tuples: (variable_name, public_name, shortcut, default, reading_function, printing_function). Example:
self._params_list = [
    ("_similarity", "similarity", "sim", "cosine", None, None),
    ("_user_profile_type", "user_profile", "up", "tfidf", None, None),
    ("_item_profile_type", "item_profile", "ip", "tfidf", None, None),
    ("_mlpunits", "mlp_units", "mlpunits", "(1,2,3)", lambda x: list(make_tuple(x)), lambda x: str(x).replace(",", "-")),
]
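The reading_function in the last tuple converts the raw configuration string into a Python value, and the printing_function renders it for report names. A sketch, assuming make_tuple is ast.literal_eval (an assumption for illustration):

from ast import literal_eval as make_tuple

read = lambda x: list(make_tuple(x))       # reading_function: "(1,2,3)" -> [1, 2, 3]
show = lambda x: str(x).replace(",", "-")  # printing_function: [1, 2, 3] -> "[1- 2- 3]"

print(read("(1,2,3)"))        # [1, 2, 3]
print(show(read("(1,2,3)")))  # [1- 2- 3]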
elliot.recommender.recommender_utils_mixin module¶
Module contents¶
Module description:
elliot.result_handler package¶
Submodules¶
elliot.result_handler.result_handler module¶
Module description:
class elliot.result_handler.result_handler.HyperParameterStudy(rel_threshold=1)[source]¶ Bases: object
Module contents¶
Module description:
elliot.splitter package¶
Submodules¶
elliot.splitter.base_splitter module¶
class elliot.splitter.base_splitter.Splitter(data: pandas.DataFrame, splitting_ns: types.SimpleNamespace)[source]¶ Bases: object
generic_split_function(data: pandas.DataFrame, **kwargs) → List[Tuple[pandas.DataFrame, pandas.DataFrame]][source]¶
handle_hierarchy(data: pandas.DataFrame, valtest_splitting_ns: types.SimpleNamespace) → List[Tuple[pandas.DataFrame, pandas.DataFrame]][source]¶
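A minimal sketch of driving the Splitter directly; the attribute names inside the namespace are assumptions that mirror the YAML splitting options:

import pandas as pd
from types import SimpleNamespace
from elliot.splitter.base_splitter import Splitter

data = pd.DataFrame({"userId": [1, 1, 2], "itemId": [10, 11, 10], "rating": [5.0, 3.0, 4.0]})
splitting_ns = SimpleNamespace(
    test_splitting=SimpleNamespace(strategy="random_subsampling", test_ratio=0.2)  # hypothetical names
)
splitter = Splitter(data, splitting_ns)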
Module contents¶
elliot.utils package¶
Submodules¶
elliot.utils.folder module¶
Module description:
elliot.utils.logger_util module¶
elliot.utils.logging module¶
elliot.utils.read module¶
Module description:
elliot.utils.read.find_checkpoint(dir, restore_epochs, epochs, rec, best=0)[source]¶
- Parameters
dir – directory of the model from which reading starts
restore_epochs – epoch from which to restore
epochs – epochs from which we restore (0 means the best checkpoint is used)
rec – recommender model
best – 0: no best-checkpoint search; 1: search for the best checkpoint
- Returns
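A hypothetical call, with every argument value chosen purely for illustration:

from elliot.utils.read import find_checkpoint

checkpoint = find_checkpoint(dir="weights/BPRMF/", restore_epochs=10, epochs=50, rec="BPRMF", best=0)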
elliot.utils.read.load_obj(name)[source]¶ Load a pickled object by name.
- Parameters
name – name of the file
elliot.utils.read.read_config(sections_fields)[source]¶
- Parameters
sections_fields (list) – list of fields to retrieve from configuration file
- Returns
A list of configuration values.
elliot.utils.read.read_csv(filename)[source]¶
- Parameters
filename (str) – csv file path
- Returns
A pandas dataframe.
elliot.utils.read.read_imagenet_classes_txt(filename)[source]¶
- Parameters
filename (str) – txt file path
- Returns
A list with 1000 imagenet classes as strings.
elliot.utils.write module¶
Module description:
elliot.utils.write.save_np(npy, filename)[source]¶ Store a numpy array to disk.
- Parameters
npy – numpy array to save
filename (str) – destination file name
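A usage sketch with an illustrative array and file name:

import numpy as np
from elliot.utils.write import save_np

save_np(np.zeros((10, 64)), "user_embeddings.npy")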
Module contents¶
Module description:
Submodules¶
elliot.run module¶
Module description:
Module contents¶
Module description: