elliot.evaluation.metrics.bias.pop_reo package

Submodules

elliot.evaluation.metrics.bias.pop_reo.extended_pop_reo module

This is the implementation of the Popularity-based Ranking-based Equal Opportunity (REO) metric. It proceeds from a user-wise computation and averages the values over the users.

class elliot.evaluation.metrics.bias.pop_reo.extended_pop_reo.ExtendedPopREO(recommendations, config, params, eval_objects, additional_data)[source]

Bases: elliot.evaluation.metrics.base_metric.BaseMetric

Extended Popularity-based Ranking-based Equal Opportunity

This class represents the implementation of the Extended Popularity-based Ranking-based Equal Opportunity (REO) recommendation metric.

For further details, please refer to the paper.

To compute the metric, add it to the config file adopting the following pattern:

complex_metrics:
- metric: ExtendedPopREO
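
Once this block is part of a complete experiment file, the run can be launched through Elliot's run_experiment entry point. A minimal sketch, assuming the configuration is saved as config.yml (the filename is illustrative):

from elliot.run import run_experiment

# Runs the experiment described in the YAML file; every metric listed in the
# evaluation section (including ExtendedPopREO above) is computed per model.
run_experiment("config.yml")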
eval()[source]

Evaluation function

:return: the overall averaged value of PopREO

static name()[source]

Metric Name Getter

:return: the public name of the metric

elliot.evaluation.metrics.bias.pop_reo.pop_reo module

This is the implementation of the Popularity-based Ranking-based Equal Opportunity (REO) metric. It proceeds from a user-wise computation and averages the values over the users.

class elliot.evaluation.metrics.bias.pop_reo.pop_reo.PopREO(recommendations, config, params, eval_objects)[source]

Bases: elliot.evaluation.metrics.base_metric.BaseMetric

Popularity-based Ranking-based Equal Opportunity

This class represents the implementation of the Popularity-based Ranking-based Equal Opportunity (REO) recommendation metric.

For further details, please refer to the paper.

\[\mathrm{REO}=\frac{\mathrm{std}\left(P\left(R @ k \mid g=g_{1}, y=1\right), \ldots, P\left(R @ k \mid g=g_{A}, y=1\right)\right)}{\mathrm{mean}\left(P\left(R @ k \mid g=g_{1}, y=1\right), \ldots, P\left(R @ k \mid g=g_{A}, y=1\right)\right)}\]

\(P\left(R @ k \mid g=g_{a}, y=1\right) = \frac{\sum_{u=1}^{N} \sum_{i=1}^{k} G_{g_{a}}\left(R_{u, i}\right) Y\left(u, R_{u, i}\right)} {\sum_{u=1}^{N} \sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i) Y(u, i)}\)

\(Y\left(u, R_{u, i}\right)\) identifies the ground-truth label of the user-item pair \(\left(u, R_{u, i}\right)\): it returns 1 if item \(R_{u, i}\) is liked by user \(u\), and 0 otherwise

\(\sum_{i=1}^{k} G_{g_{a}}\left(R_{u, i}\right) Y\left(u, R_{u, i}\right)\) counts how many test-set items from group \(g_{a}\) are ranked in the top-\(k\) for user \(u\)

\(\sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i) Y(u, i)\) counts the total number of test-set items from group \(g_{a}\) for user \(u\)
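
For intuition, here is a minimal, self-contained sketch of this computation (not Elliot's internal implementation; the function and argument names are hypothetical). Given the two per-group sums just described, REO is the standard deviation of the per-group probabilities divided by their mean:

import numpy as np

def reo(hits_at_k: np.ndarray, test_totals: np.ndarray) -> float:
    """Illustrative REO computation; names are hypothetical.

    hits_at_k[a]   -- numerator sum: test items of group g_a ranked
                      in the top-k, accumulated over all users.
    test_totals[a] -- denominator sum: all test items of group g_a,
                      accumulated over all users.
    """
    # P(R@k | g=g_a, y=1) for each group a
    p = hits_at_k / test_totals
    # REO is the coefficient of variation of these probabilities:
    # 0 means all groups receive exactly equal ranking opportunity.
    return p.std() / p.mean()

# Two popularity groups (e.g., short head vs. long tail), invented counts:
# P = [120/400, 30/300] = [0.30, 0.10] -> REO = 0.10 / 0.20 = 0.5
print(reo(np.array([120.0, 30.0]), np.array([400.0, 300.0])))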

To compute the metric, add it to the config file adopting the following pattern:

simple_metrics: [PopREO]
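
A configuration containing this entry is launched in the same way as in the run_experiment sketch shown above for the complex-metric variant; simple metrics need only their name, with no per-metric block.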
eval()[source]

Evaluation function

:return: the overall averaged value of PopREO

static name()[source]

Metric Name Getter

:return: the public name of the metric

Module contents