Fairness
Elliot integrates the following fairness metrics.
Summary
- Bias Disparity - Standard
- Bias Disparity - Bias Recommendations
- Bias Disparity - Bias Source
- Item MAD Ranking-based
- Item MAD Rating-based
- User MAD Ranking-based
- User MAD Rating-based
- Ranking-based Equal Opportunity
- Ranking-based Statistical Parity
BiasDisparity BD
class elliot.evaluation.metrics.fairness.BiasDisparity.BiasDisparityBD.BiasDisparityBD(recommendations, config, params, eval_objects, additional_data)
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Bias Disparity - Standard
This class represents the implementation of the Bias Disparity recommendation metric.
For further details, please refer to the original paper.
\[\mathrm {BD(G, C)}=\frac{B_{R}(G, C)-B_{S}(G, C)}{B_{S}(G, C)}\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: BiasDisparityBD
    user_clustering_name: Happiness
    user_clustering_file: ../data/movielens_1m/u_happy.tsv
    item_clustering_name: ItemPopularity
    item_clustering_file: ../data/movielens_1m/i_pop.tsv
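As a purely illustrative reading of the formula (the numbers below are invented): if a user group G has a source bias of \(B_{S}(G, C) = 0.30\) towards item category C, while the recommendations delivered to that group exhibit \(B_{R}(G, C) = 0.45\), then
\[\mathrm {BD(G, C)} = \frac{0.45 - 0.30}{0.30} = 0.5\]
i.e., the recommender amplifies the group's preference for C by 50%; a negative value would indicate that the input bias is attenuated in the recommendations.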
BiasDisparity BR
class elliot.evaluation.metrics.fairness.BiasDisparity.BiasDisparityBR.BiasDisparityBR(recommendations, config, params, eval_objects, additional_data)
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Bias Disparity - Bias Recommendations
This class represents the implementation of the Bias Disparity - Bias Recommendations recommendation metric.
For further details, please refer to the original paper.
\[\mathrm {BD(G, C)}=\frac{B_{R}(G, C)-B_{S}(G, C)}{B_{S}(G, C)}\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: BiasDisparityBR
    user_clustering_name: Happiness
    user_clustering_file: ../data/movielens_1m/u_happy.tsv
    item_clustering_name: ItemPopularity
    item_clustering_file: ../data/movielens_1m/i_pop.tsv
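The format of the clustering files is not documented in this section; as an assumption drawn from the file names used above (u_happy.tsv, i_pop.tsv), they are treated here as tab-separated files mapping a user or item id to a group id. A purely hypothetical excerpt:

    # hypothetical ../data/movielens_1m/u_happy.tsv: <user_id> <tab> <group_id>
    1    0
    2    1
    3    0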
BiasDisparity BS
class elliot.evaluation.metrics.fairness.BiasDisparity.BiasDisparityBS.BiasDisparityBS(recommendations, config, params, eval_objects, additional_data)
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Bias Disparity - Bias Source
This class represents the implementation of the Bias Disparity - Bias Source recommendation metric.
For further details, please refer to the original paper.
\[\mathrm {B_{S}(G, C)}=\frac{PR_{S}(G, C)}{P(C)}\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: BiasDisparityBS
    user_clustering_name: Happiness
    user_clustering_file: ../data/movielens_1m/u_happy.tsv
    item_clustering_name: ItemPopularity
    item_clustering_file: ../data/movielens_1m/i_pop.tsv
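The source bias \(B_{S}(G, C)\) compares the preference ratio of group G for category C in the input data with the overall share of category C in the catalog. The following is a minimal sketch of that computation on toy data; it is not Elliot's implementation, and all names (interactions, user_group, item_category) are illustrative.

    # Minimal sketch of B_S(G, C) = PR_S(G, C) / P(C) on invented toy data.
    interactions = [(1, 10), (1, 11), (2, 10), (2, 12), (3, 11), (3, 12)]  # (user, item)
    user_group = {1: "G1", 2: "G1", 3: "G2"}
    item_category = {10: "popular", 11: "popular", 12: "niche"}

    def source_bias(group, category):
        # PR_S(G, C): fraction of group G's interactions that fall in category C.
        group_inter = [(u, i) for u, i in interactions if user_group[u] == group]
        pr_s = sum(item_category[i] == category for _, i in group_inter) / len(group_inter)
        # P(C): share of category C among all catalog items (an assumption on the normalization).
        p_c = sum(c == category for c in item_category.values()) / len(item_category)
        return pr_s / p_c

    print(source_bias("G1", "popular"))  # 0.75 / (2/3) = 1.125 for the toy data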
ItemMADranking
class elliot.evaluation.metrics.fairness.MAD.ItemMADranking.ItemMADranking(recommendations, config, params, eval_objects, additional_data)
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Item MAD Ranking-based
This class represents the implementation of the Item MAD ranking recommendation metric.
For further details, please refer to the original paper.
\[\mathrm {MAD}={avg}_{i, j}({MAD}(R^{(i)}, R^{(j)}))\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: ItemMADranking
    clustering_name: ItemPopularity
    clustering_file: ../data/movielens_1m/i_pop.tsv
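The MAD formula above averages, over all pairs of item groups, the absolute difference between the groups' mean scores \(R^{(i)}\) and \(R^{(j)}\). A minimal sketch, assuming each item carries a ranking-derived score (e.g. how often it appears in top-k lists) and a group label taken from the clustering file; the names and data are illustrative, not Elliot's internals:

    from itertools import combinations

    # Hypothetical per-item ranking-based scores and group labels (e.g. from i_pop.tsv).
    item_score = {10: 0.80, 11: 0.70, 12: 0.20, 13: 0.10}
    item_group = {10: "head", 11: "head", 12: "tail", 13: "tail"}

    def item_mad(scores, groups):
        # Mean score R^(g) for each item group.
        group_means = {}
        for g in set(groups.values()):
            vals = [scores[i] for i in scores if groups[i] == g]
            group_means[g] = sum(vals) / len(vals)
        # Average absolute difference over all pairs of groups.
        pairs = list(combinations(group_means.values(), 2))
        return sum(abs(a - b) for a, b in pairs) / len(pairs)

    print(item_mad(item_score, item_group))  # |0.75 - 0.15| = 0.6 for the two toy groups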
ItemMADrating
class elliot.evaluation.metrics.fairness.MAD.ItemMADrating.ItemMADrating(recommendations, config, params, eval_objects, additional_data)
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Item MAD Rating-based
This class represents the implementation of the Item MAD rating recommendation metric.
For further details, please refer to the original paper.
\[\mathrm {MAD}={avg}_{i, j}({MAD}(R^{(i)}, R^{(j)}))\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: ItemMADrating
    clustering_name: ItemPopularity
    clustering_file: ../data/movielens_1m/i_pop.tsv
UserMADranking
class elliot.evaluation.metrics.fairness.MAD.UserMADranking.UserMADranking(recommendations, config, params, eval_objects, additional_data)
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
User MAD Ranking-based
This class represents the implementation of the User MAD ranking recommendation metric.
For further details, please refer to the original paper.
\[\mathrm {MAD}={avg}_{i, j}({MAD}(R^{(i)}, R^{(j)}))\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: UserMADranking
    clustering_name: Happiness
    clustering_file: ../data/movielens_1m/u_happy.tsv
UserMADrating
class elliot.evaluation.metrics.fairness.MAD.UserMADrating.UserMADrating(recommendations, config, params, eval_objects, additional_data)
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
User MAD Rating-based
This class represents the implementation of the User MAD rating recommendation metric.
For further details, please refer to the original paper.
\[\mathrm {MAD}={avg}_{i, j}({MAD}(R^{(i)}, R^{(j)}))\]
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: UserMADrating
    clustering_name: Happiness
    clustering_file: ../data/movielens_1m/u_happy.tsv
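For the rating-based user variant, the quantity compared across groups is plausibly the mean predicted rating of each user group. A minimal, hypothetical sketch (again not Elliot's implementation; the predicted ratings and the Happiness groups are invented):

    from itertools import combinations

    # Hypothetical predicted ratings per user and user groups (e.g. from u_happy.tsv).
    predicted = {1: [4.5, 4.0, 3.5], 2: [4.0, 4.0], 3: [2.5, 3.0, 2.0]}
    user_group = {1: "happy", 2: "happy", 3: "unhappy"}

    def user_mad_rating(predicted, user_group):
        # Mean predicted rating per user group (ratings pooled within each group).
        means = {}
        for g in set(user_group.values()):
            vals = [r for u, rs in predicted.items() if user_group[u] == g for r in rs]
            means[g] = sum(vals) / len(vals)
        # Average absolute pairwise difference between group means.
        pairs = list(combinations(means.values(), 2))
        return sum(abs(a - b) for a, b in pairs) / len(pairs)

    print(user_mad_rating(predicted, user_group))  # happy mean 4.0, unhappy mean 2.5 -> 1.5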
REO
class elliot.evaluation.metrics.fairness.reo.reo.REO(recommendations, config, params, eval_objects, additional_data)
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Ranking-based Equal Opportunity
This class represents the implementation of the Ranking-based Equal Opportunity (REO) recommendation metric.
For further details, please refer to the original paper.
\[\mathrm {REO}=\frac{{std}\left(P\left(R @ k \mid g=g_{1}, y=1\right) \ldots P\left(R @ k \mid g=g_{A}, y=1\right)\right)} {{mean}\left(P\left(R @ k \mid g=g_{1}, y=1\right) \ldots P\left(R @ k \mid g=g_{A}, y=1\right)\right)}\]
where
\(P\left(R @ k \mid g=g_{a}, y=1\right) = \frac{\sum_{u=1}^{N} \sum_{i=1}^{k} G_{g_{a}}\left(R_{u, i}\right) Y\left(u, R_{u, i}\right)} {\sum_{u=1}^{N} \sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i) Y(u, i)}\)
\(Y\left(u, R_{u, i}\right)\) identifies the ground-truth label of the user-item pair \((u, R_{u, i})\): it returns 1 if item \(R_{u, i}\) is liked by user \(u\), and 0 otherwise.
\(\sum_{i=1}^{k} G_{g_{a}}\left(R_{u, i}\right) Y\left(u, R_{u, i}\right)\) counts how many test-set items from group \(g_{a}\) are ranked in the top-\(k\) for user \(u\).
\(\sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i) Y(u, i)\) counts the total number of test-set items from group \(g_{a}\) for user \(u\).
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: REO
    clustering_name: ItemPopularity
    clustering_file: ../data/movielens_1m/i_pop.tsv
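Once the per-group probabilities \(P\left(R @ k \mid g=g_{a}, y=1\right)\) are available, REO reduces to their coefficient of variation (standard deviation divided by mean). A minimal sketch with invented probabilities (the use of the population standard deviation is an assumption):

    import statistics

    # Hypothetical per-group hit probabilities P(R@k | g = g_a, y = 1) for three item groups.
    per_group_prob = [0.12, 0.08, 0.10]

    # REO is the relative standard deviation of these probabilities: 0 means the relevant
    # items of every group have the same chance of being ranked in the top-k.
    reo = statistics.pstdev(per_group_prob) / statistics.mean(per_group_prob)
    print(reo)  # ~0.163 for the toy values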
RSP
class elliot.evaluation.metrics.fairness.rsp.rsp.RSP(recommendations, config, params, eval_objects, additional_data)
Bases: elliot.evaluation.metrics.base_metric.BaseMetric
Ranking-based Statistical Parity
This class represents the implementation of the Ranking-based Statistical Parity (RSP) recommendation metric.
For further details, please refer to the original paper.
\[\mathrm {RSP}=\frac{{std}(P(R @ k \mid g=g_{1}), \ldots, P(R @ k \mid g=g_{A}))} {{mean}(P(R @ k \mid g=g_{1}), \ldots, P(R @ k \mid g=g_{A}))}\]
where
\(P(R @ k \mid g=g_{a}) = \frac{\sum_{u=1}^{N} \sum_{i=1}^{k} G_{g_{a}}(R_{u, i})} {\sum_{u=1}^{N} \sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i)}\)
\(\sum_{i=1}^{k} G_{g_{a}}(R_{u, i})\) counts how many un-interacted items from group \(g_{a}\) are ranked in the top-\(k\) for user \(u\).
\(\sum_{i \in I \backslash I_{u}^{+}} G_{g_{a}}(i)\) counts how many un-interacted items belong to group \(g_{a}\) for user \(u\).
To compute the metric, add it to the config file adopting the following pattern:
complex_metrics:
  - metric: RSP
    clustering_name: ItemPopularity
    clustering_file: ../data/movielens_1m/i_pop.tsv
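As a worked toy example of the RSP formula (numbers invented): with two item groups whose top-k recommendation probabilities are \(P(R @ k \mid g=g_{1}) = 0.12\) and \(P(R @ k \mid g=g_{2}) = 0.08\), the mean is 0.10 and the (population) standard deviation is 0.02, so
\[\mathrm {RSP} = \frac{0.02}{0.10} = 0.2\]
A value of 0 would mean that un-interacted items from every group are equally likely to be recommended in the top-k, regardless of group membership.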