elliot.evaluation.metrics.accuracy.map package

Submodules

elliot.evaluation.metrics.accuracy.map.map module

This is the implementation of the Mean Average Precision metric. It proceeds from a user-wise computation and averages the resulting values over all users.

class elliot.evaluation.metrics.accuracy.map.map.MAP(recommendations, config, params, eval_objects)[source]

Bases: elliot.evaluation.metrics.base_metric.BaseMetric

Mean Average Precision

This class represents the implementation of the Mean Average Precision recommendation metric. Passing ‘MAP’ to the metrics list will enable the computation of the metric.

For further details, please refer to the link

Note

In this case the normalization factor used is \(\frac{1}{\min (m,N)}\), which prevents the AP score from being unfairly suppressed when the number of recommendations \(N\) cannot possibly cover all \(m\) relevant items.

\[\begin{split}\begin{align*}
\mathrm{AP@N} &= \frac{1}{\min(m,N)} \sum_{k=1}^{N} P(k) \cdot \mathrm{rel}(k) \\
\mathrm{MAP@N} &= \frac{1}{|U|} \sum_{u=1}^{|U|} (\mathrm{AP@N})_u
\end{align*}\end{split}\]

To compute the metric, add it to the config file adopting the following pattern:

simple_metrics: [MAP]
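As a rough illustration of the formula above, the following is a minimal standalone sketch of AP@N and MAP@N with the \(\frac{1}{\min(m,N)}\) normalization. The function names and data layout (per-user ranked lists and relevance sets) are illustrative assumptions, not Elliot's internal API:

```python
def average_precision_at_n(ranked_items, relevant_items, n):
    """AP@N = (1 / min(m, N)) * sum_{k=1}^{N} P(k) * rel(k)."""
    hits = 0
    precision_sum = 0.0
    for k, item in enumerate(ranked_items[:n], start=1):
        if item in relevant_items:
            hits += 1
            precision_sum += hits / k  # P(k) at a relevant position
    denom = min(len(relevant_items), n)  # the min(m, N) normalization
    return precision_sum / denom if denom else 0.0


def map_at_n(recommendations, ground_truth, n):
    """MAP@N: AP@N averaged over all users."""
    users = list(recommendations)
    return sum(
        average_precision_at_n(recommendations[u], ground_truth[u], n)
        for u in users
    ) / len(users)
```

For example, a user whose relevant items appear at ranks 1 and 3 of a top-3 list obtains \(\mathrm{AP@3} = \frac{1}{2}(1 + \frac{2}{3}) = \frac{5}{6}\).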
eval_user_metric()[source]

Evaluation function

Returns: the value of Mean Average Precision, averaged over all users

static name()[source]

Metric name getter

Returns: the public name of the metric

Module contents