Adding a MAP@K (Mean Average Precision @ K) metric to recommender evaluation

User 2568 | 5/10/2016, 2:51:00 AM

With recommender systems it's usual to provide K recommendations. MAP@K is a useful metric for this use case.

Comments

User 1207 | 5/11/2016, 6:50:18 PM

Hello Kevin,

If you call evaluate_precision_recall, it gives you both by-user and overall precision and recall. The overall values are the precision/recall averaged over users at each value of K, which should be what you're looking for.
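
For example (a minimal sketch; the SFrame and column names here are placeholders, and I'm assuming an item-similarity model trained with GraphLab Create):

```python
import graphlab as gl

# Placeholder data: 'train' and 'test' are SFrames with user_id / item_id columns.
model = gl.recommender.item_similarity_recommender.create(
    train, user_id='user_id', item_id='item_id')

# Returns both by-user and overall precision/recall at each requested cutoff K.
pr = model.evaluate_precision_recall(test, cutoffs=[1, 5, 10])

# The overall numbers are averaged over users at each cutoff.
print(pr['precision_recall_overall'])
```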

Hope that helps! -- Hoyt


User 2568 | 5/11/2016, 11:28:49 PM

@hoytak I compared the results of evaluate_precision_recall with [ml_metrics.mapk](https://github.com/benhamner/Metrics/blob/master/Python/ml_metrics/average_precision.py "ml_metrics.mapk") and the numbers are quite different. It could be that I'm using ml_metrics wrong; however, the numbers I get are close to the leaderboard score I get from Kaggle.

On reading up on MAP@K and AP@K, then looking at your docs, I'm unclear whether they are the same. The Wikipedia article defines AP@K, which isn't the same as averaging the precision over user_id at each cutoff.

As I understand it, MAP@K is important for ranking recommenders because it takes the order of the recommendations into account, i.e., if the correct recommendation is ranked first, MAP@K is higher than if it's second, and so on.
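
For concreteness, this is roughly what ml_metrics computes (a plain-Python sketch along the lines of the linked average_precision.py, not a drop-in copy):

```python
def apk(actual, predicted, k=10):
    """Average precision at k: each hit is credited with the precision at its
    rank, so hits that appear earlier in the list are worth more."""
    predicted = predicted[:k]
    hits, score = 0.0, 0.0
    for i, p in enumerate(predicted):
        if p in actual and p not in predicted[:i]:
            hits += 1.0
            score += hits / (i + 1.0)
    if not actual:
        return 0.0
    return score / min(len(actual), k)

def mapk(actual, predicted, k=10):
    """Mean of apk over all users."""
    return sum(apk(a, p, k) for a, p in zip(actual, predicted)) / len(actual)

# Order matters: the same single hit scores differently depending on its rank.
print(apk(['a'], ['a', 'x', 'y'], k=3))  # 1.0
print(apk(['a'], ['x', 'y', 'a'], k=3))  # ~0.33
```

That rank weighting is exactly what averaging plain precision@K over users doesn't capture, which is why I'd expect the two numbers to differ.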


User 1207 | 5/12/2016, 6:46:10 PM

Hey @Kevin_McIsaac,

I see what you mean. I'll look into this with the team here.

Thanks! -- Hoyt