I compared the results of evaluateprecisionrecall with [ml_metrics.mapk](https://github.com/benhamner/Metrics/blob/master/Python/mlmetrics/averageprecision.py "ml_metrics.mapk") and the numbers are quite different. It could be that I'm using ml_metrics wrong, however the numbers I get are close to the leaderboard score I get from Kaggle.
On reading up on MAP@K and AP@K, then looking at your docs, I'm unclear whether they are the same. The Wikipedia article defines AP@K in a way that isn't the same as averaging the precision over user_id at each cutoff.
As I understand it, MAP@K is important for ranking recommenders because it takes into account the order of the recommendations, i.e., if the correct recommendation is ranked first, MAP@K is higher than if it's ranked 2nd, and so on.
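To make the comparison concrete, here is a minimal sketch of AP@K/MAP@K in the style of the ml_metrics implementation (a reimplementation for illustration, not the library's exact code). It shows the order sensitivity described above: a hit in position 1 scores higher than the same hit in position 2.

```python
def apk(actual, predicted, k=10):
    """Average precision at k: precision is accumulated at each rank where
    a relevant item first appears, then divided by min(len(actual), k)."""
    if not actual:
        return 0.0
    predicted = predicted[:k]
    score = 0.0
    hits = 0
    for i, p in enumerate(predicted):
        # count a hit only the first time a relevant item appears
        if p in actual and p not in predicted[:i]:
            hits += 1
            score += hits / (i + 1)  # precision at this rank
    return score / min(len(actual), k)

def mapk(actual, predicted, k=10):
    """Mean of apk over all users/queries."""
    return sum(apk(a, p, k) for a, p in zip(actual, predicted)) / len(actual)

# order matters: the same correct item in rank 1 vs. rank 2
print(apk([1], [1, 2, 3], k=3))  # -> 1.0
print(apk([1], [2, 1, 3], k=3))  # -> 0.5
```

Averaging plain precision@k over users would give the same value for both of the calls above, so if evaluateprecisionrecall does that, the two metrics are genuinely different rather than one of them being buggy.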