Precision & Recall

User 2570 | 2/2/2016, 12:07:45 PM

1. Why is the following not plotting the precision & recall curve?

%matplotlib inline
model_performance = graphlab.recommender.util.compare_models(
    test_data, [popularity_model, personalized_model], user_sample=0.05)

2. I have the following precision & recall results for M0 and M1:

M0
+--------+-----------------+------------------+
| cutoff |  mean_precision |   mean_recall    |
+--------+-----------------+------------------+
|   1    | 0.0330945069942 | 0.00783130588658 |
|   2    | 0.0291709314227 | 0.0151057858325  |
|   3    | 0.025815989992  | 0.0198920644161  |
|   4    | 0.0240532241556 | 0.0244239431292  |
|   5    | 0.0221084953941 | 0.0285971884744  |
|   6    | 0.0213806436938 | 0.0333037228892  |
|   7    | 0.0199834283765 | 0.0358899289043  |
|   8    | 0.0188502217673 | 0.0385522197958  |
|   9    | 0.0181583835627 | 0.0416415439445  |
|   10   | 0.0172637325145 | 0.043681315841   |
+--------+-----------------+------------------+

M1
+--------+-----------------+-----------------+
| cutoff |  mean_precision |   mean_recall   |
+--------+-----------------+-----------------+
|   1    | 0.194814056636  | 0.0561488101534 |
|   2    | 0.16461958376   | 0.091048105672  |
|   3    | 0.143068349824  | 0.119491381377  |
|   4    | 0.127601501194  | 0.141945094785  |
|   5    | 0.117229614466  | 0.16043214055   |
|   6    | 0.108154213579  | 0.17788838687   |
|   7    | 0.100209582298  | 0.192220120188  |
|   8    | 0.0938672807915 | 0.205334631819  |
|   9    | 0.0879108381667 | 0.215927358021  |
|   10   | 0.0832821562607 | 0.225314998319  |
+--------+-----------------+-----------------+
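
For a quick manual look at these numbers, here is a minimal matplotlib sketch that plots the two curves directly from the tables above (values copied verbatim; an illustrative fallback, not the GraphLab Canvas output):

import matplotlib.pyplot as plt

# Values copied from the M0 and M1 tables above (cutoffs 1-10).
m0_precision = [0.0330945069942, 0.0291709314227, 0.025815989992, 0.0240532241556,
                0.0221084953941, 0.0213806436938, 0.0199834283765, 0.0188502217673,
                0.0181583835627, 0.0172637325145]
m0_recall = [0.00783130588658, 0.0151057858325, 0.0198920644161, 0.0244239431292,
             0.0285971884744, 0.0333037228892, 0.0358899289043, 0.0385522197958,
             0.0416415439445, 0.043681315841]
m1_precision = [0.194814056636, 0.16461958376, 0.143068349824, 0.127601501194,
                0.117229614466, 0.108154213579, 0.100209582298, 0.0938672807915,
                0.0879108381667, 0.0832821562607]
m1_recall = [0.0561488101534, 0.091048105672, 0.119491381377, 0.141945094785,
             0.16043214055, 0.17788838687, 0.192220120188, 0.205334631819,
             0.215927358021, 0.225314998319]

# Precision-recall curve: recall on the x-axis, precision on the y-axis.
plt.plot(m0_recall, m0_precision, marker='o', label='M0 (popularity)')
plt.plot(m1_recall, m1_precision, marker='o', label='M1 (personalized)')
plt.xlabel('mean_recall')
plt.ylabel('mean_precision')
plt.legend()
plt.show()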

How can I interpret these results, i.e., how do M0's precision and recall compare to M1's?

Thank you, Dato community, for the help.
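
For context when reading the tables: at each cutoff k, mean_precision and mean_recall average the per-user precision@k and recall@k over the sampled users. A minimal sketch of those per-user quantities (a hypothetical helper for illustration, not part of the GraphLab API):

def precision_recall_at_k(recommended, relevant, k):
    """Precision@k and recall@k for a single user.

    recommended: ranked list of recommended item ids
    relevant:    set of item ids the user actually interacted with
    """
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    precision = hits / float(k)            # fraction of the top-k that is relevant
    recall = hits / float(len(relevant))   # fraction of relevant items recovered
    return precision, recall

# Example: 2 of the top-5 recommendations are relevant, out of 4 relevant items.
print(precision_recall_at_k(['a', 'b', 'c', 'd', 'e'], {'b', 'd', 'x', 'y'}, 5))
# -> (0.4, 0.5)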

Comments

User 2570 | 2/2/2016, 12:42:06 PM

There is also a problem when saving the model for later use:

model.save("my_model")

NameError                                 Traceback (most recent call last)
<ipython-input-25-157c9c528bd8> in <module>()
----> 1 model.save("my_model")

NameError: name 'model' is not defined


User 1359 | 2/8/2016, 11:55:46 PM

Hello,

  1. Try running the following after re-running the code you quoted in question 1:

     graphlab.show_comparison([popularity_model, personalized_model], model_performance)

  2. Try the visual plot from the code above to compare model performance. In the specific case you quoted, model 1 (M1) has better precision and recall than model 0 (M0) at every cutoff value.

    If the precision/recall rankings varied across cutoffs, you would instead evaluate performance at or near the cutoff value you intend to use with the model.

  3. It looks like you are not using the correct variable name for the model. If these are the models from question 1, save them under their own names (see the save/load sketch after this list):

     popularity_model.save("popularity_model")
     personalized_model.save("personalized_model")
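
A minimal save-and-reload round trip, assuming the model variables from question 1 (graphlab.load_model is the standard GraphLab Create loader):

import graphlab

# Save each model under an explicit path; the variables must exist
# in the current session.
popularity_model.save("popularity_model")
personalized_model.save("personalized_model")

# Later (e.g. in a new session), reload them from the saved paths.
popularity_model = graphlab.load_model("popularity_model")
personalized_model = graphlab.load_model("personalized_model")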

Let me know if you still have issues.
Dick


User 2570 | 2/9/2016, 4:13:12 AM

Thank you so much, @Dick Kreisberg!