User 2574 | 6/9/2016, 6:57:13 PM
Comparing two outputs of calling
predict_topk from the same model:
a. k=5, output_type = 'probability'
b. k=5, output_type = 'rank'
In the results from output b, I see a constant label (or class) at position 4 (ranking starts from 0). By "constant" I mean that if you have a class called 'cat', then 'cat' appears at position 4 for every instance of the test data [not expected]. With output a, position 4 gives different labels for different instances [as expected]. At positions 0-3, the outputs of a (after sorting by probability) and b match, but at rank 4 the two results do not.
Can you please try a multi-class example of your own and see whether you get something similar when the output type is 'rank'?
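To make the reproduction concrete, here is a hedged sketch of the check I am describing. It does not call `predict_topk` itself; it assumes both outputs have been converted to plain Python structures (`prob_out[i]` as a `{label: probability}` dict and `rank_out[i]` as a list of labels ordered by rank), and the toy data and function names are hypothetical, just illustrating the symptom:

```python
def topk_by_probability(prob_row, k=5):
    """Sort labels by descending probability and keep the first k."""
    return [label for label, _ in
            sorted(prob_row.items(), key=lambda kv: -kv[1])[:k]]

def mismatch_positions(prob_out, rank_out, k=5):
    """For each test instance, list the rank positions where the
    probability-derived ordering disagrees with the 'rank' output.
    If the two outputs are consistent, every list is empty."""
    report = []
    for prob_row, rank_row in zip(prob_out, rank_out):
        expected = topk_by_probability(prob_row, k)
        report.append([pos for pos in range(k)
                       if expected[pos] != rank_row[pos]])
    return report

# Toy data mimicking the reported symptom: positions 0-3 agree,
# but position 4 of the 'rank' output is stuck on 'cat'.
prob_out = [
    {'ant': .4, 'bee': .3, 'cow': .15, 'dog': .1, 'elk': .04, 'cat': .01},
    {'bee': .5, 'dog': .2, 'elk': .15, 'ant': .1, 'cow': .04, 'cat': .01},
]
rank_out = [
    ['ant', 'bee', 'cow', 'dog', 'cat'],   # 'elk' expected at position 4
    ['bee', 'dog', 'elk', 'ant', 'cat'],   # 'cow' expected at position 4
]

print(mismatch_positions(prob_out, rank_out))  # [[4], [4]]
```

If your model behaves correctly, every inner list should be empty; in my runs, every instance reports a mismatch at position 4 only.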