User 1095 | 12/18/2014, 1:38:40 PM
We have found an issue when evaluating predictions from a Boosted Trees classifier. When we evaluate with the classifier's own evaluate method we get an accuracy of 0.9864864864864865, which appears to be wrong; when we use the evaluation module instead, e.g. graphlab.evaluation.accuracy(pred, test), we get an accuracy of 0.5933660933660934. The confusion matrix looks the same with both methods.
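For reference, this is a minimal sketch of how we are making the two calls. The data file and the 'label' column name are placeholders for our own dataset, and the argument order for graphlab.evaluation.accuracy (targets first, predictions second) follows the documentation as we understand it:

import graphlab as gl

# Placeholder data and target column -- substitute your own.
train, test = gl.SFrame('my_data.csv').random_split(0.8, seed=1)

model = gl.boosted_trees_classifier.create(train, target='label')

# Path 1: the classifier's own evaluate method
builtin = model.evaluate(test)
print(builtin['accuracy'])        # reports ~0.9865 for us

# Path 2: the standalone evaluation module
pred = model.predict(test)        # predicted class labels
manual = gl.evaluation.accuracy(test['label'], pred)
print(manual)                     # reports ~0.5934 for us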
With a Neural Network classifier we see a similar issue: the model reports an accuracy of 0.03500000014901161 while evaluation.accuracy returns 0.03814713896457766, a small difference, but still a difference.
Hope to hear from you soon.