Arguments order for graphlab.evaluation.accuracy()

User 1319 | 9/21/2015, 7:50:18 AM

Hi, I am confused by the order of the arguments passed to graphlab.evaluation.accuracy(targets, predictions). According to its documentation, the first argument, targets, is "the Ground truth class labels", and the second, predictions, is "the prediction that corresponds to each target value". However, this example in the user guide uses the opposite order. Which order is correct?

Note: I am assuming the case where I pass the predictions as class labels (not class probabilities).

def custom_evaluator(scorer, train, valid):
    yhat_train = scorer(train)
    yhat_valid = scorer(valid)
    return {'train_acc': gl.evaluation.accuracy(yhat_train, train['target']),
            'valid_acc': gl.evaluation.accuracy(yhat_valid, valid['target'])}

Update: I understand both orders may give the same accuracy number (binary classification), but getting the order right matters for multi-class evaluation.
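A quick pure-Python check (not using GraphLab, just the standard definition of these metrics) suggests that plain accuracy happens to be symmetric in its two arguments, even with multiple classes, while asymmetric metrics such as per-class recall are not. So the swap is harmless for accuracy itself, but the documented order is still worth following:

```python
def accuracy(targets, predictions):
    """Fraction of positions where the two label sequences agree (symmetric)."""
    return sum(t == p for t, p in zip(targets, predictions)) / len(targets)

def recall(targets, predictions, cls):
    """Fraction of true `cls` instances that were predicted as `cls` (asymmetric)."""
    relevant = [p for t, p in zip(targets, predictions) if t == cls]
    return sum(p == cls for p in relevant) / len(relevant)

y_true = ['a', 'a', 'b', 'c', 'c', 'c']
y_pred = ['a', 'b', 'b', 'c', 'a', 'a']

print(accuracy(y_true, y_pred))     # 0.5
print(accuracy(y_pred, y_true))     # 0.5  -- same: accuracy is symmetric
print(recall(y_true, y_pred, 'c'))  # 1/3
print(recall(y_pred, y_true, 'c'))  # 1.0  -- different: order matters here
```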

Comments

User 1178 | 9/21/2015, 6:12:39 PM

Hi,

The API documentation is correct: the first parameter is targets (ground truth), and the second parameter is predictions.

We will fix the user guide ASAP!
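Applying the documented order to the user-guide snippet from the question would look like the sketch below. GraphLab is not importable here, so a minimal stand-in `accuracy` with the same (targets, predictions) signature is used, and plain dicts stand in for SFrames:

```python
def accuracy(targets, predictions):
    # Stand-in for graphlab.evaluation.accuracy: fraction of matching labels.
    return sum(t == p for t, p in zip(targets, predictions)) / len(targets)

def custom_evaluator(scorer, train, valid):
    yhat_train = scorer(train)
    yhat_valid = scorer(valid)
    # Ground truth first, predictions second, per the API documentation.
    return {'train_acc': accuracy(train['target'], yhat_train),
            'valid_acc': accuracy(valid['target'], yhat_valid)}

# Toy usage: the "scorer" just echoes a stored prediction column.
train = {'target': [0, 1, 1], 'pred': [0, 1, 0]}
valid = {'target': [1, 0], 'pred': [1, 1]}
scores = custom_evaluator(lambda d: d['pred'], train, valid)
print(scores)  # train_acc = 2/3, valid_acc = 0.5
```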

Thanks! Pin


User 1319 | 9/21/2015, 7:21:47 PM

Thanks, Pin, for your quick reply. Tarek