Deep learning evaluation metrics

User 794 | 1/12/2015, 10:08:33 PM

Is there any documentation on which metrics can be used to evaluate the neuralnet_classifier? The help docs don't provide an exhaustive list. I'm curious if there is a list anywhere.

Also, is log-loss an available metric for the neuralnet_classifier?

Thank you -Miroslaw


User 1190 | 1/12/2015, 10:38:31 PM


A list of available metrics during training can be found in the following documentation page. Specifically, it supports {'accuracy', 'error', 'recall@k'}.

Once the model is trained, you will have access to some additional metrics like "confusion_matrix", etc.
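(As an aside, a confusion matrix is easy to compute yourself from predicted and true labels if you want it outside the model's built-in evaluation. A minimal NumPy sketch, not GraphLab's implementation, assuming integer class labels:)

```python
import numpy as np

def confusion_matrix(labels, preds, n_classes):
    """Return an (n_classes x n_classes) matrix of counts, where
    entry [t, p] is the number of examples with true class t
    that were predicted as class p."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(labels, preds):
        cm[t, p] += 1
    return cm
```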

log-loss is not available yet. Feel free to submit a feature request for it.

Best, jay

User 794 | 1/14/2015, 7:27:49 PM

Thank you Jay.

Out of curiosity, is there a way for us to develop our own metrics to be used at training time?

User 1190 | 1/14/2015, 7:47:13 PM

What other metrics are you interested in besides log-loss? To support customized training-time metrics, we would need to create a common C++ metric interface and include it in the SDK. I think it is quite important, and it will be on our roadmap. Thanks for your feedback. Once the model is trained, you can write your own evaluation code against the predictions and the labels, either in Python or using the C++ SDK.
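(For anyone following the "write your own evaluation code" suggestion: log-loss can be computed directly from predicted class probabilities and true labels. A minimal NumPy sketch, independent of any particular library's API, assuming `probs` holds per-class probabilities:)

```python
import numpy as np

def log_loss(labels, probs, eps=1e-15):
    """Multiclass log-loss: mean negative log-probability of the true class.

    labels: integer class indices, shape (n,)
    probs:  predicted class probabilities, shape (n, n_classes)
    """
    probs = np.clip(probs, eps, 1 - eps)          # avoid log(0)
    probs = probs / probs.sum(axis=1, keepdims=True)  # renormalize after clipping
    # pick out the probability assigned to each example's true class
    true_class_probs = probs[np.arange(len(labels)), labels]
    return -np.mean(np.log(true_class_probs))
```

Lower is better; a perfect classifier that puts probability 1 on the true class scores 0.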

User 5236 | 5/29/2016, 7:21:55 AM

I know it's an old thread, but I'm posting here as someone else might also find it helpful someday... can someone please explain what the "k" in the "recall@k" metric means?

Thanks, Ran

User 5159 | 5/30/2016, 2:00:48 AM

For example, if there are 1000 classes and the target label is 6: if 6 appears among the top-k predictions of our result, we treat it as a correct prediction.
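(The definition above can be sketched in a few lines of NumPy. This is a generic illustration, not GraphLab's implementation, assuming `scores` holds a per-class score or probability for each example:)

```python
import numpy as np

def recall_at_k(labels, scores, k):
    """Fraction of examples whose true label appears among the
    k highest-scoring classes.

    labels: integer class indices, shape (n,)
    scores: per-class scores or probabilities, shape (n, n_classes)
    """
    # indices of the k highest-scoring classes for each example
    topk = np.argsort(scores, axis=1)[:, -k:]
    return np.mean([label in row for label, row in zip(labels, topk)])
```

With k=1 this reduces to ordinary accuracy; larger k counts a prediction as correct if the true class is merely near the top of the ranking.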