AUC evaluator for parameter search

User 2198 | 8/28/2015, 4:44:43 AM

I'm having some issues writing my own AUC evaluator for GraphLab parameter search.

Here's my code:

def auc_score(model, test):
	target = model.get('target')
	preds = model.predict(test, output_type='class')
	return roc_auc_score(test[target], preds)

def evaluate_auc(model, train, test):
	return {'train_auc': auc_score(model, train),
		 'validation_auc': auc_score(model, test)}

job = gl.random_search.create(folds, gl.svm_classifier.create, params, evaluator=evaluate_auc, return_model=False, max_models=10)

roc_auc_score comes from sklearn.metrics

The validation fails to run the trial script, and the error points to my evaluator. Is there something obvious that I am doing incorrectly?

Comments

User 1190 | 8/31/2015, 5:46:21 PM

Hi RyanCLouie,

sklearn.metrics.roc_auc_score expects a numpy array or array-like object. The outputs of predict() and test[target] are graphlab.SArray objects, which don't quite satisfy an in-memory, random-access array interface.

Can you try changing your auc_score code to return roc_auc_score(numpy.asarray(test[target]), numpy.asarray(preds))?
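
For reference, here is a minimal sketch of the adjusted auc_score with that conversion applied (assuming roc_auc_score is imported from sklearn.metrics, as noted in the original post):

import numpy
from sklearn.metrics import roc_auc_score

def auc_score(model, test):
	target = model.get('target')
	preds = model.predict(test, output_type='class')
	# Convert the SArrays to numpy arrays before passing them to sklearn
	return roc_auc_score(numpy.asarray(test[target]), numpy.asarray(preds))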

Thanks, -jay


User 2950 | 1/3/2016, 5:01:52 PM

Hi, I have a similar issue.

My function is:

def custom_evaluator(model, train, test):
	auc = model.evaluate(test, metric='auc')
	return {'auc': auc}

I am able to use the above function for cross-validation, but when I use it for parameter search, the job result gives me the following error:

RuntimeError: Runtime Exception. Requested operation: Avg not supported on the type of column metric.auc

Any ideas on how to solve this issue? Thanks!


User 91 | 1/3/2016, 5:45:44 PM

The return type of evaluate is a dictionary. You probably want to do:

ret = model.evaluate(test, metric='auc')
auc = ret['auc']
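
Put together, a sketch of the corrected custom_evaluator based on that suggestion (extracting the scalar AUC from the dictionary so the parameter search can average it):

def custom_evaluator(model, train, test):
	# evaluate() returns a dictionary, so pull out the scalar 'auc' value
	ret = model.evaluate(test, metric='auc')
	return {'auc': ret['auc']}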


User 2950 | 1/19/2016, 7:10:04 AM

Yes, that works. Thanks for replying.