Evaluation criteria for the scoring parameter in Python's GridSearchCV

http://scikit-learn.org/stable/modules/model_evaluation.html

3.3.1. The scoring parameter: defining model evaluation rules

Model selection and evaluation tools such as model_selection.GridSearchCV and model_selection.cross_val_score take a scoring parameter that controls which metric they apply to the estimators being evaluated.
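For instance, here is a minimal sketch of passing scoring to GridSearchCV. The iris dataset, the SVC estimator, and the parameter grid are illustrative choices, not part of the original text:

    from sklearn import datasets, svm
    from sklearn.model_selection import GridSearchCV

    # Load a small built-in dataset for illustration
    X, y = datasets.load_iris(return_X_y=True)

    # The scoring parameter selects the metric used to rank parameter settings
    param_grid = {'C': [0.1, 1, 10]}
    grid = GridSearchCV(svm.SVC(), param_grid, scoring='f1_macro')
    grid.fit(X, y)
    print(grid.best_params_, grid.best_score_)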

3.3.1.1. Common cases: predefined values

For the most common use cases, you can designate a scorer object with the scoring parameter; the table below shows all possible values. All scorer objects follow the convention that higher return values are better than lower return values. Thus, metrics that measure the distance between the model and the data, such as metrics.mean_squared_error, are available as neg_mean_squared_error, which returns the negated value of the metric (a short sketch after the table illustrates this convention).

Scoring                        Function                          Comment
Classification
‘accuracy’                     metrics.accuracy_score
‘average_precision’            metrics.average_precision_score
‘f1’                           metrics.f1_score                  for binary targets
‘f1_micro’                     metrics.f1_score                  micro-averaged
‘f1_macro’                     metrics.f1_score                  macro-averaged
‘f1_weighted’                  metrics.f1_score                  weighted average
‘f1_samples’                   metrics.f1_score                  by multilabel sample
‘neg_log_loss’                 metrics.log_loss                  requires predict_proba support
‘precision’ etc.               metrics.precision_score           suffixes apply as with ‘f1’
‘recall’ etc.                  metrics.recall_score              suffixes apply as with ‘f1’
‘roc_auc’                      metrics.roc_auc_score
Clustering
‘adjusted_rand_score’          metrics.adjusted_rand_score
Regression
‘neg_mean_absolute_error’      metrics.mean_absolute_error
‘neg_mean_squared_error’       metrics.mean_squared_error
‘neg_median_absolute_error’    metrics.median_absolute_error
‘r2’                           metrics.r2_score
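The following sketch illustrates the higher-is-better convention for the neg_* scorers. The diabetes dataset and the Ridge regressor are assumptions chosen only to have a regression problem at hand:

    from sklearn import datasets, linear_model
    from sklearn.model_selection import cross_val_score

    # A regression problem; the dataset choice is illustrative
    X, y = datasets.load_diabetes(return_X_y=True)
    reg = linear_model.Ridge()

    # Scores are negated MSE values, so they are all <= 0;
    # higher (closer to zero) still means better
    scores = cross_val_score(reg, X, y, scoring='neg_mean_squared_error')
    print(scores)
    print(-scores.mean())  # negate to recover the usual mean squared error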

Usage examples:
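The original examples are not reproduced here; the sketch below is one plausible instance. Setting probability=True on the SVC is an assumption needed because ‘neg_log_loss’ requires predict_proba support, as noted in the table above:

    from sklearn import datasets, svm
    from sklearn.model_selection import cross_val_score

    X, y = datasets.load_iris(return_X_y=True)

    # ‘neg_log_loss’ requires the estimator to support predict_proba,
    # hence probability=True on the SVC
    clf = svm.SVC(probability=True, random_state=0)
    print(cross_val_score(clf, X, y, scoring='neg_log_loss'))

    # Passing an unrecognized string raises a ValueError listing valid options:
    # cross_val_score(clf, X, y, scoring='wrong_choice')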