# Four Python Machine Learning Hyperparameter Search Methods

- GridSearch
- RandomizedSearch
- HalvingGridSearch
- HalvingRandomSearch

### Baseline Model

```python
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Read the data (df is a DataFrame with an 'output' label column, loaded earlier)
X = df.drop(columns=['output'])
y = df['output']

# Train/test split
x_train, x_test, y_train, y_test = train_test_split(X, y, stratify=y)

# Train the model and compute the test accuracy
clf = RandomForestClassifier(random_state=0)
clf.fit(x_train, y_train)
clf.score(x_test, y_test)
```

### GridSearch

GridSearch (grid search) is the most basic hyperparameter search method: it exhaustively trains on every combination of hyperparameters and keeps the best result.

```python
from sklearn.model_selection import GridSearchCV

parameters = {
    'max_depth': [2, 4, 5, 6, 7],
    'min_samples_leaf': [1, 2, 3],
    'min_weight_fraction_leaf': [0, 0.1],
    'min_impurity_decrease': [0, 0.1, 0.2],
}

# Fitting 5 folds for each of 90 candidates, totalling 450 fits
clf = GridSearchCV(
    RandomForestClassifier(random_state=0),
    parameters, refit=True, verbose=1,
)
clf.fit(x_train, y_train)
clf.best_estimator_.score(x_test, y_test)
```

### RandomizedSearch

RandomizedSearch samples candidates from the given search space and requires you to set the number of samples to draw; by default it does not try every combination.

n_iter is the number of hyperparameter combinations to sample, and it is normally set lower than the total number of combinations. With n_iter=10 as below, only 10 candidates are evaluated, so 5-fold cross-validation runs just 50 fits.

```python
from sklearn.model_selection import RandomizedSearchCV

parameters = {
    'max_depth': [2, 4, 5, 6, 7],
    'min_samples_leaf': [1, 2, 3],
    'min_weight_fraction_leaf': [0, 0.1],
    'min_impurity_decrease': [0, 0.1, 0.2],
}

clf = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    parameters, refit=True, verbose=1, n_iter=10,
)
clf.fit(x_train, y_train)
clf.best_estimator_.score(x_test, y_test)
```

### HalvingGridSearch

HalvingGridSearch is very similar to GridSearch, except that across iterations it successively cuts the set of candidate combinations down by a constant factor.

The idea behind HalvingGridSearch is very close to Hyperband, but in its most basic form: it first screens hyperparameter combinations on a small amount of data, then validates the surviving candidates' accuracy on progressively more data.
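The article shows only the search log, so here is a minimal, self-contained sketch of how such a run could be launched; the synthetic dataset, the small n_estimators (for speed), and factor=3 are assumptions standing in for the article's setup, while the parameter grid mirrors the one used above. Note that HalvingGridSearchCV is still experimental in scikit-learn and must be enabled explicitly:

```python
from sklearn.datasets import make_classification
from sklearn.experimental import enable_halving_search_cv  # noqa: F401  (required)
from sklearn.model_selection import HalvingGridSearchCV, train_test_split
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the article's dataset
X, y = make_classification(n_samples=300, random_state=0)
x_train, x_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

parameters = {
    'max_depth': [2, 4, 5, 6, 7],
    'min_samples_leaf': [1, 2, 3],
    'min_weight_fraction_leaf': [0, 0.1],
    'min_impurity_decrease': [0, 0.1, 0.2],
}

# factor=3 keeps the top third of candidates and triples the sample
# budget at each iteration (90 -> 30 -> 10 candidates, as in the log below)
clf = HalvingGridSearchCV(
    RandomForestClassifier(n_estimators=10, random_state=0),
    parameters, factor=3, refit=True,
)
clf.fit(x_train, y_train)
score = clf.best_estimator_.score(x_test, y_test)
```

With `verbose=1`, fitting prints a per-iteration log like the one below.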

```
n_iterations: 3
n_required_iterations: 5
n_possible_iterations: 3
min_resources_: 20
max_resources_: 227
aggressive_elimination: False
factor: 3
----------

iter: 0
n_candidates: 90
n_resources: 20
Fitting 5 folds for each of 90 candidates, totalling 450 fits
----------

iter: 1
n_candidates: 30
n_resources: 60
Fitting 5 folds for each of 30 candidates, totalling 150 fits
----------

iter: 2
n_candidates: 10
n_resources: 180
Fitting 5 folds for each of 10 candidates, totalling 50 fits
----------
```

### HalvingRandomSearch

HalvingRandomSearch is similar to HalvingGridSearch: both progressively increase the number of samples while shrinking the set of candidate combinations. The difference is that the initial pool of hyperparameter combinations is sampled at random.
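Again the article only shows the log, so here is a minimal sketch of a possible invocation; the synthetic dataset, small n_estimators, and n_candidates=11 (chosen to match iteration 0 of the log below) are assumptions. Like its grid counterpart, HalvingRandomSearchCV is experimental and needs the enabling import:

```python
from sklearn.datasets import make_classification
from sklearn.experimental import enable_halving_search_cv  # noqa: F401  (required)
from sklearn.model_selection import HalvingRandomSearchCV, train_test_split
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the article's dataset
X, y = make_classification(n_samples=300, random_state=0)
x_train, x_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

parameters = {
    'max_depth': [2, 4, 5, 6, 7],
    'min_samples_leaf': [1, 2, 3],
    'min_weight_fraction_leaf': [0, 0.1],
    'min_impurity_decrease': [0, 0.1, 0.2],
}

# n_candidates sets how many random combinations enter iteration 0;
# factor=3 then keeps the top third each round (11 -> 4 -> 2)
clf = HalvingRandomSearchCV(
    RandomForestClassifier(n_estimators=10, random_state=0),
    parameters, n_candidates=11, factor=3, refit=True, random_state=0,
)
clf.fit(x_train, y_train)
score = clf.best_estimator_.score(x_test, y_test)
```

With `verbose=1`, fitting prints a log like the one below.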

```
n_iterations: 3
n_required_iterations: 3
n_possible_iterations: 3
min_resources_: 20
max_resources_: 227
aggressive_elimination: False
factor: 3
----------

iter: 0
n_candidates: 11
n_resources: 20
Fitting 5 folds for each of 11 candidates, totalling 55 fits
----------

iter: 1
n_candidates: 4
n_resources: 60
Fitting 5 folds for each of 4 candidates, totalling 20 fits
----------

iter: 2
n_candidates: 2
n_resources: 180
Fitting 5 folds for each of 2 candidates, totalling 10 fits
```

### Summary and Comparison

HalvingGridSearch and HalvingRandomSearch are well suited to large datasets, where screening candidates on small subsets first can substantially speed up the search. If compute is plentiful, GridSearch and HalvingGridSearch will tend to find better results.
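As a rough comparison, the fit counts implied by the logs above can be tallied directly (note that the halving grid search actually runs *more* fits than plain grid search here, but most of them on small data subsets, which is where its speed advantage on large datasets comes from):

```python
# Fit counts from the article's logs, all with 5-fold cross-validation
grid_fits = 90 * 5                       # GridSearchCV: all 90 combinations
random_fits = 10 * 5                     # RandomizedSearchCV with n_iter=10
halving_grid_fits = (90 + 30 + 10) * 5   # HalvingGridSearchCV, factor=3
halving_random_fits = (11 + 4 + 2) * 5   # HalvingRandomSearchCV, factor=3

print(grid_fits, random_fits, halving_grid_fits, halving_random_fits)
```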