Image Classification - Search Space and HPO

While the Image Classification - Quick Start tutorial introduced basic usage of AutoGluon's fit, evaluate, and predict with default configurations, this tutorial dives into the various options you can specify for more advanced control over the fitting process.

These options include:

- Defining the search space of various hyperparameter values for the training of neural networks
- Specifying how to search through your chosen hyperparameter space
- Specifying how to schedule jobs to train a network under a particular hyperparameter configuration

The advanced functionalities of AutoGluon enable you to leverage your external knowledge about your particular prediction problem and computing resources to guide the training process. Used properly, you may be able to achieve superior performance in less training time.

Tip: If you are new to AutoGluon, review Image Classification - Quick Start to learn the basics of the AutoGluon API.

We begin by letting AutoGluon know that ImageClassification is the task of interest:

import autogluon as ag
from autogluon import ImageClassification as task

Create AutoGluon Dataset

Let’s first create the dataset using the same subset of the Shopee-IET dataset as in the Image Classification - Quick Start tutorial. Recall that because we only specify the train_path, a 90/10 train/validation split is automatically performed.

filename = ag.download('https://autogluon.s3.amazonaws.com/datasets/shopee-iet.zip')
ag.unzip(filename)
'data'
dataset = task.Dataset('data/train')

Specify which Networks to Try

We start by specifying the pretrained neural network candidates. Given such a list, AutoGluon tries to train different networks from this list to identify the best-performing candidate. This is an example of an autogluon.space.Categorical search space, in which there are a limited number of values to choose from.

import gluoncv as gcv

# Wrap a network-constructing function so its arguments become searchable:
# here the MobileNetV2 width multiplier is a categorical hyperparameter.
@ag.func(
    multiplier=ag.Categorical(0.25, 0.5),
)
def get_mobilenet(multiplier):
    return gcv.model_zoo.MobileNetV2(multiplier=multiplier, classes=4)

# Candidate networks: a named model-zoo entry or the searchable MobileNetV2 defined above.
net = ag.space.Categorical('mobilenet0.25', get_mobilenet())
print(net)
Categorical['mobilenet0.25', AutoGluonObject]

Specify the Optimizer and Its Search Space

Similarly, we can manually specify the optimizer candidates. We can construct another search space to identify which optimizer works best for our task, and also identify the best hyperparameter configuration for that optimizer. Additionally, we can customize optimizer-specific hyperparameter search spaces, such as the learning rate and weight decay, using autogluon.space.Real.

from mxnet import optimizer as optim

# Wrap the NAG optimizer so that its hyperparameters become searchable;
# the learning rate and weight decay are sampled on a log scale.
@ag.obj(
    learning_rate=ag.space.Real(1e-4, 1e-2, log=True),
    momentum=ag.space.Real(0.85, 0.95),
    wd=ag.space.Real(1e-6, 1e-2, log=True)
)
class NAG(optim.NAG):
    pass

optimizer = NAG()
print(optimizer)
AutoGluonObject -- NAG
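
The example above searches over a single optimizer class. If you want to compare several optimizer candidates, a Categorical space can be constructed in the same way as for the networks. The following is only a hedged sketch: it assumes the optimizer argument of fit accepts such a space just like net does, and that 'sgd' names a built-in MXNet optimizer.

# Sketch (assumption: `optimizer` accepts a Categorical space like `net` does):
# compare the searchable NAG defined above against plain SGD named by string.
optimizer = ag.space.Categorical(NAG(), 'sgd')
print(optimizer)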

Search Algorithms

In AutoGluon, autogluon.searcher supports different search strategies (specified via the search_strategy argument) for both hyperparameter optimization and architecture search. Beyond simply specifying the space of hyperparameter configurations to search over, you can also tell AutoGluon what strategy it should employ to actually search through this space. This process of finding good hyperparameters from a given search space is commonly referred to as hyperparameter optimization (HPO) or hyperparameter tuning. autogluon.scheduler orchestrates how individual training jobs are scheduled. We currently support random search, Hyperband, and Bayesian optimization. Although these are simple techniques, they can be surprisingly powerful when parallelized, which is easy to enable in AutoGluon.
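
Before looking at specific searchers, here is a minimal sketch (not run in this tutorial) of plain random search over the same net and optimizer spaces defined above. The arguments are the same ones used elsewhere in this tutorial; num_trials simply bounds how many configurations are sampled:

# A minimal sketch: random search samples configurations independently
# from the search spaces defined above, one configuration per trial.
classifier_random = task.fit(dataset,
                             net=net,
                             optimizer=optimizer,
                             search_strategy='random',
                             num_trials=2,
                             epochs=2,
                             ngpus_per_trial=1)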

Bayesian Optimization

Here is an example of Bayesian optimization using autogluon.searcher.SKoptSearcher.

Bayesian optimization fits a probabilistic surrogate model to estimate the function that relates each hyperparameter configuration to the resulting performance of a model trained under that configuration.

You can specify what kind of surrogate model to use (e.g., Gaussian Process, Random Forest, etc.), in addition to which acquisition function to employ (e.g., Expected Improvement, Lower Confidence Bound, etc.). In the following, we tell fit to perform Bayesian optimization using a Random Forest surrogate model with acquisitions based on Expected Improvement. For more information, see autogluon.searcher.SKoptSearcher.

time_limits = 2*60  # total runtime budget in seconds (2 minutes)
epochs = 2          # number of training epochs per trial

classifier = task.fit(dataset,
                      net=net,
                      optimizer=optimizer,
                      search_strategy='skopt',
                      search_options={'base_estimator': 'RF', 'acq_func': 'EI'},
                      time_limits=time_limits,
                      epochs=epochs,
                      ngpus_per_trial=1)

print('Top-1 val acc: %.3f' % classifier.results[classifier.results['reward_attr']])
Starting Experiments
Num of Finished Tasks is 0
Num of Pending Tasks is 2
scheduler: FIFOScheduler(
DistributedResourceManager{
(Remote: Remote REMOTE_ID: 0,
    <Remote: 'inproc://172.31.45.231/1673/1' processes=1 threads=8, memory=33.24 GB>, Resource: NodeResourceManager(8 CPUs, 1 GPUs))
})
[Epoch 2] Validation: 0.438: 100%|██████████| 2/2 [00:09<00:00,  4.86s/it]
Finished Task with config: {'net.1.multiplier.choice': 0, 'net.choice': 0, 'optimizer.learning_rate': 0.001, 'optimizer.momentum': 0.9, 'optimizer.wd': 0.0001} and reward: 0.4375
[Epoch 2] Validation: 0.331: 100%|██████████| 2/2 [00:14<00:00,  7.21s/it]
Finished Task with config: {'net.1.multiplier.choice': 1, 'net.choice': 1, 'optimizer.learning_rate': 0.00020100220269963, 'optimizer.momentum': 0.9370385643628532, 'optimizer.wd': 7.711098886540979e-05} and reward: 0.33125
[Epoch 2] training: accuracy=0.346: 100%|██████████| 2/2 [00:09<00:00,  4.80s/it]
Top-1 val acc: 0.438

Load the test dataset and evaluate:

test_dataset = task.Dataset('data/test', train=False)

test_acc = classifier.evaluate(test_dataset)
print('Top-1 test acc: %.3f' % test_acc)
accuracy: 0.34375: 100%|██████████| 1/1 [00:00<00:00,  1.21it/s]
Top-1 test acc: 0.344

Hyperband Early Stopping

AutoGluon currently supports scheduling trials in serial order and with early stopping (e.g., if a model's performance early in training already looks poor, the trial may be terminated to free up resources). Here is an example that uses the early-stopping scheduler autogluon.scheduler.HyperbandScheduler:

search_strategy = 'hyperband'

classifier = task.fit(dataset,
                      net=net,
                      optimizer=optimizer,
                      search_strategy=search_strategy,
                      epochs=epochs,
                      num_trials=2,
                      verbose=False,
                      plot_results=True,
                      ngpus_per_trial=1,
                      grace_period=1)  # each trial runs at least this many epochs before it can be stopped early

print('Top-1 val acc: %.3f' % classifier.results[classifier.results['reward_attr']])
Starting Experiments
Num of Finished Tasks is 0
Num of Pending Tasks is 2
scheduler: HyperbandScheduler(terminator: HyperbandStopping_Manager(reward_attr: classification_reward, time_attr: epoch, reduction_factor: 4, max_t: 2, brackets: [Bracket: Iter 1.000: None])
[Epoch 2] training: accuracy=0.391:  50%|█████     | 1/2 [00:08<00:04,  4.87s/it]Finished Task with config: {'net.1.multiplier.choice': 0, 'net.choice': 0, 'optimizer.learning_rate': 0.001, 'optimizer.momentum': 0.9, 'optimizer.wd': 0.0001} and reward: 0.4375
[Epoch 2] training: accuracy=0.391:  50%|█████     | 1/2 [00:09<00:09,  9.61s/it]
[Epoch 1] training: accuracy=0.195:   0%|          | 0/2 [00:03<?, ?it/s]Finished Task with config: {'net.1.multiplier.choice': 0, 'net.choice': 0, 'optimizer.learning_rate': 0.00023311357505791257, 'optimizer.momentum': 0.9394554142361176, 'optimizer.wd': 0.0018895148397018308} and reward: 0.28125
[Epoch 1] training: accuracy=0.195:   0%|          | 0/2 [00:04<?, ?it/s]
[Epoch 2] training: accuracy=0.363: 100%|██████████| 2/2 [00:09<00:00,  4.71s/it]
Saving Training Curve in checkpoint/plot_training_curves.png
Top-1 val acc: 0.438

The test top-1 accuracy is:

test_acc = classifier.evaluate(test_dataset)
print('Top-1 test acc: %.3f' % test_acc)
accuracy: 0.328125: 100%|██████████| 1/1 [00:00<00:00,  1.26it/s]
Top-1 test acc: 0.328

For a comparison of different search algorithms and scheduling strategies, see Search Algorithms. For more options when using fit, see autogluon.task.ImageClassification.