.. _sec_imgadvanced:

Image Classification - Search Space and Hyperparameter Optimization (HPO)
==========================================================================

While :ref:`sec_imgquick` introduced basic usage of AutoGluon ``fit``, ``evaluate``, and ``predict`` with default configurations, this tutorial dives into the various options that you can specify for more advanced control over the fitting process. These options include:

- Defining the search space of various hyperparameter values for the training of neural networks
- Specifying how to search through your chosen hyperparameter space
- Specifying how to schedule jobs to train a network under a particular hyperparameter configuration

The advanced functionalities of AutoGluon enable you to use your external knowledge about your particular prediction problem and computing resources to guide the training process. If properly used, you may be able to achieve superior performance within less training time.

**Tip**: If you are new to AutoGluon, review :ref:`sec_imgquick` to learn the basics of the AutoGluon API.

We begin by letting AutoGluon know that ``ImageClassification`` is the task of interest:

.. code:: python

    import autogluon.core as ag
    from autogluon.vision import ImageClassification as task

Create AutoGluon Dataset
------------------------

Let's first create the dataset using the same subset of the ``Shopee-IET`` dataset as the :ref:`sec_imgquick` tutorial. Recall that because we only specify the ``train_path``, a 90/10 train/validation split is automatically performed.

.. code:: python

    filename = ag.download('https://autogluon.s3.amazonaws.com/datasets/shopee-iet.zip')
    ag.unzip(filename)

.. parsed-literal::
    :class: output

    'data'

.. code:: python

    dataset = task.Dataset('data/train')

Specify which Networks to Try
-----------------------------

We start by specifying the pretrained neural network candidates. Given such a list, AutoGluon tries to train different networks from this list to identify the best-performing candidate. This is an example of a :class:`autogluon.core.space.Categorical` search space, in which there are a limited number of values to choose from.

.. code:: python

    import gluoncv as gcv

    @ag.func(
        multiplier=ag.Categorical(0.25, 0.5),
    )
    def get_mobilenet(multiplier):
        return gcv.model_zoo.MobileNetV2(multiplier=multiplier, classes=4)

    net = ag.space.Categorical('mobilenet0.25', get_mobilenet())
    print(net)

.. parsed-literal::
    :class: output

    Categorical['mobilenet0.25', AutoGluonObject]

Specify the Optimizer and Its Search Space
------------------------------------------

Similarly, we can manually specify the optimizer candidates. We can construct another search space to identify which optimizer works best for our task, and also identify the best hyperparameter configuration for this optimizer. Additionally, we can customize the optimizer-specific hyperparameter search spaces, such as learning rate and weight decay, using :class:`autogluon.core.space.Real`.

.. code:: python

    from mxnet import optimizer as optim

    @ag.obj(
        learning_rate=ag.space.Real(1e-4, 1e-2, log=True),
        momentum=ag.space.Real(0.85, 0.95),
        wd=ag.space.Real(1e-6, 1e-2, log=True)
    )
    class NAG(optim.NAG):
        pass

    optimizer = NAG()
    print(optimizer)

.. parsed-literal::
    :class: output

    AutoGluonObject -- NAG
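To build intuition for what these ``Real`` spaces mean, the following snippet is a plain-Python illustration (not part of the AutoGluon API) of how log-scaled versus linear ranges are sampled. AutoGluon's searchers draw candidate configurations from the declared ranges in a similar spirit, so a log-scaled learning rate is spread evenly across orders of magnitude rather than clustering near the upper bound.

.. code:: python

    import math
    import random

    # Illustration only: sample a value log-uniformly between low and high,
    # mirroring what ag.space.Real(..., log=True) declares.
    def sample_log_uniform(low, high):
        return math.exp(random.uniform(math.log(low), math.log(high)))

    for _ in range(3):
        config = {
            'learning_rate': sample_log_uniform(1e-4, 1e-2),  # log-scaled Real
            'momentum': random.uniform(0.85, 0.95),           # linear Real
            'wd': sample_log_uniform(1e-6, 1e-2),             # log-scaled Real
        }
        print(config)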
Search Algorithms
-----------------

In AutoGluon, ``autogluon.core.searcher`` supports different search strategies for both hyperparameter optimization and architecture search. Beyond simply specifying the space of hyperparameter configurations to search over, you can also tell AutoGluon what strategy it should employ to actually search through this space. This process of finding good hyperparameters from a given search space is commonly referred to as *hyperparameter optimization* (HPO) or *hyperparameter tuning*.

``autogluon.core.scheduler`` orchestrates how individual training jobs are scheduled. We currently support FIFO (standard) and Hyperband scheduling, along with search by random sampling or Bayesian optimization. These basic techniques are rendered surprisingly powerful by AutoGluon's support of asynchronous parallel execution.

Bayesian Optimization
~~~~~~~~~~~~~~~~~~~~~

Here is an example of Bayesian optimization using :class:`autogluon.core.searcher.SKoptSearcher`.

Bayesian optimization fits a probabilistic *surrogate model* to estimate the function that relates each hyperparameter configuration to the resulting performance of a model trained under this configuration. You can specify which kind of surrogate model to use (e.g., Gaussian Process, Random Forest, etc.), in addition to which acquisition function to employ (e.g., Expected Improvement, Lower Confidence Bound, etc.). In the following, we tell ``fit`` to perform Bayesian optimization using a Random Forest surrogate model with acquisitions based on Expected Improvement. For more information, see :class:`autogluon.core.searcher.SKoptSearcher`.

.. code:: python

    time_limits = 2*60
    epochs = 2

    classifier = task.fit(dataset,
                          net=net,
                          optimizer=optimizer,
                          search_strategy='skopt',
                          search_options={'base_estimator': 'RF', 'acq_func': 'EI'},
                          time_limits=time_limits,
                          epochs=epochs,
                          ngpus_per_trial=1,
                          num_trials=2)

    print('Top-1 val acc: %.3f' % classifier.results[classifier.results['reward_attr']])

.. parsed-literal::
    :class: output

    scheduler: FIFOScheduler(
    DistributedResourceManager{
    (Remote: Remote REMOTE_ID: 0, , Resource: NodeResourceManager(8 CPUs, 1 GPUs))
    })

.. parsed-literal::
    :class: output

    [Epoch 2] Validation: 0.444:  50%|█████     | 1/2 [00:09<00:04,  4.90s/it]
    [Epoch 2] Validation: 0.444: 100%|██████████| 2/2 [00:09<00:00,  4.89s/it]
    [Epoch 2] Validation: 0.219:  50%|█████     | 1/2 [00:15<00:10, 10.28s/it]
    [Epoch 2] training: accuracy=0.351: 100%|██████████| 2/2 [00:09<00:00,  4.83s/it]

.. parsed-literal::
    :class: output

    Top-1 val acc: 0.444

Load the test dataset and evaluate:

.. code:: python

    test_dataset = task.Dataset('data/test', train=False)
    test_acc = classifier.evaluate(test_dataset)
    print('Top-1 test acc: %.3f' % test_acc)

.. parsed-literal::
    :class: output

    accuracy: 0.421875: 100%|██████████| 1/1 [00:00<00:00,  2.82it/s]

.. parsed-literal::
    :class: output

    Top-1 test acc: 0.422

Note that ``num_trials=2`` above is only used to speed up the tutorial. In normal practice, it is common to only use ``time_limits`` and drop ``num_trials``.
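For instance, a time-budget-driven run might look like the following sketch. It is not executed in this tutorial and reuses only arguments already shown above; the larger budget values are purely illustrative.

.. code:: python

    # Sketch only: rely on the time budget and let AutoGluon keep launching
    # trials until time_limits is exhausted, instead of fixing num_trials.
    classifier = task.fit(dataset,
                          net=net,
                          optimizer=optimizer,
                          search_strategy='skopt',
                          search_options={'base_estimator': 'RF', 'acq_func': 'EI'},
                          time_limits=4*60*60,  # e.g., a four-hour budget (illustrative)
                          epochs=10,            # illustrative per-trial epoch budget
                          ngpus_per_trial=1)    # no num_trials: stop on time_limits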
Hyperband Early Stopping
~~~~~~~~~~~~~~~~~~~~~~~~

AutoGluon currently supports scheduling trials in serial order and with early stopping (e.g., if the performance of the model early within training already looks bad, the trial may be terminated early to free up resources). Here is an example of using the early stopping scheduler :class:`autogluon.core.scheduler.HyperbandScheduler`.

``scheduler_options`` is used to configure the scheduler. In this example, we run Hyperband with a single bracket, and stop/go decisions are made after 1 and 2 epochs (``grace_period`` and ``grace_period * reduction_factor``):

.. code:: python

    search_strategy = 'hyperband'
    scheduler_options = {
        'grace_period': 1,
        'reduction_factor': 2,
        'brackets': 1}

    classifier = task.fit(dataset,
                          net=net,
                          optimizer=optimizer,
                          search_strategy=search_strategy,
                          epochs=4,
                          num_trials=2,
                          verbose=False,
                          plot_results=True,
                          ngpus_per_trial=1,
                          scheduler_options=scheduler_options)

    print('Top-1 val acc: %.3f' % classifier.results[classifier.results['reward_attr']])

.. parsed-literal::
    :class: output

    scheduler: HyperbandScheduler(terminator: HyperbandStopping_Manager(reward_attr: classification_reward, time_attr: epoch, reduction_factor: 2, max_t: 4, brackets: [Bracket: Iter 4.000: None | Iter 2.000: None | Iter 1.000: None])

.. parsed-literal::
    :class: output

    [Epoch 4] training: accuracy=0.581:  75%|███████▌  | 3/4 [00:19<00:06,  6.58s/it]

.. parsed-literal::
    :class: output

    [Epoch 1] training: accuracy=0.273:   0%|          | 0/4 [00:10
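As a quick sanity check on this configuration, the epochs at which a single-bracket Hyperband run makes its stop/go decisions can be computed directly from ``grace_period``, ``reduction_factor``, and the per-trial epoch budget. The snippet below is a plain-Python illustration of that arithmetic (not AutoGluon code); it reproduces the rung levels reported in the scheduler output above.

.. code:: python

    # Illustration only: rung levels for one Hyperband bracket are
    # grace_period * reduction_factor**k, capped at the epoch budget.
    def hyperband_rungs(grace_period, reduction_factor, max_epochs):
        rungs = []
        level = grace_period
        while level < max_epochs:
            rungs.append(level)
            level *= reduction_factor
        return rungs + [max_epochs]

    # Settings used above: grace_period=1, reduction_factor=2, epochs=4
    print(hyperband_rungs(1, 2, 4))  # [1, 2, 4] -- matches "Iter 1.000 | 2.000 | 4.000"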