autogluon.searcher

Example

Define a dummy training function with searchable spaces:

>>> import numpy as np
>>> import autogluon as ag
>>> @ag.args(
...     lr=ag.space.Real(1e-3, 1e-2, log=True),
...     wd=ag.space.Real(1e-3, 1e-2))
... def train_fn(args, reporter):
...     print('lr: {}, wd: {}'.format(args.lr, args.wd))
...     for e in range(10):
...         dummy_accuracy = 1 - np.power(1.8, -np.random.uniform(e, 2*e))
...         reporter(epoch=e+1, accuracy=dummy_accuracy, lr=args.lr, wd=args.wd)

Note that the epoch value passed to the reporter must be the number of epochs completed so far, starting at 1. Create a searcher and sample one configuration:

>>> searcher = ag.searcher.SKoptSearcher(train_fn.cs)
>>> searcher.get_config()
{'lr': 0.0031622777, 'wd': 0.0055}

Create a scheduler and run this toy experiment:

>>> scheduler = ag.scheduler.FIFOScheduler(train_fn,
...                                        searcher=searcher,
...                                        resource={'num_cpus': 2, 'num_gpus': 0},
...                                        num_trials=10,
...                                        reward_attr='accuracy',
...                                        time_attr='epoch')
>>> scheduler.run()

When working with FIFOScheduler or HyperbandScheduler, it is recommended to specify the searcher via the searcher argument (a string identifier) together with search_options, instead of creating the searcher object by hand:

>>> scheduler = ag.scheduler.FIFOScheduler(train_fn,
...                                        searcher='skopt',
...                                        resource={'num_cpus': 2, 'num_gpus': 0},
...                                        num_trials=10,
...                                        reward_attr='accuracy',
...                                        time_attr='epoch')

Visualize the results:

>>> scheduler.get_training_curves(plot=True)
Figure: training curves plot (https://raw.githubusercontent.com/zhanghang1989/AutoGluonWebdata/master/doc/api/autogluon.searcher.1.png)

Searchers

RandomSearcher

Searcher which randomly samples configurations to try next.

SKoptSearcher

SKopt Searcher that uses Bayesian optimization to suggest new hyperparameter configurations.

GridSearcher

Grid Searcher that exhaustively tries all possible configurations.

RLSearcher

Reinforcement Learning Searcher for ConfigSpace

RandomSearcher

class autogluon.searcher.RandomSearcher(configspace, **kwargs)

Searcher which randomly samples configurations to try next.

Parameters
configspace: ConfigSpace.ConfigurationSpace

The configuration space to sample from. It contains the full specification of the set of hyperparameter values (with optional prior distributions over these values).

Examples

By default, the searcher is created along with the scheduler. For example:

>>> import autogluon as ag
>>> @ag.args(
...     lr=ag.space.Real(1e-3, 1e-2, log=True))
... def train_fn(args, reporter):
...     reporter(accuracy=args.lr ** 2)
>>> scheduler = ag.scheduler.FIFOScheduler(
...     train_fn, searcher='random', num_trials=10,
...     reward_attr='accuracy')

This would result in a RandomSearcher with cs = train_fn.cs. You can also create a RandomSearcher by hand:

>>> import ConfigSpace as CS
>>> import ConfigSpace.hyperparameters as CSH
>>> # create configuration space
>>> cs = CS.ConfigurationSpace()
>>> lr = CSH.UniformFloatHyperparameter('lr', lower=1e-4, upper=1e-1, log=True)
>>> cs.add_hyperparameter(lr)
>>> # create searcher
>>> from autogluon.searcher import RandomSearcher
>>> searcher = RandomSearcher(cs)
>>> searcher.get_config()

Methods

clone_from_state(self, state)

Together with get_state, this is needed in order to store and re-create the mutable state of the searcher.

configure_scheduler(self, scheduler)

Some searchers need to obtain information from the scheduler they are used with, in order to configure themselves.

cumulative_profile_record(self)

If profiling is supported and active, the searcher accumulates profiling information over get_config calls, the corresponding dict is returned here.

dataset_size(self)

Size of dataset a model is fitted to, or 0 if no model is fitted to data.

evaluation_failed(self, config, **kwargs)

Called by scheduler if an evaluation job for config failed.

get_best_config(self)

Returns the best configuration found so far.

get_best_config_reward(self)

Returns the best configuration found so far, as well as the reward associated with this best config.

get_best_reward(self)

Calculates the reward (i.e. validation performance) produced by training under the best configuration identified so far.

get_config(self, **kwargs)

Sample a new configuration at random

get_reward(self, config)

Calculates the reward (i.e. validation performance) produced by training with the given configuration.

get_state(self)

Together with clone_from_state, this is needed in order to store and re-create the mutable state of the searcher.

model_parameters(self)

Dictionary with current model (hyper)parameter values if this is supported; otherwise empty.

register_pending(self, config[, milestone])

Signals to searcher that evaluation for config has started, but not yet finished, which allows model-based searchers to register this evaluation as pending.

remove_case(self, config, **kwargs)

Remove data case previously appended by update

update(self, config, **kwargs)

Update the searcher with the newest metric report

clone_from_state(self, state)

Together with get_state, this is needed in order to store and re-create the mutable state of the searcher.

Given state as returned by get_state, this method combines the non-pickle-able part of the immutable state from self with state and returns the corresponding searcher clone. Afterwards, self is not used anymore.

If the searcher object as such is already pickle-able, then state is already the new searcher object, and the default is just returning it. In this default, self is ignored.

Parameters

state – See above

Returns

New searcher object

configure_scheduler(self, scheduler)

Some searchers need to obtain information from the scheduler they are used with, in order to configure themselves. This method has to be called before the searcher can be used.

The implementation here sets _reward_attribute for schedulers which specify it.

Parameters

scheduler: TaskScheduler

Scheduler the searcher is used with.

cumulative_profile_record(self)

If profiling is supported and active, the searcher accumulates profiling information over get_config calls, the corresponding dict is returned here.

dataset_size(self)
Returns

Size of dataset a model is fitted to, or 0 if no model is fitted to data

evaluation_failed(self, config, **kwargs)

Called by scheduler if an evaluation job for config failed. The searcher should react appropriately (e.g., remove pending evaluations for this config, and blacklist config).

get_best_config(self)

Returns the best configuration found so far.

get_best_config_reward(self)

Returns the best configuration found so far, as well as the reward associated with this best config.

get_best_reward(self)

Calculates the reward (i.e. validation performance) produced by training under the best configuration identified so far. Assumes higher reward values indicate better performance.

get_config(self, **kwargs)

Sample a new configuration at random

Returns

A new configuration that is valid.

get_reward(self, config)

Calculates the reward (i.e. validation performance) produced by training with the given configuration.

get_state(self)

Together with clone_from_state, this is needed in order to store and re-create the mutable state of the searcher.

The state returned here must be pickle-able. If the searcher object is pickle-able, the default is returning self.

Returns

Pickle-able mutable state of searcher

model_parameters(self)
Returns

Dictionary with current model (hyper)parameter values if this is supported; otherwise empty

register_pending(self, config, milestone=None)

Signals to searcher that evaluation for config has started, but not yet finished, which allows model-based searchers to register this evaluation as pending. For multi-fidelity schedulers, milestone is the next milestone the evaluation will reach, so that the model registers (config, milestone) as pending. In general, the searcher may assume that update is called with that config at a later time.

remove_case(self, config, **kwargs)

Remove data case previously appended by update

For searchers which maintain the dataset of all cases (reports) passed to update, this method allows to remove one case from the dataset.

update(self, config, **kwargs)

Update the searcher with the newest metric report

kwargs must include the reward (key == reward_attribute). For multi-fidelity schedulers (e.g., Hyperband), intermediate results are also reported. In this case, kwargs must also include the resource (key == resource_attribute). We can also assume that if register_pending(config, …) is received, then later on, the searcher receives update(config, …) with milestone as resource.

Note that for Hyperband scheduling, update is also called for intermediate results. _results is updated in any case if the new reward value is larger than the previously recorded one. This implies that the best value for a config (in _results) could be obtained at an intermediate resource rather than the final one (by virtue of early stopping). Full details can be reconstructed from the training_history of the scheduler.
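This contract can be illustrated with a minimal plain-Python sketch (not the actual autogluon implementation): a searcher that keeps, per config, the largest reward reported so far, so the stored best value may stem from an intermediate resource level.

```python
# Minimal sketch of the update contract (illustrative only, not autogluon code):
# keep, per config, the largest reward reported across all update calls.
class MinimalSearcher:
    def __init__(self, reward_attribute='accuracy'):
        self._reward_attribute = reward_attribute
        self._results = {}  # config (as sorted item tuple) -> best reward seen

    def update(self, config, **kwargs):
        reward = kwargs[self._reward_attribute]
        key = tuple(sorted(config.items()))
        if reward > self._results.get(key, float('-inf')):
            self._results[key] = reward

    def get_best_config_reward(self):
        key, reward = max(self._results.items(), key=lambda kv: kv[1])
        return dict(key), reward

searcher = MinimalSearcher()
searcher.update({'lr': 0.01}, accuracy=0.7, epoch=3)   # intermediate report
searcher.update({'lr': 0.01}, accuracy=0.6, epoch=10)  # final report is lower
searcher.update({'lr': 0.001}, accuracy=0.5, epoch=10)
best_config, best_reward = searcher.get_best_config_reward()
# The best recorded value stems from the intermediate report of the first config.
```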

SKoptSearcher

class autogluon.searcher.SKoptSearcher(configspace, **kwargs)
SKopt Searcher that uses Bayesian optimization to suggest new hyperparameter configurations.

Requires that the ‘scikit-optimize’ package is installed.

Parameters
configspace: ConfigSpace.ConfigurationSpace

The configuration space to sample from. It contains the full specification of the hyperparameters with their priors.

kwargs: Optional arguments passed to skopt.optimizer.Optimizer class.

Please see the documentation for skopt.optimizer.Optimizer. These kwargs can be used to specify which surrogate model Bayesian optimization should rely on, which acquisition function to use, how to optimize the acquisition function, etc. The skopt library provides comprehensive Bayesian optimization functionality, where popular non-default kwargs options here might include:

  • base_estimator = 'GP' or 'RF' or 'ET' or 'GBRT' (to specify different surrogate models like Gaussian Processes, Random Forests, etc.)

  • acq_func = 'LCB' or 'EI' or 'PI' or 'gp_hedge' (to specify different acquisition functions like Lower Confidence Bound, Expected Improvement, etc.)

For example, we can tell our Searcher to perform Bayesian optimization with a Random Forest surrogate model and use the Expected Improvement acquisition function by invoking: SKoptSearcher(cs, base_estimator='RF', acq_func='EI')

Examples

By default, the searcher is created along with the scheduler. For example:

>>> import autogluon as ag
>>> @ag.args(
...     lr=ag.space.Real(1e-3, 1e-2, log=True))
... def train_fn(args, reporter):
...     reporter(accuracy=args.lr ** 2)
>>> search_options = {'base_estimator': 'RF', 'acq_func': 'EI'}
>>> scheduler = ag.scheduler.FIFOScheduler(
...     train_fn, searcher='skopt', search_options=search_options,
...     num_trials=10, reward_attr='accuracy')

This would result in a SKoptSearcher with cs = train_fn.cs. You can also create a SKoptSearcher by hand:

>>> import autogluon as ag
>>> @ag.args(
...     lr=ag.space.Real(1e-3, 1e-2, log=True),
...     wd=ag.space.Real(1e-3, 1e-2))
... def train_fn(args, reporter):
...     pass
>>> searcher = ag.searcher.SKoptSearcher(train_fn.cs)
>>> searcher.get_config()
{'lr': 0.0031622777, 'wd': 0.0055}
>>> searcher = ag.searcher.SKoptSearcher(
...     train_fn.cs, reward_attribute='accuracy', base_estimator='RF',
...     acq_func='EI')
>>> next_config = searcher.get_config()
>>> searcher.update(next_config, accuracy=10.0)  # made-up value

Note

  • get_config() cannot ensure valid configurations for conditional spaces, since skopt is not integrated with ConfigSpace and does not provide this functionality. If an invalid config is produced, SKoptSearcher.get_config() will catch the exception and revert to random_config() instead.

  • get_config(max_tries) uses skopt’s batch BayesOpt functionality to query at most max_tries configs to try out. If all of these configs have already been scheduled to try (which might happen in an asynchronous setting), then get_config simply reverts to random search via random_config().
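The fallback strategy described in this note can be sketched in plain Python. This is an illustrative stand-in, not the actual SKoptSearcher code; suggest, sample_random, and tried are hypothetical placeholders for the model-based proposal function, the random sampler, and the set of already-scheduled configs.

```python
import random

# Illustrative sketch of the fallback strategy (not the real SKoptSearcher):
# try a model-based suggestion first; revert to random sampling if the
# suggestion raises (e.g. invalid for a conditional space) or if every
# proposal within max_tries has already been scheduled.
def get_config(suggest, sample_random, tried, max_tries=100):
    for _ in range(max_tries):
        try:
            config = suggest()
        except Exception:
            return sample_random()   # invalid proposal: revert to random
        if config not in tried:
            return config            # fresh config: hand it to the scheduler
    return sample_random()           # all proposals were duplicates

# Toy usage with hand-written stand-ins:
tried = [{'lr': 0.01}]
pick = get_config(lambda: {'lr': 0.01},  # always proposes a duplicate
                  lambda: {'lr': random.uniform(1e-3, 1e-2)},
                  tried, max_tries=5)
```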

Methods

clone_from_state(self, state)

Together with get_state, this is needed in order to store and re-create the mutable state of the searcher.

config2skopt(self, config)

Converts autogluon config (dict object) to skopt format (list object).

configure_scheduler(self, scheduler)

Some searchers need to obtain information from the scheduler they are used with, in order to configure themselves.

cumulative_profile_record(self)

If profiling is supported and active, the searcher accumulates profiling information over get_config calls, the corresponding dict is returned here.

dataset_size(self)

Size of dataset a model is fitted to, or 0 if no model is fitted to data.

default_config(self)

Function to return the default configuration that should be tried first.

evaluation_failed(self, config, **kwargs)

Called by scheduler if an evaluation job for config failed.

get_best_config(self)

Returns the best configuration found so far.

get_best_config_reward(self)

Returns the best configuration found so far, as well as the reward associated with this best config.

get_best_reward(self)

Calculates the reward (i.e. validation performance) produced by training under the best configuration identified so far.

get_config(self, **kwargs)

Sample a new configuration that has not yet been tried.

get_reward(self, config)

Calculates the reward (i.e. validation performance) produced by training with the given configuration.

get_state(self)

Together with clone_from_state, this is needed in order to store and re-create the mutable state of the searcher.

model_parameters(self)

Dictionary with current model (hyper)parameter values if this is supported; otherwise empty.

random_config(self)

Function to randomly sample a new configuration (which is ensured to be valid in the case of conditional hyperparameter spaces).

register_pending(self, config[, milestone])

Signals to searcher that evaluation for config has started, but not yet finished, which allows model-based searchers to register this evaluation as pending.

remove_case(self, config, **kwargs)

Remove data case previously appended by update

skopt2config(self, point)

Converts skopt point (list object) to autogluon config format (dict object).

update(self, config, **kwargs)

Update the searcher with the newest metric report.

clone_from_state(self, state)

Together with get_state, this is needed in order to store and re-create the mutable state of the searcher.

Given state as returned by get_state, this method combines the non-pickle-able part of the immutable state from self with state and returns the corresponding searcher clone. Afterwards, self is not used anymore.

If the searcher object as such is already pickle-able, then state is already the new searcher object, and the default is just returning it. In this default, self is ignored.

Parameters

state – See above

Returns

New searcher object

config2skopt(self, config)

Converts autogluon config (dict object) to skopt format (list object).

Returns

Object of same type as: skopt.Optimizer.ask()

configure_scheduler(self, scheduler)

Some searchers need to obtain information from the scheduler they are used with, in order to configure themselves. This method has to be called before the searcher can be used.

The implementation here sets _reward_attribute for schedulers which specify it.

Parameters

scheduler: TaskScheduler

Scheduler the searcher is used with.

cumulative_profile_record(self)

If profiling is supported and active, the searcher accumulates profiling information over get_config calls, the corresponding dict is returned here.

dataset_size(self)
Returns

Size of dataset a model is fitted to, or 0 if no model is fitted to data

default_config(self)

Function to return the default configuration that should be tried first.

Returns

The default configuration.

evaluation_failed(self, config, **kwargs)

Called by scheduler if an evaluation job for config failed. The searcher should react appropriately (e.g., remove pending evaluations for this config, and blacklist config).

get_best_config(self)

Returns the best configuration found so far.

get_best_config_reward(self)

Returns the best configuration found so far, as well as the reward associated with this best config.

get_best_reward(self)

Calculates the reward (i.e. validation performance) produced by training under the best configuration identified so far. Assumes higher reward values indicate better performance.

get_config(self, **kwargs)

Function to sample a new configuration. This function is called to query a new configuration that has not yet been tried. It asks for one point at a time from skopt, up to max_tries. If an invalid hyperparameter configuration is proposed by skopt, it reverts to random search (since skopt configurations cannot handle conditional spaces the way ConfigSpace can). TODO: may loop indefinitely due to no termination condition (like RandomSearcher.get_config())

Parameters
max_tries: int, default = 1e2

The maximum number of tries to ask for a unique config from skopt before reverting to random search.

get_reward(self, config)

Calculates the reward (i.e. validation performance) produced by training with the given configuration.

get_state(self)

Together with clone_from_state, this is needed in order to store and re-create the mutable state of the searcher.

The state returned here must be pickle-able. If the searcher object is pickle-able, the default is returning self.

Returns

Pickle-able mutable state of searcher

model_parameters(self)
Returns

Dictionary with current model (hyper)parameter values if this is supported; otherwise empty

random_config(self)

Function to randomly sample a new configuration (which is ensured to be valid in the case of conditional hyperparameter spaces).

register_pending(self, config, milestone=None)

Signals to searcher that evaluation for config has started, but not yet finished, which allows model-based searchers to register this evaluation as pending. For multi-fidelity schedulers, milestone is the next milestone the evaluation will reach, so that the model registers (config, milestone) as pending. In general, the searcher may assume that update is called with that config at a later time.

remove_case(self, config, **kwargs)

Remove data case previously appended by update

For searchers which maintain the dataset of all cases (reports) passed to update, this method allows to remove one case from the dataset.

skopt2config(self, point)

Converts skopt point (list object) to autogluon config format (dict object).

Returns

Object of same type as: RandomSampling.configspace.sample_configuration().get_dictionary()

update(self, config, **kwargs)

Update the searcher with the newest metric report.

GridSearcher

class autogluon.searcher.GridSearcher(configspace, **kwargs)
Grid Searcher that exhaustively tries all possible configurations.

This Searcher can only be used for discrete search spaces of type autogluon.space.Categorical.

Examples

>>> import autogluon as ag
>>> @ag.args(
...     x=ag.space.Categorical(0, 1, 2),
...     y=ag.space.Categorical('a', 'b', 'c'))
... def train_fn(args, reporter):
...     pass
>>> searcher = ag.searcher.GridSearcher(train_fn.cs)
>>> searcher.get_config()
Number of configurations for grid search is 9
{'x.choice': 2, 'y.choice': 2}
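The exhaustive enumeration behind this searcher can be sketched with itertools.product. This is an illustrative stand-in, not GridSearcher's actual code; x_choices and y_choices mirror the two Categorical spaces in the example above.

```python
from itertools import product

# Illustrative stand-in for GridSearcher's enumeration over two
# Categorical spaces with 3 choices each: the grid is the full cross
# product of choice indices, 3 * 3 = 9 configurations in total.
x_choices = [0, 1, 2]
y_choices = ['a', 'b', 'c']

grid = [{'x.choice': i, 'y.choice': j}
        for i, j in product(range(len(x_choices)), range(len(y_choices)))]

print('Number of configurations for grid search is', len(grid))
# prints: Number of configurations for grid search is 9
```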

Methods

clone_from_state(self, state)

Together with get_state, this is needed in order to store and re-create the mutable state of the searcher.

configure_scheduler(self, scheduler)

Some searchers need to obtain information from the scheduler they are used with, in order to configure themselves.

cumulative_profile_record(self)

If profiling is supported and active, the searcher accumulates profiling information over get_config calls, the corresponding dict is returned here.

dataset_size(self)

Size of dataset a model is fitted to, or 0 if no model is fitted to data.

evaluation_failed(self, config, **kwargs)

Called by scheduler if an evaluation job for config failed.

get_best_config(self)

Returns the best configuration found so far.

get_best_config_reward(self)

Returns the best configuration found so far, as well as the reward associated with this best config.

get_best_reward(self)

Calculates the reward (i.e. validation performance) produced by training under the best configuration identified so far.

get_config(self)

Return new hyperparameter configuration to try next.

get_reward(self, config)

Calculates the reward (i.e. validation performance) produced by training with the given configuration.

get_state(self)

Together with clone_from_state, this is needed in order to store and re-create the mutable state of the searcher.

model_parameters(self)

Dictionary with current model (hyper)parameter values if this is supported; otherwise empty.

register_pending(self, config[, milestone])

Signals to searcher that evaluation for config has started, but not yet finished, which allows model-based searchers to register this evaluation as pending.

remove_case(self, config, **kwargs)

Remove data case previously appended by update

update(self, config, **kwargs)

Update the searcher with the newest metric report

clone_from_state(self, state)

Together with get_state, this is needed in order to store and re-create the mutable state of the searcher.

Given state as returned by get_state, this method combines the non-pickle-able part of the immutable state from self with state and returns the corresponding searcher clone. Afterwards, self is not used anymore.

If the searcher object as such is already pickle-able, then state is already the new searcher object, and the default is just returning it. In this default, self is ignored.

Parameters

state – See above

Returns

New searcher object

configure_scheduler(self, scheduler)

Some searchers need to obtain information from the scheduler they are used with, in order to configure themselves. This method has to be called before the searcher can be used.

The implementation here sets _reward_attribute for schedulers which specify it.

Parameters

scheduler: TaskScheduler

Scheduler the searcher is used with.

cumulative_profile_record(self)

If profiling is supported and active, the searcher accumulates profiling information over get_config calls, the corresponding dict is returned here.

dataset_size(self)
Returns

Size of dataset a model is fitted to, or 0 if no model is fitted to data

evaluation_failed(self, config, **kwargs)

Called by scheduler if an evaluation job for config failed. The searcher should react appropriately (e.g., remove pending evaluations for this config, and blacklist config).

get_best_config(self)

Returns the best configuration found so far.

get_best_config_reward(self)

Returns the best configuration found so far, as well as the reward associated with this best config.

get_best_reward(self)

Calculates the reward (i.e. validation performance) produced by training under the best configuration identified so far. Assumes higher reward values indicate better performance.

get_config(self)

Return new hyperparameter configuration to try next.

get_reward(self, config)

Calculates the reward (i.e. validation performance) produced by training with the given configuration.

get_state(self)

Together with clone_from_state, this is needed in order to store and re-create the mutable state of the searcher.

The state returned here must be pickle-able. If the searcher object is pickle-able, the default is returning self.

Returns

Pickle-able mutable state of searcher

model_parameters(self)
Returns

Dictionary with current model (hyper)parameter values if this is supported; otherwise empty

register_pending(self, config, milestone=None)

Signals to searcher that evaluation for config has started, but not yet finished, which allows model-based searchers to register this evaluation as pending. For multi-fidelity schedulers, milestone is the next milestone the evaluation will reach, so that the model registers (config, milestone) as pending. In general, the searcher may assume that update is called with that config at a later time.

remove_case(self, config, **kwargs)

Remove data case previously appended by update

For searchers which maintain the dataset of all cases (reports) passed to update, this method allows to remove one case from the dataset.

update(self, config, **kwargs)

Update the searcher with the newest metric report

kwargs must include the reward (key == reward_attribute). For multi-fidelity schedulers (e.g., Hyperband), intermediate results are also reported. In this case, kwargs must also include the resource (key == resource_attribute). We can also assume that if register_pending(config, …) is received, then later on, the searcher receives update(config, …) with milestone as resource.

Note that for Hyperband scheduling, update is also called for intermediate results. _results is updated in any case if the new reward value is larger than the previously recorded one. This implies that the best value for a config (in _results) could be obtained at an intermediate resource rather than the final one (by virtue of early stopping). Full details can be reconstructed from the training_history of the scheduler.

RLSearcher

class autogluon.searcher.RLSearcher(kwspaces, ctx=cpu(0), controller_type='lstm', **kwargs)

Reinforcement Learning Searcher for ConfigSpace

Parameters
kwspaces: keyword search spaces

The keyword spaces automatically generated by autogluon.args()

Examples

>>> import autogluon as ag
>>> @ag.args(
...     lr=ag.space.Real(1e-3, 1e-2, log=True),
...     wd=ag.space.Real(1e-3, 1e-2))
... def train_fn(args, reporter):
...     pass
>>> searcher = ag.searcher.RLSearcher(train_fn.kwspaces)
>>> searcher.get_config()

Methods

clone_from_state(self, state)

Together with get_state, this is needed in order to store and re-create the mutable state of the searcher.

configure_scheduler(self, scheduler)

Some searchers need to obtain information from the scheduler they are used with, in order to configure themselves.

cumulative_profile_record(self)

If profiling is supported and active, the searcher accumulates profiling information over get_config calls, the corresponding dict is returned here.

dataset_size(self)

Size of dataset a model is fitted to, or 0 if no model is fitted to data.

evaluation_failed(self, config, **kwargs)

Called by scheduler if an evaluation job for config failed.

get_best_config(self)

Returns the best configuration found so far.

get_best_config_reward(self)

Returns the best configuration found so far, as well as the reward associated with this best config.

get_best_reward(self)

Calculates the reward (i.e. validation performance) produced by training under the best configuration identified so far.

get_config(self, **kwargs)

Function to sample a new configuration

get_reward(self, config)

Calculates the reward (i.e. validation performance) produced by training with the given configuration.

get_state(self)

Together with clone_from_state, this is needed in order to store and re-create the mutable state of the searcher.

model_parameters(self)

Dictionary with current model (hyper)parameter values if this is supported; otherwise empty.

register_pending(self, config[, milestone])

Signals to searcher that evaluation for config has started, but not yet finished, which allows model-based searchers to register this evaluation as pending.

remove_case(self, config, **kwargs)

Remove data case previously appended by update

update(self, config, **kwargs)

Update the searcher with the newest metric report

load_state_dict

state_dict

clone_from_state(self, state)

Together with get_state, this is needed in order to store and re-create the mutable state of the searcher.

Given state as returned by get_state, this method combines the non-pickle-able part of the immutable state from self with state and returns the corresponding searcher clone. Afterwards, self is not used anymore.

If the searcher object as such is already pickle-able, then state is already the new searcher object, and the default is just returning it. In this default, self is ignored.

Parameters

state – See above

Returns

New searcher object

configure_scheduler(self, scheduler)

Some searchers need to obtain information from the scheduler they are used with, in order to configure themselves. This method has to be called before the searcher can be used.

The implementation here sets _reward_attribute for schedulers which specify it.

Parameters

scheduler: TaskScheduler

Scheduler the searcher is used with.

cumulative_profile_record(self)

If profiling is supported and active, the searcher accumulates profiling information over get_config calls, the corresponding dict is returned here.

dataset_size(self)
Returns

Size of dataset a model is fitted to, or 0 if no model is fitted to data

evaluation_failed(self, config, **kwargs)

Called by scheduler if an evaluation job for config failed. The searcher should react appropriately (e.g., remove pending evaluations for this config, and blacklist config).

get_best_config(self)

Returns the best configuration found so far.

get_best_config_reward(self)

Returns the best configuration found so far, as well as the reward associated with this best config.

get_best_reward(self)

Calculates the reward (i.e. validation performance) produced by training under the best configuration identified so far. Assumes higher reward values indicate better performance.

get_config(self, **kwargs)

Function to sample a new configuration

This function is called inside TaskScheduler to query a new configuration

Parameters

kwargs – Extra information may be passed from scheduler to searcher

Returns

(config, info_dict) – a valid configuration, and a (possibly empty) info dict

get_reward(self, config)

Calculates the reward (i.e. validation performance) produced by training with the given configuration.

get_state(self)

Together with clone_from_state, this is needed in order to store and re-create the mutable state of the searcher.

The state returned here must be pickle-able. If the searcher object is pickle-able, the default is returning self.

Returns

Pickle-able mutable state of searcher

model_parameters(self)
Returns

Dictionary with current model (hyper)parameter values if this is supported; otherwise empty

register_pending(self, config, milestone=None)

Signals to searcher that evaluation for config has started, but not yet finished, which allows model-based searchers to register this evaluation as pending. For multi-fidelity schedulers, milestone is the next milestone the evaluation will reach, so that the model registers (config, milestone) as pending. In general, the searcher may assume that update is called with that config at a later time.

remove_case(self, config, **kwargs)

Remove data case previously appended by update

For searchers which maintain the dataset of all cases (reports) passed to update, this method allows to remove one case from the dataset.

update(self, config, **kwargs)

Update the searcher with the newest metric report

kwargs must include the reward (key == reward_attribute). For multi-fidelity schedulers (e.g., Hyperband), intermediate results are also reported. In this case, kwargs must also include the resource (key == resource_attribute). We can also assume that if register_pending(config, …) is received, then later on, the searcher receives update(config, …) with milestone as resource.

Note that for Hyperband scheduling, update is also called for intermediate results. _results is updated in any case if the new reward value is larger than the previously recorded one. This implies that the best value for a config (in _results) could be obtained at an intermediate resource rather than the final one (by virtue of early stopping). Full details can be reconstructed from the training_history of the scheduler.