Predicting Columns in a Table - Quick Start

Via a simple fit() call, AutoGluon can produce highly accurate models to predict the values in one column of a data table based on the rest of the columns’ values. Use AutoGluon with tabular data for both classification and regression problems. This tutorial demonstrates how to use AutoGluon to produce a classification model that predicts whether or not a person’s income exceeds $50,000.

To start, import AutoGluon and alias the TabularPrediction module as task:

import autogluon as ag
from autogluon import TabularPrediction as task

Load training data from a CSV file into an AutoGluon Dataset object. This object is essentially equivalent to a pandas DataFrame, and the same methods can be applied to both.

train_data = task.Dataset(file_path='https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv')
train_data = train_data.head(500) # subsample 500 data points for faster demo
print(train_data.head())
Loaded data from: https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv | Columns = 15 / 15 | Rows = 39073 -> 39073
   age   workclass  fnlwgt   education  education-num       marital-status
0   25     Private  178478   Bachelors             13        Never-married
1   23   State-gov   61743     5th-6th              3        Never-married
2   46     Private  376789     HS-grad              9        Never-married
3   55           ?  200235     HS-grad              9   Married-civ-spouse
4   36     Private  224541     7th-8th              4   Married-civ-spouse

           occupation    relationship    race      sex  capital-gain
0        Tech-support       Own-child   White   Female             0
1    Transport-moving   Not-in-family   White     Male             0
2       Other-service   Not-in-family   White     Male             0
3                   ?         Husband   White     Male             0
4   Handlers-cleaners         Husband   White     Male             0

   capital-loss  hours-per-week  native-country   class
0             0              40   United-States   <=50K
1             0              35   United-States   <=50K
2             0              15   United-States   <=50K
3             0              50   United-States    >50K
4             0              40     El-Salvador   <=50K

Note that we loaded data from a CSV file stored in the cloud (an AWS S3 bucket), but you can specify a local file path instead if you have already downloaded the CSV file to your own machine (e.g., using wget). Each row in the table train_data corresponds to a single training example. In this particular dataset, each row corresponds to an individual person, and the columns contain various characteristics reported during a census.
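
For example, if you had already downloaded the same CSV, the local version would be loaded the same way (the file path here is illustrative):

train_data = task.Dataset(file_path='./train.csv')  # hypothetical local copy of the CSV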

Let’s first use these features to predict whether the person’s income exceeds $50,000 or not, which is recorded in the class column of this table.

label_column = 'class'
print("Summary of class variable: \n", train_data[label_column].describe())
Summary of class variable:
 count        500
unique         2
top        <=50K
freq         394
Name: class, dtype: object

Now use AutoGluon to train multiple models:

output_dir = 'agModels-predictClass' # specifies folder where to store trained models
predictor = task.fit(train_data=train_data, label=label_column, output_directory=output_dir)
Beginning AutoGluon training ...
AutoGluon will save models to agModels-predictClass/
AutoGluon Version:  0.0.13b20200806
Train Data Rows:    500
Train Data Columns: 15
Preprocessing data ...
Here are the 2 unique label values in your data:  [' <=50K', ' >50K']
AutoGluon infers your prediction problem is: binary  (because only two unique label-values observed).
If this is wrong, please specify problem_type argument in fit() instead (You may specify problem_type as one of: ['binary', 'multiclass', 'regression'])

Selected class <--> label mapping:  class 1 =  >50K, class 0 =  <=50K
Train Data Class Count: 2
Feature Generator processed 500 data points with 14 features
Original Features (raw dtypes):
    int64 features: 6
    object features: 8
Original Features (inferred dtypes):
    int features: 6
    object features: 8
Generated Features (special dtypes):
Processed Features (raw dtypes):
    int features: 6
    category features: 8
Processed Features:
    int features: 6
    category features: 8
    Data preprocessing and feature engineering runtime = 0.05s ...
AutoGluon will gauge predictive performance using evaluation metric: accuracy
To change this, specify the eval_metric argument of fit()
AutoGluon will early stop models using evaluation metric: accuracy
Fitting model: RandomForestClassifierGini ...
    0.82     = Validation accuracy score
    0.5s     = Training runtime
    0.11s    = Validation runtime
Fitting model: RandomForestClassifierEntr ...
    0.84     = Validation accuracy score
    0.5s     = Training runtime
    0.11s    = Validation runtime
Fitting model: ExtraTreesClassifierGini ...
    0.83     = Validation accuracy score
    0.4s     = Training runtime
    0.11s    = Validation runtime
Fitting model: ExtraTreesClassifierEntr ...
    0.82     = Validation accuracy score
    0.4s     = Training runtime
    0.11s    = Validation runtime
Fitting model: KNeighborsClassifierUnif ...
    0.8      = Validation accuracy score
    0.0s     = Training runtime
    0.1s     = Validation runtime
Fitting model: KNeighborsClassifierDist ...
    0.75     = Validation accuracy score
    0.0s     = Training runtime
    0.1s     = Validation runtime
Fitting model: LightGBMClassifier ...
    0.86     = Validation accuracy score
    0.14s    = Training runtime
    0.01s    = Validation runtime
Fitting model: CatboostClassifier ...
    0.85     = Validation accuracy score
    0.44s    = Training runtime
    0.01s    = Validation runtime
Fitting model: NeuralNetClassifier ...
    0.86     = Validation accuracy score
    4.1s     = Training runtime
    0.02s    = Validation runtime
Fitting model: LightGBMClassifierCustom ...
    0.84     = Validation accuracy score
    0.39s    = Training runtime
    0.01s    = Validation runtime
Fitting model: weighted_ensemble_k0_l1 ...
    0.88     = Validation accuracy score
    0.33s    = Training runtime
    0.0s     = Validation runtime
AutoGluon training complete, total runtime = 8.83s ...

Next, load separate test data to demonstrate how to make predictions on new examples at inference time:

test_data = task.Dataset(file_path='https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv')
y_test = test_data[label_column]  # values to predict
test_data_nolab = test_data.drop(labels=[label_column], axis=1) # delete label column to prove we're not cheating
print(test_data_nolab.head())
Loaded data from: https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv | Columns = 15 / 15 | Rows = 9769 -> 9769
   age          workclass  fnlwgt      education  education-num
0   31            Private  169085           11th              7
1   17   Self-emp-not-inc  226203           12th              8
2   47            Private   54260      Assoc-voc             11
3   21            Private  176262   Some-college             10
4   17            Private  241185           12th              8

        marital-status        occupation relationship    race      sex
0   Married-civ-spouse             Sales         Wife   White   Female
1        Never-married             Sales    Own-child   White     Male
2   Married-civ-spouse   Exec-managerial      Husband   White     Male
3        Never-married   Exec-managerial    Own-child   White   Female
4        Never-married    Prof-specialty    Own-child   White     Male

   capital-gain  capital-loss  hours-per-week  native-country
0             0             0              20   United-States
1             0             0              45   United-States
2             0          1887              60   United-States
3             0             0              30   United-States
4             0             0              20   United-States

We use our trained models to make predictions on the new data and then evaluate performance:

predictor = task.load(output_dir) # unnecessary, just demonstrates how to load a previously-trained predictor from file

y_pred = predictor.predict(test_data_nolab)
print("Predictions:  ", y_pred)
perf = predictor.evaluate_predictions(y_true=y_test, y_pred=y_pred, auxiliary_metrics=True)
Evaluation: accuracy on test data: 0.8129798341693111
Evaluations on test data:
{
    "accuracy": 0.8129798341693111,
    "accuracy_score": 0.8129798341693111,
    "balanced_accuracy_score": 0.6287943757715783,
    "matthews_corrcoef": 0.39987226304674783,
    "f1_score": 0.8129798341693111
}
Predictions:   [' <=50K' ' <=50K' ' <=50K' ... ' <=50K' ' <=50K' ' <=50K']
Detailed (per-class) classification report:
{
    " <=50K": {
        "precision": 0.8134894091415831,
        "recall": 0.979331633337807,
        "f1-score": 0.8887400280129102,
        "support": 7451
    },
    " >50K": {
        "precision": 0.8072590738423029,
        "recall": 0.27825711820534943,
        "f1-score": 0.4138594802694899,
        "support": 2318
    },
    "accuracy": 0.8129798341693111,
    "macro avg": {
        "precision": 0.810374241491943,
        "recall": 0.6287943757715783,
        "f1-score": 0.6512997541412,
        "support": 9769
    },
    "weighted avg": {
        "precision": 0.8120110677326638,
        "recall": 0.8129798341693111,
        "f1-score": 0.7760598038682436,
        "support": 9769
    }
}
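
Beyond hard class labels, the predictor can also output predicted class probabilities. A minimal sketch, assuming predict_proba here returns the probability of class 1 (' >50K') for each row:

y_pred_proba = predictor.predict_proba(test_data_nolab)
print("Predicted probability of income >$50K: ", y_pred_proba[:5])  # first 5 test rows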

Now you’re ready to try AutoGluon on your own tabular datasets! As long as they’re stored in a popular format like CSV, you should be able to achieve strong predictive performance with just 2 lines of code:

from autogluon import TabularPrediction as task
predictor = task.fit(train_data=task.Dataset(file_path=<file-name>), label=<variable-name>)

Description of fit():

Here we discuss what happened during fit().

Since there are only two possible values of the class variable, this was a binary classification problem, for which an appropriate performance metric is accuracy. AutoGluon automatically infers this, as well as the type of each feature (i.e., which columns contain continuous numbers vs. discrete categories). AutoGluon can also automatically handle common issues like missing data and rescaling feature values.
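
If either inference is wrong for your data, both can be overridden, as noted in the training log above. A minimal sketch (the alternative metric name here is illustrative; consult the fit() documentation for supported values):

predictor = task.fit(train_data=train_data, label=label_column,
                     problem_type='binary',            # one of: 'binary', 'multiclass', 'regression'
                     eval_metric='balanced_accuracy')  # hypothetical alternative to the default accuracy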

We did not specify separate validation data, so AutoGluon automatically chose a random training/validation split of the data. The data used for validation is separated from the training data and is used to determine the models and hyperparameter values that produce the best results. Rather than just a single model, AutoGluon trains multiple models and ensembles them together to achieve superior predictive performance.
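
If you prefer to control the split yourself, you can pass your own validation set. A minimal sketch, assuming the tuning_data argument covered in the In-Depth tutorial (the 80/20 split is illustrative):

split = int(0.8 * len(train_data))
my_train = train_data.iloc[:split]  # rows used to fit model parameters
my_val = train_data.iloc[split:]    # rows used to compare models and hyperparameter values
predictor = task.fit(train_data=my_train, tuning_data=my_val, label=label_column)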

By default, AutoGluon tries to fit various types of models, including neural networks and tree ensembles. Each type of model has various hyperparameters which the user traditionally would have to specify; AutoGluon automates this process.
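
The hyperparameters argument of fit() lets you restrict which model types are tried and override some of their settings. A minimal sketch (the keys match the model abbreviations shown in the fit summary below; the values are illustrative, not recommendations):

hyperparameters = {
    'GBM': {'num_boost_round': 100},  # LightGBM with a smaller boosting budget
    'NN': {'num_epochs': 5},          # neural network trained for fewer epochs
}
predictor = task.fit(train_data=train_data, label=label_column,
                     hyperparameters=hyperparameters)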

AutoGluon automatically and iteratively tests values for hyperparameters to produce the best performance on the validation data. This involves repeatedly training models under different hyperparameter settings and evaluating their performance. This process can be computationally intensive, so fit() can parallelize it across multiple threads (and machines, if distributed resources are available). To control runtimes, you can specify various arguments in fit(), as demonstrated in the subsequent In-Depth tutorial.
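
A minimal sketch of turning the search on, assuming the hyperparameter_tune and num_trials arguments demonstrated in that tutorial (the values are illustrative):

predictor = task.fit(train_data=train_data, label=label_column,
                     hyperparameter_tune=True,  # search over hyperparameter values
                     num_trials=5,              # configurations to try per model type
                     time_limits=2*60)          # stop the search after 2 minutes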

For tabular problems, fit() returns a Predictor object. Besides inference, this object can also be used to view a summary of what happened during fit():

results = predictor.fit_summary()
* Summary of fit() *
Estimated performance of each model:
                         model  score_val  pred_time_val  fit_time  pred_time_val_marginal  fit_time_marginal  stack_level  can_infer
0      weighted_ensemble_k0_l1       0.88       0.260958  5.808107                0.000873           0.330368            1       True
1           LightGBMClassifier       0.86       0.009266  0.141078                0.009266           0.141078            0       True
2          NeuralNetClassifier       0.86       0.022229  4.098127                0.022229           4.098127            0       True
3           CatboostClassifier       0.85       0.008589  0.443631                0.008589           0.443631            0       True
4     LightGBMClassifierCustom       0.84       0.009976  0.391665                0.009976           0.391665            0       True
5   RandomForestClassifierEntr       0.84       0.110388  0.504760                0.110388           0.504760            0       True
6     ExtraTreesClassifierGini       0.83       0.109986  0.396906                0.109986           0.396906            0       True
7     ExtraTreesClassifierEntr       0.82       0.110014  0.397997                0.110014           0.397997            0       True
8   RandomForestClassifierGini       0.82       0.110551  0.503670                0.110551           0.503670            0       True
9     KNeighborsClassifierUnif       0.80       0.102873  0.001999                0.102873           0.001999            0       True
10    KNeighborsClassifierDist       0.75       0.102978  0.001720                0.102978           0.001720            0       True
Number of models trained: 11
Types of models trained:
{'RFModel', 'KNNModel', 'CatboostModel', 'WeightedEnsembleModel', 'TabularNeuralNetModel', 'XTModel', 'LGBModel'}
Bagging used: False
Stack-ensembling used: False
Hyperparameter-tuning used: False
User-specified hyperparameters:
{'default': {'NN': [{}], 'GBM': [{}], 'CAT': [{}], 'RF': [{'criterion': 'gini', 'AG_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'AG_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}], 'XT': [{'criterion': 'gini', 'AG_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'AG_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}], 'KNN': [{'weights': 'uniform', 'AG_args': {'name_suffix': 'Unif'}}, {'weights': 'distance', 'AG_args': {'name_suffix': 'Dist'}}], 'custom': [{'num_boost_round': 10000, 'num_threads': -1, 'objective': 'binary', 'verbose': -1, 'boosting_type': 'gbdt', 'learning_rate': 0.03, 'num_leaves': 128, 'feature_fraction': 0.9, 'min_data_in_leaf': 5, 'two_round': True, 'seed_value': 0, 'AG_args': {'model_type': 'GBM', 'name_suffix': 'Custom', 'disable_in_hpo': True}}]}}
Plot summary of models saved to file: agModels-predictClass/SummaryOfModels.html
* End of fit() summary *

From this summary, we can see that AutoGluon trained many different types of models as well as an ensemble of the best-performing models. The summary also describes the actual models that were trained during fit() and how well each model performed on the held-out validation data. We can also view what properties AutoGluon automatically inferred about our prediction task:

print("AutoGluon infers problem type is: ", predictor.problem_type)
print("AutoGluon categorized the features as: ", predictor.feature_types)
AutoGluon infers problem type is:  binary
AutoGluon categorized the features as:  <autogluon.utils.tabular.features.feature_types_metadata.FeatureTypesMetadata object at 0x7f1b7a353990>

AutoGluon correctly recognized our prediction problem to be a binary classification task and decided that variables such as age should be represented as integers, whereas variables such as workclass should be represented as categorical objects.
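
To see how each individual trained model (not just the final ensemble) performs on the test data, you can request a leaderboard. A minimal sketch, assuming the predictor's leaderboard() method:

leaderboard = predictor.leaderboard(test_data)  # scores every trained model on test_data
print(leaderboard)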

Regression (predicting numeric table columns):

To demonstrate that fit() can also automatically handle regression tasks, we now try to predict the numeric age variable in the same table based on the other features:

age_column = 'age'
print("Summary of age variable: \n", train_data[age_column].describe())
Summary of age variable:
 count    500.00000
mean      38.31400
std       13.85436
min       17.00000
25%       27.00000
50%       37.00000
75%       47.00000
max       90.00000
Name: age, dtype: float64

We again call fit(), this time imposing a time limit (in seconds), and also demonstrate a shorthand method to evaluate the resulting model on the test data (which contains labels):

predictor_age = task.fit(train_data=train_data, output_directory="agModels-predictAge", label=age_column, time_limits=60)
performance = predictor_age.evaluate(test_data)
Beginning AutoGluon training ... Time limit = 60s
AutoGluon will save models to agModels-predictAge/
AutoGluon Version:  0.0.13b20200806
Train Data Rows:    500
Train Data Columns: 15
Preprocessing data ...
Here are the first 10 unique label values in your data:  [25, 23, 46, 55, 36, 51, 33, 18, 43, 41]
AutoGluon infers your prediction problem is: regression  (because dtype of label-column == int and many unique label-values observed).
If this is wrong, please specify problem_type argument in fit() instead (You may specify problem_type as one of: ['binary', 'multiclass', 'regression'])

Feature Generator processed 500 data points with 14 features
Original Features (raw dtypes):
    object features: 9
    int64 features: 5
Original Features (inferred dtypes):
    object features: 9
    int features: 5
Generated Features (special dtypes):
Processed Features (raw dtypes):
    int features: 5
    category features: 9
Processed Features:
    int features: 5
    category features: 9
    Data preprocessing and feature engineering runtime = 0.05s ...
AutoGluon will gauge predictive performance using evaluation metric: root_mean_squared_error
To change this, specify the eval_metric argument of fit()
AutoGluon will early stop models using evaluation metric: root_mean_squared_error
Fitting model: RandomForestRegressorMSE ... Training model for up to 59.95s of the 59.95s of remaining time.
    -11.0011         = Validation root_mean_squared_error score
    0.5s     = Training runtime
    0.11s    = Validation runtime
Fitting model: ExtraTreesRegressorMSE ... Training model for up to 59.33s of the 59.33s of remaining time.
    -11.3388         = Validation root_mean_squared_error score
    0.39s    = Training runtime
    0.11s    = Validation runtime
Fitting model: KNeighborsRegressorUnif ... Training model for up to 58.81s of the 58.81s of remaining time.
    -14.5706         = Validation root_mean_squared_error score
    0.0s     = Training runtime
    0.1s     = Validation runtime
Fitting model: KNeighborsRegressorDist ... Training model for up to 58.7s of the 58.7s of remaining time.
    -15.8074         = Validation root_mean_squared_error score
    0.0s     = Training runtime
    0.1s     = Validation runtime
Fitting model: LightGBMRegressor ... Training model for up to 58.59s of the 58.59s of remaining time.
    -10.9958         = Validation root_mean_squared_error score
    0.15s    = Training runtime
    0.01s    = Validation runtime
Fitting model: CatboostRegressor ... Training model for up to 58.43s of the 58.43s of remaining time.
    -10.0961         = Validation root_mean_squared_error score
    0.33s    = Training runtime
    0.01s    = Validation runtime
Fitting model: NeuralNetRegressor ... Training model for up to 58.09s of the 58.09s of remaining time.
    -12.3444         = Validation root_mean_squared_error score
    2.87s    = Training runtime
    0.02s    = Validation runtime
Fitting model: LightGBMRegressorCustom ... Training model for up to 55.19s of the 55.19s of remaining time.
    -11.3321         = Validation root_mean_squared_error score
    0.26s    = Training runtime
    0.01s    = Validation runtime
Fitting model: weighted_ensemble_k0_l1 ... Training model for up to 59.95s of the 54.37s of remaining time.
    -10.0633         = Validation root_mean_squared_error score
    0.38s    = Training runtime
    0.0s     = Validation runtime
AutoGluon training complete, total runtime = 6.02s ...
Predictive performance on given dataset: root_mean_squared_error = 10.874239367013018

Note that we didn’t need to tell AutoGluon that this is a regression problem; it automatically inferred this from the data and reported the appropriate performance metric (RMSE by default). To specify a particular evaluation metric other than the default, set the eval_metric argument of fit() and AutoGluon will tailor its models to optimize your metric (e.g. eval_metric = 'mean_absolute_error'). For evaluation metrics where higher values are worse (like RMSE), AutoGluon flips their sign and prints them as negative values during training (because it internally assumes higher values are better).
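
For example, to optimize mean absolute error instead of RMSE on this regression task (the output directory name here is illustrative):

predictor_age_mae = task.fit(train_data=train_data, label=age_column,
                             eval_metric='mean_absolute_error',
                             output_directory="agModels-predictAge-mae",
                             time_limits=60)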