DirectClassifier
The following steps are applied to every model:
- Apply hyperparameter tuning (optional).
- Fit the model on the training set using the best combination of hyperparameters found.
- Evaluate the model on the test set.
- Train the estimator on various bootstrapped samples of the training set and evaluate again on the test set (optional).
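As a rough sketch (the train/test split is assumed to exist already; parameter names are taken from the table below), enabling both optional steps looks like:
>>> from atom.training import DirectClassifier
>>> # n_trials > 0 enables the optional tuning step; n_bootstrap > 0 enables
>>> # the optional bootstrap evaluation. With both at 0, only fit + evaluate run.
>>> runner = DirectClassifier(models="LR", n_trials=10, n_bootstrap=5)
>>> runner.run(train, test)  # train and test are assumed to be prepared beforehand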
Parameters | models: str, estimator or sequence, default=None
Models to fit to the data. Allowed inputs are: an acronym from
any of the predefined models, an ATOMModel or a custom
predictor as class or instance. If None, all the predefined
models are used.
metric: str, func, scorer, sequence or None, default=None
Metric on which to fit the models. Choose from any of sklearn's
scorers, a function with signature function(y_true, y_pred)
-> score, a scorer object or a sequence of these. If None, a
default metric is selected for every task:
n_trials: int or sequence, default=0
Maximum number of iterations for the hyperparameter tuning.
If 0, skip the tuning and fit the model on its default
parameters. If sequence, the n-th value applies to the n-th
model.
est_params: dict or None, default=None
Additional parameters for the models. See their corresponding
documentation for the available options. For multiple models,
use the acronyms as key (or 'all' for all models) and a dict
of the parameters as value. Add _fit to the parameter's name
to pass it to the estimator's fit method instead of the
constructor.
ht_params: dict or None, default=None
Additional parameters for the hyperparameter tuning. If None,
it uses the same parameters as the first run. Can include:
n_bootstrap: int or sequence, default=0
Number of data sets to use for bootstrapping. If 0, no
bootstrapping is performed. If sequence, the n-th value applies
to the n-th model.
n_jobs: int, default=1
Number of cores to use for parallel processing.
device: str, default="cpu"
Device on which to train the estimators. Use any string
that follows the SYCL_DEVICE_FILTER filter selector,
e.g. device="gpu" to use the GPU. Read more in the
user guide.
engine: str, default="sklearn"
Execution engine to use for the estimators. Refer to the
user guide for an explanation regarding every choice.
Choose from:
verbose: int, default=0
Verbosity level of the class. Choose from:
warnings: bool or str, default=False
Whether to show or suppress encountered warnings. Changing
this parameter affects the PYTHONWARNINGS environment variable.
experiment: str or None, default=None
Name of the mlflow experiment to use for tracking.
If None, no mlflow tracking is performed.
random_state: int or None, default=None
Seed used by the random number generator. If None, the random
number generator is the RandomState used by np.random .
|
Example
>>> from atom.training import DirectClassifier
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.model_selection import train_test_split
>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> train, test = train_test_split(
... X.merge(y.to_frame(), left_index=True, right_index=True),
... test_size=0.3,
... )
>>> runner = DirectClassifier(models=["LR", "RF"], metric="auc", verbose=2)
>>> runner.run(train, test)
Training ========================= >>
Models: LR, RF
Metric: roc_auc
Results for LogisticRegression:
Fit ---------------------------------------------
Train evaluation --> roc_auc: 0.9925
Test evaluation --> roc_auc: 0.9871
Time elapsed: 0.035s
-------------------------------------------------
Total time: 0.035s
Results for RandomForest:
Fit ---------------------------------------------
Train evaluation --> roc_auc: 1.0
Test evaluation --> roc_auc: 0.9807
Time elapsed: 0.137s
-------------------------------------------------
Total time: 0.137s
Final results ==================== >>
Total time: 0.173s
-------------------------------------
LogisticRegression --> roc_auc: 0.9871 !
RandomForest --> roc_auc: 0.9807
>>> # Analyze the results
>>> runner.evaluate()
accuracy average_precision ... precision recall roc_auc
LR 0.9357 0.9923 ... 0.9533 0.9444 0.9325
RF 0.9532 0.9810 ... 0.9464 0.9815 0.9431
[2 rows x 9 columns]
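The optional steps and per-model parameters from the table above can be combined in the same way. The following is a hedged sketch (the hyperparameter values are illustrative; est_params uses the model acronyms as keys as described above):
>>> runner = DirectClassifier(
...     models=["LR", "RF"],
...     metric="auc",
...     n_trials=15,                                # hyperparameter tuning iterations
...     est_params={"RF": {"n_estimators": 200}},   # passed to the RF constructor
...     n_bootstrap=5,                              # evaluate on 5 bootstrapped samples
...     verbose=1,
... )
>>> runner.run(train, test)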
Attributes
Data attributes
The data attributes are used to access the dataset and its properties. Updating the dataset will automatically update the response of these attributes accordingly.
Utility attributes
The utility attributes are used to access information about the models in the instance after training.
Attributes | models: str or list
Name of the model(s).
metric: str or list
Name of the metric(s).
errors: dict
Errors encountered during model training. The key is the
model's name and the value is the exception object that was
raised.
winners: list
Models ordered by performance. Performance is measured as the
highest score on the model's main metric.
winner: model
Best performing model. Performance is measured as the highest
score on the model's main metric.
results: pd.DataFrame
Overview of the training results. All durations are in
seconds. Columns include:
|
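For instance, assuming the fitted runner from the Example above, the utility attributes can be inspected directly:
>>> runner.winner    # best performing model
>>> runner.winners   # models ordered by performance
>>> runner.errors    # {model name: raised exception}; empty if all models succeeded
>>> runner.results   # pd.DataFrame overview of the training results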
Tracking attributes
The tracking attributes are used to customize what elements of the experiment are tracked. Read more in the user guide.
Plot attributes
The plot attributes are used to customize the plot's aesthetics. Read more in the user guide.
Attributes | palette: str or sequence
Color palette. Specify one of plotly's built-in palettes or
create a custom one.
title_fontsize: int
Fontsize for the plot's title.
label_fontsize: int
Fontsize for the labels, legend and hover information.
tick_fontsize: int
Fontsize for the ticks along the plot's axes. |
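For example (the palette name is illustrative; any of plotly's built-in palettes or a custom sequence of colors works):
>>> runner.palette = "Prism"
>>> runner.title_fontsize = 20
>>> runner.label_fontsize = 14
>>> runner.tick_fontsize = 10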
Methods
Next to the plotting methods, the class contains a variety of methods to handle the data, run the training, and manage the pipeline.
available_models | Give an overview of the available predefined models. |
canvas | Create a figure with multiple plots. |
clear | Clear attributes from all models. |
delete | Delete models. |
evaluate | Get all models' scores for the provided metrics. |
export_pipeline | Export the pipeline to a sklearn-like object. |
get_class_weight | Return class weights for a balanced dataset. |
get_params | Get parameters for this estimator. |
log | Print message and save to log file. |
merge | Merge another instance of the same class into this one. |
update_layout | Update the properties of the plot's layout. |
reset_aesthetics | Reset the plot aesthetics to their default values. |
run | Train and evaluate the models. |
save | Save the instance to a pickle file. |
set_params | Set the parameters of this estimator. |
stacking | Add a Stacking model to the pipeline. |
voting | Add a Voting model to the pipeline. |
Returns | pd.DataFrame
Information about the available predefined models. Columns
include:
|
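A short usage sketch (output omitted, since the exact columns depend on the task):
>>> overview = runner.available_models()   # pd.DataFrame, one row per predefined model
>>> overview.head()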
This @contextmanager
allows you to draw many plots in one
figure. The default option is to add two plots side by side.
See the user guide for an example.
Parameters | rows: int, default=1
Number of plots in length.
cols: int, default=2
Number of plots in width.
horizontal_spacing: float, default=0.05
Space between subplot columns in normalized plot coordinates.
The spacing is relative to the figure's size.
vertical_spacing: float, default=0.07
Space between subplot rows in normalized plot coordinates.
The spacing is relative to the figure's size.
title: str, dict or None, default=None
Title for the plot.
legend: bool, str or dict, default="out"
Legend for the plot. See the user guide for
an extended description of the choices.
figsize: tuple or None, default=None
Figure's size in pixels, format as (x, y). If None, it
adapts the size to the number of plots in the canvas.
filename: str or None, default=None
Save the plot using this name. Use "auto" for automatic
naming. The type of the file depends on the provided name
(.html, .png, .pdf, etc...). If filename has no file type,
the plot is saved as html. If None, the plot is not saved.
display: bool, default=True
Whether to render the plot.
|
Yields | go.Figure
Plot object.
|
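A hedged usage sketch; the plot methods called inside the canvas are assumptions and can be replaced by any of the runner's plotting methods:
>>> with runner.canvas(rows=1, cols=2, title="Model comparison") as fig:
...     runner.plot_roc()   # assumed plot method; drawn on the first subplot
...     runner.plot_prc()   # assumed plot method; drawn on the second subplot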
Reset all model attributes to their initial state, deleting potentially large data arrays. Use this method to free some memory before saving the instance. The cleared attributes per model are:
If all models are removed, the metric is reset. Use this method to drop unwanted models from the pipeline or to free some memory before saving. Deleted models are not removed from any active mlflow experiment.
Parameters | models: int, str, slice, Model, sequence or None, default=None
Models to delete. If None, all models are deleted.
|
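For instance (model names follow the Example above):
>>> runner.clear()        # reset model attributes to free memory
>>> runner.delete("RF")   # drop a single model from the pipeline
>>> runner.delete()       # drop all models (this also resets the metric)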
Optionally, you can add a model as final estimator. The returned pipeline is already fitted on the training set.
Info
The returned pipeline behaves similarly to sklearn's Pipeline, and additionally:
- Accepts transformers that change the target column.
- Accepts transformers that drop rows.
- Accepts transformers that are only fitted on a subset of the provided dataset.
- Always returns pandas objects.
- Uses transformers that are only applied on the training set to fit the pipeline, not to make predictions.
Parameters | model: str, Model or None, default=None
Model for which to export the pipeline. If the model used
automated feature scaling, the Scaler is added to
the pipeline. If None, the pipeline in the current branch
is exported.
memory: bool, str, Memory or None, default=None
Used to cache the fitted transformers of the pipeline.
- If None or False: No caching is performed.
- If True: A default temp directory is used.
- If str: Path to the caching directory.
- If Memory: Object with the joblib.Memory interface.
verbose: int or None, default=None
Verbosity level of the transformers in the pipeline. If
None, it leaves them to their original verbosity. Note
that this is not the pipeline's own verbose parameter.
To change that, use the set_params method.
|
Returns | Pipeline
Sklearn-like Pipeline object with all transformers in the
current branch.
|
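A minimal sketch (the model acronym and settings are illustrative):
>>> pipeline = runner.export_pipeline(model="LR", memory=True, verbose=0)
>>> pipeline   # sklearn-like Pipeline, already fitted on the training set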
Statistically, the class weights re-balance the data set so that the sampled data set represents the target population as closely as possible. The returned weights are inversely proportional to the class frequencies in the selected data set.
Parameters | dataset: str, default="train"
Data set from which to get the weights. Choose from:
"train", "test" or "dataset".
|
Returns | dict
Classes with the corresponding weights.
|
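For example:
>>> runner.get_class_weight(dataset="train")   # e.g. {0: ..., 1: ...} for a binary task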
Parameters | deep : bool, default=True
If True, will return the parameters for this estimator and
contained subobjects that are estimators.
|
Returns | params : dict
Parameter names mapped to their values.
|
Branches, models, metrics and attributes of the other instance
are merged into this one. If there are branches and/or models
with the same name, they are merged adding the suffix
parameter to their name. The errors and missing attributes are
extended with those of the other instance. It's only possible
to merge two instances if they are initialized with the same
dataset and trained with the same metric.
Parameters | other: Runner
Instance with which to merge. Should be of the same class
as self.
suffix: str, default="2"
Conflicting branches and models are merged adding suffix
to the end of their names.
|
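A hedged sketch, where runner_2 stands for a second instance initialized with the same dataset and trained with the same metric:
>>> runner.merge(runner_2, suffix="2")   # conflicting branch/model names get "2" appended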
This recursively updates the structure of the original layout with the values in the input dict / keyword arguments.
Read more in the user guide.
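For example (the layout keys follow plotly's figure layout; the values are illustrative):
>>> runner.update_layout(template="plotly_dark", margin=dict(l=50, r=50))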
Parameters | *arrays: sequence of indexables
Training set and test set. Allowed formats are:
|
Parameters | filename: str, default="auto"
Name of the file. Use "auto" for automatic naming.
save_data: bool, default=True
Whether to save the dataset with the instance. This
parameter is ignored if the method is not called from
atom. If False, remember to add the data to ATOMLoader
when loading the file.
|
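For example (the filename is illustrative):
>>> runner.save(filename="DirectClassifier", save_data=True)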
Parameters | **params : dict
Estimator parameters.
|
Returns | self : estimator instance
Estimator instance.
|