plot_evals


method plot_evals(models=None, dataset="both", title=None, figsize=(10, 6), filename=None, display=True) [source]

Plot evaluation curves for the train and test set. Only available for models that allow in-training evaluation (XGB, LGB, CatB). The metric is provided by the estimator's package and differs per model and per task. For this reason, the method can only plot one model at a time.

Parameters:

models: str, sequence or None, optional (default=None)
Name of the model to plot. If None, all models in the pipeline are selected. Note that leaving the default option raises an exception if there are multiple models in the pipeline. To avoid this, call the plot from a model, e.g. atom.lgb.plot_evals().

dataset: str, optional (default="both")
Data set on which to calculate the evaluation curves. Options are "train", "test" or "both".

title: str or None, optional (default=None)
Plot's title. If None, the title is left empty.

figsize: tuple, optional (default=(10, 6))
Figure's size, format as (x, y).

filename: str or None, optional (default=None)
Name of the file. Use "auto" for automatic naming. If None, the figure is not saved.

display: bool or None, optional (default=True)
Whether to render the plot. If None, it returns the matplotlib figure.

Returns: matplotlib.figure.Figure
Plot object. Only returned if display=None.


Example

from sklearn.datasets import load_diabetes
from atom import ATOMRegressor

X, y = load_diabetes(return_X_y=True)  # example regression data

atom = ATOMRegressor(X, y)
atom.run(["Bag", "LGB"])
atom.lgb.plot_evals()  # call from the model since the pipeline has two