plot_evals


method plot_evals(models=None, dataset="test", title=None, legend="lower right", figsize=(900, 600), filename=None, display=True)[source]
Plot evaluation curves.

The evaluation curves are the main metric scores achieved by the models at every iteration of the training process. This plot is available only for models that allow in-training validation.

Parameters
models: int, str, Model, slice, sequence or None, default=None
Models to plot. If None, all models are selected.

dataset: str or sequence, default="test"
Data set on which to calculate the evaluation curves. Use a sequence or add + between options to select more than one. Choose from: "train" or "test". A short usage sketch follows the Returns section below.

title: str, dict or None, default=None
Title for the plot.

legend: str, dict or None, default="lower right"
Legend for the plot. See the user guide for an extended description of the choices.

  • If None: No legend is shown.
  • If str: Location at which the legend is shown.
  • If dict: Legend configuration.

figsize: tuple, default=(900, 600)
Figure's size in pixels, in the format (x, y).

filename: str or None, default=None
Save the plot using this name. Use "auto" for automatic naming. The file type is inferred from the extension in the provided name (.html, .png, .pdf, etc.). If filename has no extension, the plot is saved as html. If None, the plot is not saved.

display: bool or None, default=True
Whether to render the plot. If None, the figure is returned instead of rendered.

Returns
go.Figure or None
Plot object. Only returned if display=None.
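
The usage sketch referenced in the dataset parameter: both data sets can be drawn in one figure, either with the + syntax or with a sequence. This assumes a fitted atom instance like the one in the Example section; the exact curves depend on the trained models.

>>> atom.plot_evals(dataset="train+test", legend="upper right")
>>> atom.plot_evals(dataset=["train", "test"])  # equivalent, using a sequence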


See Also

plot_trials

Plot the hyperparameter tuning trials.


Example

>>> import pandas as pd
>>> from atom import ATOMClassifier

>>> X = pd.read_csv("./examples/datasets/weatherAUS.csv")

>>> atom = ATOMClassifier(X, y="RainTomorrow", n_rows=1e4)
>>> atom.impute()
>>> atom.encode()
>>> atom.run(["XGB", "LGB"])
>>> atom.plot_evals()
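
As a follow-up sketch, the figure can also be captured for further processing or written to disk via filename. The file name below is illustrative, not part of the API.

>>> fig = atom.plot_evals(display=None)  # nothing is rendered; returns go.Figure
>>> atom.plot_evals(filename="evals")  # no extension, so the plot is saved as evals.html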