plot_evals
method plot_evals(models=None, dataset="test", title=None, legend="lower right", figsize=(900, 600), filename=None, display=True)[source]
Plot evaluation curves.
The evaluation curves are the main metric scores achieved by the models at every iteration of the training process. This plot is available only for models that allow in-training validation.
Parameters
models: int, str, Model, segment, sequence or None, default=None
Models to plot. If None, all models are selected.
dataset: str, default="test"
Data set for which to plot the evaluation curves. Use + between options to select more than one. Choose from: "train", "test".
title: str, dict or None, default=None
Title for the plot.
legend: str, dict or None, default="lower right"
Legend for the plot. See the user guide for
an extended description of the choices.
figsize: tuple, default=(900, 600)
Figure's size in pixels, format as (x, y).
filename: str, Path or None, default=None
Save the plot using this name. Use "auto" for automatic naming. The type of the file depends on the provided name (.html, .png, .pdf, etc...). If filename has no file type, the plot is saved as html. If None, the plot is not saved.
display: bool or None, default=True
Whether to render the plot. If None, it returns the figure.
Returns
go.Figure or None
Plot object. Only returned if display=None.
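Because display=None makes the method return a plain Plotly go.Figure, the plot can be customized or exported with Plotly's own API. A minimal sketch, assuming a fitted atom instance like the one in the example below (update_layout and write_html are standard go.Figure methods, not part of this library):

>>> fig = atom.plot_evals(display=None)  # returns go.Figure, nothing is rendered
>>> fig.update_layout(title_text="Evaluation curves")  # any Plotly layout tweak
>>> fig.write_html("evals.html")  # export manually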
Example
>>> from atom import ATOMClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=1000, flip_y=0.2, random_state=1)
>>> atom = ATOMClassifier(X, y, random_state=1)
>>> atom.run(["XGB", "LGB"])
>>> atom.plot_evals()
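A variant of the example above, using the dataset and filename parameters as documented: options are joined with + to plot both data sets, and "auto" saves the file under an automatic name.

>>> atom.plot_evals(models="LGB", dataset="train+test", filename="auto")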