plot_trials
method plot_trials(models=None, metric=None, title=None, legend="upper left", figsize=(900, 800), filename=None, display=True)[source]
Plot the hyperparameter tuning trials.
Creates a figure with two plots: the first plot shows the score
of every trial and the second shows the distance between the
last consecutive steps. The best trial is indicated with a star.
This is the same plot as produced by ht_params={"plot": True}.
This plot is only available for models that ran
hyperparameter tuning.
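As noted above, the same figure can also be rendered automatically at the end of hyperparameter tuning by passing the plot key through ht_params. A minimal sketch (the model and number of trials are illustrative):
>>> from atom import ATOMClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=100, flip_y=0.2, random_state=1)
>>> atom = ATOMClassifier(X, y, random_state=1)
>>> atom.run("RF", n_trials=15, ht_params={"plot": True})  # renders the trials plot during tuning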
Parameters
models: int, str, Model, segment, sequence or None, default=None
Models to plot. If None, all models that used hyperparameter
tuning are selected.
metric: int, str, sequence or None, default=None
Metric to plot (only for multi-metric runs). Add + between
options to select more than one. If None, all metrics are
selected.
title: str, dict or None, default=None
Title for the plot.
legend: str, dict or None, default="upper left"
Legend for the plot. See the user guide for
an extended description of the choices.
figsize: tuple, default=(900, 800)
Figure's size in pixels, format as (x, y).
filename: str, Path or None, default=None
Save the plot using this name. Use "auto" for automatic
naming. The type of the file depends on the provided name
(.html, .png, .pdf, etc...). If filename has no file type,
the plot is saved as html. If None, the plot is not saved.
display: bool or None, default=True
Whether to render the plot. If None, it returns the figure.
Returns
go.Figure or None
Plot object. Only returned if display=None.
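Since the figure object is returned when display=None, it can be customized or exported with regular plotly calls. A minimal sketch, assuming a tuned atom instance like the one in the example below and that plotly's static-export engine (kaleido) is installed:
>>> fig = atom.plot_trials(display=None)  # returns the go.Figure instead of rendering it
>>> fig.update_layout(title="Tuning trials")  # standard plotly customization
>>> fig.write_image("trials.png")  # manual export; passing filename to plot_trials saves directly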
See Also
Plot evaluation curves.
Plot hyperparameter relationships in a study.
Compare metric results of the models.
Example
>>> from atom import ATOMClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=100, flip_y=0.2, random_state=1)
>>> atom = ATOMClassifier(X, y, random_state=1)
>>> atom.run(["ET", "RF"], n_trials=15)
>>> atom.plot_trials()
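As a follow-up to the example, the parameters described above can narrow and persist the plot; models="RF" and filename="auto" here are illustrative, and metric (with + to combine options) only applies to multi-metric runs:
>>> atom.plot_trials(models="RF", filename="auto")  # plot one model and save with automatic naming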