plot_results
method plot_results(models=None, metric=None, rows="test", title=None, legend="lower right", figsize=None, filename=None, display=True)
Compare metric results of the models.
Shows a barplot of the metric scores. Models are ordered based on their score from the top down.
Parameters
models: int, str, Model, segment, sequence or None, default=None
Models to plot. If None, all models are selected.
metric: int, str, sequence or None, default=None
Metric to plot. Choose from any of sklearn's scorers, a
function with signature metric(y_true, y_pred, **kwargs)
or a scorer object. Use a sequence or add + between
options to select more than one. If None, the metric used
to run the pipeline is selected. Other available options
are: "time_bo", "time_fit", "time_bootstrap", "time".
rows: hashable, segment, sequence or dataframe, default="test"
Selection of rows on which to calculate the metric. This
parameter is ignored if metric is a time metric.
title: str, dict or None, default=None
Title for the plot.
legend: str, dict or None, default="lower right"
Legend for the plot. See the user guide for
an extended description of the choices.
figsize: tuple or None, default=None
Figure's size in pixels, format as (x, y). If None, it
adapts the size to the number of models.
filename: str, Path or None, default=None
Save the plot using this name. Use "auto" for automatic
naming. The type of the file depends on the provided name
(.html, .png, .pdf, etc...). If filename has no file type,
the plot is saved as html. If None, the plot is not saved.
display: bool or None, default=True
Whether to render the plot. If None, it returns the figure.
Returns
go.Figure or None
Plot object. Only returned if display=None.
See Also
Plot the bootstrapping scores.
Plot the probability distribution of the target classes.
Plot metric performances against threshold values.
Example
>>> from atom import ATOMClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=1000, flip_y=0.2, random_state=1)
>>> atom = ATOMClassifier(X, y, random_state=1)
>>> atom.run(["GNB", "LR"], metric=["f1", "recall"])
>>> atom.plot_results()
>>> # Plot the time it took to fit the models
>>> atom.plot_results(metric="time_fit+time")
>>> # Plot a different metric
>>> atom.plot_results(metric="accuracy")
>>> # Plot the results on the training set
>>> atom.plot_results(metric="f1", rows="train")
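As documented above, metric also accepts a function with signature metric(y_true, y_pred, **kwargs) or a scorer object. A minimal sketch of building such a custom metric with sklearn's make_scorer (the atom.plot_results call is shown commented out as hypothetical usage, assuming a fitted atom instance like the one in the example):

```python
from sklearn.metrics import f1_score, make_scorer

# Custom metric matching the documented signature
# metric(y_true, y_pred, **kwargs).
def weighted_f1(y_true, y_pred, **kwargs):
    return f1_score(y_true, y_pred, average="weighted")

# Wrap it as a scorer object, which metric also accepts.
scorer = make_scorer(weighted_f1)

# Hypothetical usage with a fitted atom instance:
# atom.plot_results(metric=scorer)
```

Passing the bare function or the scorer object should be equivalent here; the scorer form is useful when the metric needs extra arguments (e.g. greater_is_better=False) baked in via make_scorer.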