plot_results


method plot_results(models=None, metric=None, rows="test", title=None, legend="lower right", figsize=None, filename=None, display=True)[source]
Compare metric results of the models.

Shows a barplot of the metric scores. Models are ordered by score from the top down.

Parameters

models: int, str, Model, segment, sequence or None, default=None
Models to plot. If None, all models are selected.

metric: int, str, sequence or None, default=None
Metric to plot. Choose from any of sklearn's scorers, a function with signature metric(y_true, y_pred, **kwargs) or a scorer object. Use a sequence or add + between options to select more than one. If None, the metric used to run the pipeline is selected. Other available options are: "time_bo", "time_fit", "time_bootstrap", "time".

rows: hashable, segment, sequence or dataframe, default="test"
Selection of rows on which to calculate the metric. This parameter is ignored if metric is a time metric.

title: str, dict or None, default=None
Title for the plot.

legend: str, dict or None, default="lower right"
Legend for the plot. See the user guide for an extended description of the choices.

  • If None: No legend is shown.
  • If str: Position to display the legend.
  • If dict: Legend configuration.

figsize: tuple or None, default=None
Figure's size in pixels, in the format (x, y). If None, the size adapts to the number of models.

filename: str, Path or None, default=None
Save the plot using this name. Use "auto" for automatic naming. The file type depends on the provided extension (.html, .png, .pdf, etc.). If filename has no extension, the plot is saved as html. If None, the plot is not saved.

display: bool or None, default=True
Whether to render the plot. If None, it returns the figure.

Returns

go.Figure or None
Plot object. Only returned if display=None.
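As a minimal sketch of the display and filename options described above (assuming a fitted atom instance such as the one in the Example section below; the name results.png is only illustrative):

>>> # Return the go.Figure object instead of rendering the plot
>>> fig = atom.plot_results(metric="f1", display=None)

>>> # Render the plot and save it; the extension determines the file type
>>> atom.plot_results(metric="f1", filename="results.png")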


See Also

plot_bootstrap

Plot the bootstrapping scores.

plot_probabilities

Plot the probability distribution of the target classes.

plot_threshold

Plot metric performances against threshold values.


Example

>>> from atom import ATOMClassifier
>>> from sklearn.datasets import make_classification

>>> X, y = make_classification(n_samples=1000, flip_y=0.2, random_state=1)

>>> atom = ATOMClassifier(X, y, random_state=1)
>>> atom.run(["GNB", "LR"], metric=["f1", "recall"])
>>> atom.plot_results()

>>> # Plot the time it took to fit the models
>>> atom.plot_results(metric="time_fit+time")

>>> # Plot a different metric
>>> atom.plot_results(metric="accuracy")

>>> # Plot the results on the training set
>>> atom.plot_results(metric="f1", rows="train")
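The metric parameter also accepts a callable with signature metric(y_true, y_pred, **kwargs), as noted above. A minimal sketch using scikit-learn's fbeta_score; the f2 helper is defined here purely for illustration:

>>> from sklearn.metrics import fbeta_score

>>> # Custom metric following the metric(y_true, y_pred, **kwargs) signature
>>> def f2(y_true, y_pred, **kwargs):
...     return fbeta_score(y_true, y_pred, beta=2)

>>> atom.plot_results(metric=f2)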