plot_learning_curve


method plot_learning_curve(models=None, metric=None, title=None, legend="lower right", figsize=(900, 600), filename=None, display=True)
Plot the learning curve: score vs number of training samples.

This plot is available only for models fitted using train sizing. Ensembles are ignored.

Parameters

models: int, str, Model, segment, sequence or None, default=None
Models to plot. If None, all models are selected.

metric: int, str, sequence or None, default=None
Metric to plot (only for multi-metric runs). Use a sequence or add + between options to select more than one. If None, the metric used to run the pipeline is selected.

title: str, dict or None, default=None
Title for the plot.

legend: str, dict or None, default="lower right"
Legend for the plot. See the user guide for an extended description of the choices.

  • If None: No legend is shown.
  • If str: Position to display the legend.
  • If dict: Legend configuration.

figsize: tuple, default=(900, 600)
Figure's size in pixels, format as (x, y).

filename: str, Path or None, default=None
Save the plot using this name. Use "auto" for automatic naming. The file type depends on the provided extension (.html, .png, .pdf, etc.). If filename has no extension, the plot is saved as HTML. If None, the plot is not saved.

display: bool or None, default=True
Whether to render the plot. If None, it returns the figure.

Returns

go.Figure or None
Plot object. Only returned if display=None.
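
For reference, a minimal sketch (assuming a runner already fitted with train sizing, as in the example below) of retrieving the figure with display=None for further customization; the layout update and file name are illustrative, not part of this method's API:

>>> fig = atom.plot_learning_curve(display=None)  # returns a plotly go.Figure
>>> fig.update_layout(title_text="Learning curve")  # customize with plotly directly
>>> fig.write_html("learning_curve.html")  # save manually instead of using filename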


See Also

plot_results

Compare metric results of the models.

plot_successive_halving

Plot scores per iteration of the successive halving.


Example

>>> from atom import ATOMClassifier
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> atom = ATOMClassifier(X, y, random_state=1)
>>> atom.train_sizing(["LR", "RF"], n_bootstrap=5)
>>> atom.plot_learning_curve()
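
A hedged follow-up on the same fitted instance; the model selection and file name are illustrative. Per the filename parameter above, a name without an extension is saved as HTML:

>>> atom.plot_learning_curve(models="LR", filename="learning_curve")  # saved as learning_curve.html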