plot_permutation_importance


method plot_permutation_importance(models=None, show=None, n_repeats=10, title=None, legend="lower right", figsize=None, filename=None, display=True)[source]
Plot the feature permutation importance of models.

Warning

This method can be slow. Results are cached to speed up repeated calls.

Parameters

models: int, str, Model, segment, sequence or None, default=None
Models to plot. If None, all models are selected.

show: int or None, default=None
Number of features (ordered by importance) to show. If None, it shows all features.

n_repeats: int, default=10
Number of times to permute each feature.

title: str, dict or None, default=None
Title for the plot.

legend: str, dict or None, default="lower right"
Legend for the plot. See the user guide for an extended description of the choices.

  • If None: No legend is shown.
  • If str: Position to display the legend.
  • If dict: Legend configuration.

figsize: tuple or None, default=None
Figure's size in pixels, format as (x, y). If None, it adapts the size to the number of features shown.

filename: str, Path or None, default=None
Save the plot using this name. Use "auto" for automatic naming. The type of the file depends on the provided name (.html, .png, .pdf, etc.). If filename has no file type, the plot is saved as html. If None, the plot is not saved.

display: bool or None, default=True
Whether to render the plot. If None, it returns the figure.

Returns

go.Figure or None
Plot object. Only returned if display=None.
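
A minimal sketch of the return behavior, assuming a fitted atom instance like the one in the example below (the layout tweak and output name are purely illustrative): with display=None the method returns the go.Figure instead of rendering it, so it can be customized and saved manually.

>>> fig = atom.plot_permutation_importance(show=10, display=None)
>>> fig.update_layout(title_text="Permutation importance")
>>> fig.write_html("permutation_importance.html")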


See Also

plot_feature_importance

Plot a model's feature importance.

plot_partial_dependence

Plot the partial dependence of features.

plot_parshap

Plot the partial correlation of shap values.


Example

>>> from atom import ATOMClassifier
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> atom = ATOMClassifier(X, y, random_state=1)
>>> atom.run(["LR", "RF"])
>>> atom.plot_permutation_importance(show=10, n_repeats=7)
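
As a follow-up sketch (the file name here is illustrative), the filename argument writes the plot straight to disk; the extension determines the file type, and without one the plot is saved as html.

>>> atom.plot_permutation_importance(show=10, filename="permutation_importance.html")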