plot_feature_importance
method plot_feature_importance(models=None, show=None, title=None, legend="lower right", figsize=None, filename=None, display=True)[source]
Plot a model's feature importance.
The sum of importances for all features (per model) is 1.
This plot is available only for models whose estimator has
a scores_, feature_importances_ or coef_ attribute.
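These attributes follow the standard scikit-learn convention for fitted estimators. A minimal standalone sketch (plain scikit-learn, independent of this method) showing where each attribute comes from:
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = load_breast_cancer(return_X_y=True)
>>> # Tree ensembles expose feature_importances_, already normalized to sum to 1
>>> rf = RandomForestClassifier(random_state=1).fit(X, y)
>>> rf.feature_importances_.sum()  # ~1.0
>>> # Linear models expose coef_ instead: one weight per feature (per class)
>>> lr = LogisticRegression(max_iter=5000).fit(X, y)
>>> lr.coef_.shape  # (1, 30)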
Parameters
models: int, str, Model, segment, sequence or None, default=None
Models to plot. If None, all models are selected.
show: int or None, default=None
Number of features (ordered by importance) to show. If
None, it shows all features.
title: str, dict or None, default=None
Title for the plot.
legend: str, dict or None, default="lower right"
Legend for the plot. See the user guide for
an extended description of the choices.
figsize: tuple or None, default=None
Figure's size in pixels, format as (x, y). If None, it
adapts the size to the number of features shown.
filename: str, Path or None, default=None
Save the plot using this name. Use "auto" for automatic
naming. The type of the file depends on the provided name
(.html, .png, .pdf, etc...). If filename has no file type,
the plot is saved as html. If None, the plot is not saved.
display: bool or None, default=True
Whether to render the plot. If None, it returns the figure.
Returns
go.Figure or None
Plot object. Only returned if display=None.
See Also
plot_parshap
Plot the partial correlation of shap values.
plot_partial_dependence
Plot the partial dependence of features.
plot_permutation_importance
Plot the feature permutation importance of models.
Example
>>> from atom import ATOMClassifier
>>> from sklearn.datasets import load_breast_cancer
>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> atom = ATOMClassifier(X, y, random_state=1)
>>> atom.run(["LR", "RF"])
>>> atom.plot_feature_importance(show=10)
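The display and filename parameters described above combine with the same call; a short sketch extending the example (the file name used here is an arbitrary choice):
>>> # Return the figure object instead of rendering it
>>> fig = atom.plot_feature_importance(show=10, display=None)
>>> # Save the plot; an .html extension keeps it interactive
>>> atom.plot_feature_importance(show=10, filename="feature_importance.html")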