plot_confusion_matrix
method plot_confusion_matrix(models=None, dataset="test", title=None, legend="upper right", figsize=None, filename=None, display=True)
Plot a model's confusion matrix.
For one model, the plot shows a heatmap. For multiple models, it compares the true positives (TP), false positives (FP), false negatives (FN) and true negatives (TN) in a barplot (not implemented for multiclass classification tasks). This plot is only available for classification tasks.
Parameters
models: int, str, Model, slice, sequence or None, default=None
    Models to plot. If None, all models are selected.
dataset: str, default="test"
    Data set on which to calculate the confusion matrix. Choose
    from: "train", "test" or "holdout".
title: str, dict or None, default=None
    Title for the plot.
legend: str, dict or None, default="upper right"
    Legend for the plot. See the user guide for an extended
    description of the choices.
figsize: tuple or None, default=None
    Figure's size in pixels, format as (x, y). If None, it
    adapts the size to the plot's type.
filename: str or None, default=None
    Save the plot using this name. Use "auto" for automatic
    naming. The type of the file depends on the provided name
    (.html, .png, .pdf, etc.). If filename has no file type,
    the plot is saved as html. If None, the plot is not saved.
display: bool or None, default=True
    Whether to render the plot. If None, it returns the figure.
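For illustration, a minimal sketch combining a few of these parameters, assuming the fitted atom instance from the example below (the file name "matrix" is chosen for illustration):

>>> # Matrix computed on the training set; saved as matrix.html
>>> # since the name carries no file type (illustrative name).
>>> atom.plot_confusion_matrix(dataset="train", filename="matrix")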
Returns
go.Figure or None
    Plot object. Only returned if display=None.
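Because display=None returns the plotly figure instead of rendering it, the object can be adjusted before display. A minimal sketch, assuming a fitted atom instance (the title text is illustrative):

>>> fig = atom.plot_confusion_matrix(display=None)  # go.Figure, not rendered
>>> fig.update_layout(title_text="Confusion matrix on the test set")
>>> fig.show()  # render manually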
See Also
plot_calibration
    Plot the calibration curve for a binary classifier.
plot_threshold
    Plot metric performances against threshold values.
Example
>>> import pandas as pd
>>> from atom import ATOMClassifier
>>> X = pd.read_csv("./examples/datasets/weatherAUS.csv")
>>> atom = ATOMClassifier(X, y="RainTomorrow", n_rows=1e4)
>>> atom.impute()
>>> atom.encode()
>>> atom.run(["LR", "RF"])
>>> atom.lr.plot_confusion_matrix() # For one model
>>> atom.plot_confusion_matrix() # For multiple models