plot_calibration
Well calibrated classifiers are probabilistic classifiers for which the output of the predict_proba method can be directly interpreted as a confidence level. For instance, a well calibrated (binary) classifier should classify the samples such that, among the samples to which it gave a predict_proba value close to 0.8, approximately 80% actually belong to the positive class. Read more in sklearn's documentation.
This figure shows two plots: the calibration curve, where the x-axis represents the average predicted probability in each bin and the y-axis is the fraction of positives, i.e., the proportion of samples in each bin whose class is the positive class; and the distribution of all predicted probabilities of the classifier. This plot is only available for binary classification tasks.
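As a minimal sketch of how these bin values are computed, outside of ATOM and on synthetic data (the plot itself handles the binning for you), using scikit-learn's calibration_curve:

>>> from sklearn.calibration import calibration_curve
>>> from sklearn.datasets import make_classification
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = make_classification(n_samples=2000, random_state=1)
>>> y_prob = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
>>> # y-axis: fraction of positives per bin; x-axis: mean predicted probability per bin
>>> prob_true, prob_pred = calibration_curve(y, y_prob, n_bins=10)
>>> # E.g., among samples with a predicted probability close to 0.8,
>>> # roughly 80% should be positive for a well calibrated classifier.
>>> y[(y_prob > 0.75) & (y_prob < 0.85)].mean()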
Tip
Use the calibrate method to calibrate the winning model.
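For example, assuming a fitted atom instance like the one in the example below (using the winner attribute and calibrate's defaults is an assumption here; see the calibrate documentation for the exact options):

>>> atom.winner.calibrate()  # recalibrate the winning model's probabilities
>>> atom.plot_calibration()  # re-inspect the calibration curve afterwards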
Parameters
models: int, str, Model, slice, sequence or None, default=None
    Models to plot. If None, all models are selected.

n_bins: int, default=10
    Number of bins used for calibration. Minimum of 5 required.

title: str, dict or None, default=None
    Title for the plot.

legend: str, dict or None, default="upper left"
    Legend for the plot. See the user guide for an extended description of the choices.

figsize: tuple, default=(900, 900)
    Figure's size in pixels, format as (x, y).

filename: str or None, default=None
    Save the plot using this name. Use "auto" for automatic naming. The type of the file depends on the provided name (.html, .png, .pdf, etc...). If filename has no file type, the plot is saved as html. If None, the plot is not saved.

display: bool or None, default=True
    Whether to render the plot. If None, it returns the figure.

Returns
go.Figure or None
    Plot object. Only returned if display=None.
Example
>>> import pandas as pd
>>> from atom import ATOMClassifier
>>> X = pd.read_csv("./examples/datasets/weatherAUS.csv")
>>> atom = ATOMClassifier(X, y="RainTomorrow", n_rows=1e4)
>>> atom.impute()
>>> atom.encode()
>>> atom.run(["RF", "LGB"])
>>> atom.plot_calibration()
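The parameters documented above can be combined in the same call, for instance (illustrative values):

>>> fig = atom.plot_calibration(models=["RF", "LGB"], n_bins=15, display=None)  # returns the figure object
>>> atom.plot_calibration(filename="calibration", display=False)  # saves calibration.html without rendering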