
Discretizer


class atom.data_cleaning.Discretizer(strategy="quantile", bins=5, labels=None, device="cpu", engine="sklearn", verbose=0, logger=None, random_state=None)[source]
Bin continuous data into intervals.

For each feature, the bin edges are computed during fit and, together with the number of bins, they define the intervals. Ignores categorical columns.

This class can be accessed from atom through the discretize method. Read more in the user guide.

Tip

The transformation returns categorical columns. Use the Encoder class to convert them back to numerical types.
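
For illustration, a minimal sketch of that round trip through atom (assuming X and y are the breast cancer data from the example below; the encoding strategy is left at its default):

>>> from atom import ATOMClassifier

>>> atom = ATOMClassifier(X, y)
>>> atom.discretize(strategy="quantile", bins=3)  # continuous columns -> categorical bins
>>> atom.encode()  # Encoder maps the bin labels back to numerical values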

Parameters

strategy: str, default="quantile"
Strategy used to define the widths of the bins. Choose from:

  • "uniform": All bins have identical widths.
  • "quantile": All bins have the same number of points.
  • "kmeans": Values in each bin have the same nearest center of a 1D k-means cluster.
  • "custom": Use custom bin edges provided through bins.

bins: int, sequence or dict, default=5
Bin number or bin edges in which to split every column.

  • If int: Number of bins to produce for all columns. Only for strategy!="custom".
  • If sequence:
    • For strategy!="custom": Number of bins per column, allowing for non-uniform width. The n-th value corresponds to the n-th column that is transformed. Note that categorical columns are automatically ignored.
    • For strategy="custom": Bin edges with length=n_bins - 1. The outermost edges are always -inf and +inf, e.g. bins [1, 2] indicate (-inf, 1], (1, 2], (2, inf].
  • If dict: One of the aforementioned options per column, where the key is the column's name (see the sketch after the labels parameter).

labels: sequence, dict or None, default=None
Label names with which to replace the binned intervals.

  • If None: Use default labels of the form (min_edge, max_edge].
  • If sequence: Labels to use for all columns.
  • If dict: Labels per column, where the key is the column's name.
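
When columns require different treatment, both bins and labels accept such a dict keyed by column name. A minimal sketch using two columns of the breast cancer data from the example below (the edges and label names are illustrative):

>>> from atom.data_cleaning import Discretizer
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> disc = Discretizer(
...     strategy="custom",
...     bins={"mean radius": [13, 18], "mean texture": [15, 25]},
...     labels={
...         "mean radius": ["small", "medium", "large"],
...         "mean texture": ["low", "mid", "high"],
...     },
... )
>>> X_bin = disc.fit_transform(X[["mean radius", "mean texture"]])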

device: str, default="cpu"
Device on which to train the estimators. Use any string that follows the SYCL_DEVICE_FILTER filter selector, e.g. device="gpu" to use the GPU. Read more in the user guide.

engine: str, default="sklearn"
Execution engine to use for the estimators. Refer to the user guide for an explanation regarding every choice. Choose from:

  • "sklearn" (only if device="cpu")
  • "cuml" (only if device="gpu")

verbose: int, default=0
Verbosity level of the class. Choose from:

  • 0 to not print anything.
  • 1 to print basic information.
  • 2 to print detailed information.

logger: str, Logger or None, default=None

  • If None: Doesn't save a logging file.
  • If str: Name of the log file. Use "auto" for automatic naming.
  • Else: Python logging.Logger instance.

random_state: int or None, default=None
Seed used by the random number generator. If None, the random number generator is the RandomState used by np.random. Only for strategy="quantile".

Attributes

feature_names_in_: np.array
Names of features seen during fit.

n_features_in_: int
Number of features seen during fit.


See Also

Encoder

Perform encoding of categorical features.

Imputer

Handle missing values in the data.

Normalizer

Transform the data to follow a Normal/Gaussian distribution.


Example

>>> from atom import ATOMClassifier
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> atom = ATOMClassifier(X, y)
>>> print(atom["mean radius"])

0      17.99
1      20.57
2      19.69
3      11.42
4      20.29
       ...
564    21.56
565    20.13
566    16.60
567    20.60
568     7.76

Name: mean radius, Length: 569, dtype: float64

>>> atom.discretize(
...     strategy="custom",
...     bins=[13, 18],
...     labels=["small", "medium", "large"],
...     verbose=2,
...     columns="mean radius",
... )

Fitting Discretizer...
Binning the features...
 --> Discretizing feature mean radius in 3 bins.

>>> print(atom["mean radius"])

0       small
1      medium
2      medium
3      medium
4       small
        ...
564     large
565     small
566     large
567     small
568     small

Name: mean radius, Length: 569, dtype: category
Categories (3, object): ['small' < 'medium' < 'large']
>>> from atom.data_cleaning import Discretizer
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> print(X["mean radius"])

0      17.99
1      20.57
2      19.69
3      11.42
4      20.29
       ...
564    21.56
565    20.13
566    16.60
567    20.60
568     7.76

Name: mean radius, Length: 569, dtype: float64

>>> disc = Discretizer(
...     strategy="custom",
...     bins=[13, 18],
...     labels=["small", "medium", "large"],
...     verbose=2,
... )
>>> X["mean radius"] = disc.fit_transform(X[["mean radius"]])["mean radius"]

Fitting Discretizer...
Binning the features...
 --> Discretizing feature mean radius in 3 bins.

>>> print(X["mean radius"])

0       small
1      medium
2      medium
3      medium
4       small
        ...
564     large
565     small
566     large
567     small
568     small

Name: mean radius, Length: 569, dtype: category
Categories (3, object): ['small' < 'medium' < 'large']


Methods

fit                  Fit to data.
fit_transform        Fit to data, then transform it.
get_params           Get parameters for this estimator.
inverse_transform    Does nothing.
log                  Print message and save to log file.
save                 Save the instance to a pickle file.
set_params           Set the parameters of this estimator.
transform            Bin the data into intervals.


method fit(X, y=None)[source]
Fit to data.

Parameters

X: dataframe-like
Feature set with shape=(n_samples, n_features).

y: int, str, sequence, dataframe-like or None, default=None
Does nothing. Implemented for continuity of the API.

Returns

Discretizer
Estimator instance.



method fit_transform(X=None, y=None, **fit_params)[source]
Fit to data, then transform it.

Parameters

X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: int, str, sequence, dataframe-like or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • If sequence: Target column with shape=(n_samples,) or sequence of column names or positions for multioutput tasks.
  • If dataframe-like: Target columns with shape=(n_samples, n_targets) for multioutput tasks.

**fit_params
Additional keyword arguments for the fit method.

Returns

dataframe
Transformed feature set. Only returned if provided.

series
Transformed target column. Only returned if provided.



method get_params(deep=True)[source]
Get parameters for this estimator.

Parameters

deep: bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params: dict
Parameter names mapped to their values.



method inverse_transform(X=None, y=None)[source]
Does nothing.

Parameters

X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: int, str, sequence, dataframe-like or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • If sequence: Target column with shape=(n_samples,) or sequence of column names or positions for multioutput tasks.
  • If dataframe-like: Target columns with shape=(n_samples, n_targets) for multioutput tasks.

Returns

dataframe
Transformed feature set. Only returned if provided.

series
Transformed target column. Only returned if provided.



method log(msg, level=0, severity="info")[source]
Print message and save to log file.

Parameters

msg: int, float or str
Message to save to the logger and print to stdout.

level: int, default=0
Minimum verbosity level to print the message.

severity: str, default="info"
Severity level of the message. Choose from: debug, info, warning, error, critical.



method save(filename="auto", save_data=True)[source]
Save the instance to a pickle file.

Parameters

filename: str, default="auto"
Name of the file. Use "auto" for automatic naming.

save_data: bool, default=True
Whether to save the dataset with the instance. This parameter is ignored if the method is not called from atom. If False, add the data to the load method.



method set_params(**params)[source]
Set the parameters of this estimator.

Parameters

**params: dict
Estimator parameters.

Returns

self: estimator instance
Estimator instance.
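
Both methods follow the scikit-learn parameter interface, as in this short sketch:

>>> from atom.data_cleaning import Discretizer

>>> disc = Discretizer(strategy="quantile", bins=5)
>>> disc.get_params()["bins"]
5
>>> disc = disc.set_params(bins=10)  # set_params returns the estimator itself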



method transform(X, y=None)[source]
Bin the data into intervals.

Parameters

X: dataframe-like
Feature set with shape=(n_samples, n_features).

y: int, str, sequence, dataframe-like or None, default=None
Does nothing. Implemented for continuity of the API.

Returns

dataframe
Transformed feature set.
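
Because the bin edges are learned during fit, fitting on a training set and reusing the fitted instance keeps the intervals consistent on unseen data. A minimal sketch (the split itself is illustrative):

>>> from atom.data_cleaning import Discretizer
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.model_selection import train_test_split

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> X_train, X_test = train_test_split(X, random_state=1)

>>> disc = Discretizer(strategy="quantile", bins=5)
>>> X_train = disc.fit(X_train).transform(X_train)  # edges computed here
>>> X_test = disc.transform(X_test)  # same edges applied to the held-out data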