
Scaler


class atom.data_cleaning.Scaler(strategy="standard", include_binary=False, device="cpu", engine="sklearn", verbose=0, logger=None, **kwargs)[source]
Scale the data.

Apply one of sklearn's scalers. Categorical columns are ignored.

This class can be accessed from atom through the scale method. Read more in the user guide.

Parameters

strategy: str, default="standard"
Strategy with which to scale the data. Choose from:

  • "standard": Remove mean and scale to unit variance.
  • "minmax": Scale features to a given range.
  • "maxabs": Scale features by their maximum absolute value.
  • "robust": Scale using statistics that are robust to outliers.
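
The four strategies correspond to sklearn's StandardScaler, MinMaxScaler, MaxAbsScaler and RobustScaler. As a rough pure-Python sketch of what each one computes (simplified quartile computation, default ranges assumed; the real estimators handle edge cases this sketch ignores):

```python
from statistics import mean, median, pstdev

x = [1.0, 2.0, 3.0, 4.0, 100.0]  # one extreme outlier

# "standard": remove the mean, divide by the standard deviation
mu, sigma = mean(x), pstdev(x)
standard = [(v - mu) / sigma for v in x]

# "minmax": rescale linearly to the [0, 1] range
lo, hi = min(x), max(x)
minmax = [(v - lo) / (hi - lo) for v in x]

# "maxabs": divide by the maximum absolute value
peak = max(abs(v) for v in x)
maxabs = [v / peak for v in x]

# "robust": subtract the median, divide by the interquartile range,
# so the outlier barely influences the statistics
xs = sorted(x)
q1, q3 = xs[1], xs[3]  # simplified quartiles for this 5-point sample
robust = [(v - median(x)) / (q3 - q1) for v in x]
```

Note how the outlier dominates the mean and range (pulling the standard and minmax statistics), while the median and interquartile range used by "robust" stay anchored to the bulk of the data.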

include_binary: bool, default=False
Whether to scale binary columns (only 0s and 1s).
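
A binary column here means one whose values are limited to 0 and 1, such as dummy-encoded features. A minimal sketch of such a check (`is_binary` is a hypothetical helper for illustration, not part of the API):

```python
def is_binary(column):
    """Return True if the column contains only 0s and 1s."""
    return set(column) <= {0, 1}

columns = {
    "age": [23, 45, 31, 60],
    "smoker": [0, 1, 1, 0],
}

# With include_binary=False (the default), only non-binary
# columns would be scaled.
to_scale = [name for name, col in columns.items() if not is_binary(col)]
```

Leaving binary columns untouched is usually preferable, since scaling a dummy variable destroys its 0/1 interpretation without improving most estimators.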

device: str, default="cpu"
Device on which to train the estimators. Use any string that follows the SYCL_DEVICE_FILTER selector, e.g. device="gpu" to use the GPU. Read more in the user guide.

engine: str, default="sklearn"
Execution engine to use for the estimators. Refer to the user guide for an explanation regarding every choice. Choose from:

  • "sklearn" (only if device="cpu")
  • "cuml" (only if device="gpu")

verbose: int, default=0
Verbosity level of the class. Choose from:

  • 0 to not print anything.
  • 1 to print basic information.

logger: str, Logger or None, default=None

  • If None: Doesn't save a logging file.
  • If str: Name of the log file. Use "auto" for automatic naming.
  • Else: Python logging.Logger instance.

**kwargs
Additional keyword arguments for the strategy estimator.
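
For example, with strategy="minmax", the underlying sklearn MinMaxScaler accepts a feature_range keyword, so `Scaler(strategy="minmax", feature_range=(-1, 1))` would forward it. A pure-Python sketch of what that keyword does:

```python
def minmax_scale(x, feature_range=(0, 1)):
    """Rescale values linearly into the given range."""
    a, b = feature_range
    lo, hi = min(x), max(x)
    return [a + (v - lo) * (b - a) / (hi - lo) for v in x]

scaled = minmax_scale([10.0, 20.0, 30.0], feature_range=(-1, 1))
# scaled == [-1.0, 0.0, 1.0]
```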

Attributes

[strategy]: sklearn transformer
Object with which the data is scaled.

feature_names_in_: np.ndarray
Names of features seen during fit.

n_features_in_: int
Number of features seen during fit.


See Also

Balancer

Balance the number of samples per class in the target column.

Normalizer

Transform the data to follow a Normal/Gaussian distribution.



Example

>>> from atom import ATOMClassifier
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> atom = ATOMClassifier(X, y)
>>> print(atom.dataset)

     mean radius  mean texture  ...  worst fractal dimension  target
0          17.99         10.38  ...                  0.11890       0
1          12.25         17.94  ...                  0.08132       1
2          13.87         20.70  ...                  0.08492       1
3          12.06         12.74  ...                  0.07898       1
4          12.62         17.15  ...                  0.07330       1
..           ...           ...  ...                      ...     ...
564        11.34         18.61  ...                  0.06783       1
565        11.43         17.31  ...                  0.08096       1
566        11.06         14.96  ...                  0.09080       1
567        13.20         15.82  ...                  0.08385       1
568        20.55         20.86  ...                  0.07569       0

[569 rows x 31 columns]

>>> atom.scale(verbose=2)

Fitting Scaler...
Scaling features...

>>> # Note the scaled features
>>> print(atom.dataset)

     mean radius  mean texture  ...  worst fractal dimension  target
0       1.052603     -2.089926  ...                 1.952598       0
1      -0.529046     -0.336627  ...                -0.114004       1
2      -0.082657      0.303467  ...                 0.083968       1
3      -0.581401     -1.542600  ...                -0.242685       1
4      -0.427093     -0.519842  ...                -0.555040       1
..           ...           ...  ...                      ...     ...
564    -0.779796     -0.181242  ...                -0.855847       1
565    -0.754996     -0.482735  ...                -0.133801       1
566    -0.856949     -1.027742  ...                 0.407321       1
567    -0.267275     -0.828293  ...                 0.025126       1
568     1.758008      0.340573  ...                -0.423609       0

[569 rows x 31 columns]

>>> from atom.data_cleaning import Scaler
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> print(X)

     mean radius  mean texture  ...  worst symmetry  worst fractal dimension
0          17.99         10.38  ...          0.4601                  0.11890
1          20.57         17.77  ...          0.2750                  0.08902
2          19.69         21.25  ...          0.3613                  0.08758
3          11.42         20.38  ...          0.6638                  0.17300
4          20.29         14.34  ...          0.2364                  0.07678
..           ...           ...  ...             ...                      ...
564        21.56         22.39  ...          0.2060                  0.07115
565        20.13         28.25  ...          0.2572                  0.06637
566        16.60         28.08  ...          0.2218                  0.07820
567        20.60         29.33  ...          0.4087                  0.12400
568         7.76         24.54  ...          0.2871                  0.07039

[569 rows x 30 columns]

>>> scaler = Scaler(verbose=2)
>>> X = scaler.fit_transform(X)

Fitting Scaler...
Scaling features...

>>> # Note the scaled features
>>> print(X)

     mean radius  mean texture  ...  worst symmetry  worst fractal dimension
0       1.097064     -2.073335  ...        2.750622                 1.937015
1       1.829821     -0.353632  ...       -0.243890                 0.281190
2       1.579888      0.456187  ...        1.152255                 0.201391
3      -0.768909      0.253732  ...        6.046041                 4.935010
4       1.750297     -1.151816  ...       -0.868353                -0.397100
..           ...           ...  ...             ...                      ...
564     2.110995      0.721473  ...       -1.360158                -0.709091
565     1.704854      2.085134  ...       -0.531855                -0.973978
566     0.702284      2.045574  ...       -1.104549                -0.318409
567     1.838341      2.336457  ...        1.919083                 2.219635
568    -1.808401      1.221792  ...       -0.048138                -0.751207

[569 rows x 30 columns]


Methods

fit                  Fit to data.
fit_transform        Fit to data, then transform it.
get_params           Get parameters for this estimator.
inverse_transform    Apply the inverse transformation to the data.
log                  Print message and save to log file.
save                 Save the instance to a pickle file.
set_params           Set the parameters of this estimator.
transform            Perform standardization by centering and scaling.


method fit(X, y=None)[source]
Fit to data.

Parameters

X: dataframe-like
Feature set with shape=(n_samples, n_features).

y: int, str, sequence, dataframe-like or None, default=None
Does nothing. Implemented for continuity of the API.

Returns

Scaler
Estimator instance.
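
fit only computes the scaling statistics; transform then applies them. A pure-Python sketch of that split, assuming the standard strategy: statistics are learned once on the training data and reused to scale any later batch consistently.

```python
from statistics import mean, pstdev

train = [1.0, 2.0, 3.0, 4.0]

# fit: learn the statistics from the training data
mu, sigma = mean(train), pstdev(train)

# transform: reuse those statistics on a new batch
new_batch = [5.0, 6.0]
scaled = [(v - mu) / sigma for v in new_batch]
```

Reusing the training statistics (rather than refitting on each batch) is what keeps train and test data on the same scale and avoids leaking test-set information into the fit.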



method fit_transform(X=None, y=None, **fit_params)[source]
Fit to data, then transform it.

Parameters

X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: int, str, sequence, dataframe-like or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • If sequence: Target column with shape=(n_samples,) or sequence of column names or positions for multioutput tasks.
  • If dataframe-like: Target columns with shape=(n_samples, n_targets) for multioutput tasks.

**fit_params
Additional keyword arguments for the fit method.

Returns

dataframe
Transformed feature set. Only returned if provided.

series
Transformed target column. Only returned if provided.



method get_params(deep=True)[source]
Get parameters for this estimator.

Parameters

deep: bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params: dict
Parameter names mapped to their values.



method inverse_transform(X, y=None)[source]
Apply the inverse transformation to the data.

Parameters

X: dataframe-like
Feature set with shape=(n_samples, n_features).

y: int, str, sequence, dataframe-like or None, default=None
Does nothing. Implemented for continuity of the API.

Returns

dataframe
Scaled dataframe.
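
A round-trip sketch for the standard strategy: scaling followed by the inverse transformation recovers the original values (up to floating-point error).

```python
from statistics import mean, pstdev

x = [2.0, 4.0, 6.0, 8.0]
mu, sigma = mean(x), pstdev(x)

scaled = [(v - mu) / sigma for v in x]       # transform
restored = [s * sigma + mu for s in scaled]  # inverse_transform
```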



method log(msg, level=0, severity="info")[source]
Print message and save to log file.

Parameters

msg: int, float or str
Message to save to the logger and print to stdout.

level: int, default=0
Minimum verbosity level to print the message.

severity: str, default="info"
Severity level of the message. Choose from: debug, info, warning, error, critical.



method save(filename="auto", save_data=True)[source]
Save the instance to a pickle file.

Parameters

filename: str, default="auto"
Name of the file. Use "auto" for automatic naming.

save_data: bool, default=True
Whether to save the dataset with the instance. This parameter is ignored if the method is not called from atom. If False, pass the data to the load method when reloading the instance.



method set_params(**params)[source]
Set the parameters of this estimator.

Parameters

**params: dict
Estimator parameters.

Returns

self: estimator instance
Estimator instance.



method transform(X, y=None)[source]
Perform standardization by centering and scaling.

Parameters

X: dataframe-like
Feature set with shape=(n_samples, n_features).

y: int, str, sequence, dataframe-like or None, default=None
Does nothing. Implemented for continuity of the API.

Returns

dataframe
Scaled dataframe.