Scaler
class atom.data_cleaning.Scaler(strategy="standard", include_binary=False, device="cpu", engine="sklearn", verbose=0, logger=None, **kwargs)[source]
Scale the data.
Apply one of sklearn's scalers. Categorical columns are ignored.
This class can be accessed from atom through the scale method. Read more in the user guide.
Parameters | strategy: str, default="standard"
Strategy with which to scale the data. Choose from: "standard", "minmax", "maxabs" or "robust".
include_binary: bool, default=False
Whether to scale binary columns (only 0s and 1s).
device: str, default="cpu"
Device on which to train the estimators. Use any string that follows the SYCL_DEVICE_FILTER filter selector, e.g. device="gpu" to use the GPU. Read more in the user guide.
engine: str, default="sklearn"
Execution engine to use for the estimators. Refer to the user guide for an explanation regarding every choice. Choose from: "sklearn" or "cuml".
verbose: int, default=0
Verbosity level of the class. Choose from: 0 to not print anything, 1 to print basic information and 2 to print detailed information.
logger: str, Logger or None, default=None
Name of the log file or Logger object. If None, no logging file is created.
**kwargs
Additional keyword arguments for the strategy estimator.
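As a minimal sketch of how the keyword arguments are forwarded to the underlying sklearn estimator (assuming the "minmax" strategy maps to sklearn's MinMaxScaler, so that feature_range is a valid keyword):
>>> from atom.data_cleaning import Scaler
>>> # "minmax" is assumed to map to sklearn's MinMaxScaler, so feature_range
>>> # is passed on to it through **kwargs.
>>> scaler = Scaler(strategy="minmax", feature_range=(0, 2))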
Attributes | [strategy]: sklearn transformer
Object with which the data is scaled.
feature_names_in_: np.array
Names of features seen during fit.
n_features_in_: int
Number of features seen during fit.
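A short sketch of how these attributes can be inspected after fitting (the attribute holding the fitted transformer is assumed to be named after the strategy, per the [strategy] notation above):
>>> from atom.data_cleaning import Scaler
>>> from sklearn.datasets import load_breast_cancer
>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> scaler = Scaler(strategy="standard")
>>> scaler.fit(X)
>>> scaler.n_features_in_     # 30, the number of columns seen during fit
>>> scaler.feature_names_in_  # array with the 30 column names
>>> scaler.standard           # fitted sklearn transformer (attribute name assumed from [strategy])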
See Also
Balancer: Balance the number of samples per class in the target column.
Normalizer: Transform the data to follow a Normal/Gaussian distribution.
Scaler: Scale the data.
Example
>>> from atom import ATOMClassifier
>>> from sklearn.datasets import load_breast_cancer
>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> atom = ATOMClassifier(X, y)
>>> print(atom.dataset)
mean radius mean texture ... worst fractal dimension target
0 17.99 10.38 ... 0.11890 0
1 12.25 17.94 ... 0.08132 1
2 13.87 20.70 ... 0.08492 1
3 12.06 12.74 ... 0.07898 1
4 12.62 17.15 ... 0.07330 1
.. ... ... ... ... ...
564 11.34 18.61 ... 0.06783 1
565 11.43 17.31 ... 0.08096 1
566 11.06 14.96 ... 0.09080 1
567 13.20 15.82 ... 0.08385 1
568 20.55 20.86 ... 0.07569 0
[569 rows x 31 columns]
>>> atom.scale(verbose=2)
Fitting Scaler...
Scaling features...
>>> # Note the scaled features
>>> print(atom.dataset)
mean radius mean texture ... worst fractal dimension target
0 1.052603 -2.089926 ... 1.952598 0
1 -0.529046 -0.336627 ... -0.114004 1
2 -0.082657 0.303467 ... 0.083968 1
3 -0.581401 -1.542600 ... -0.242685 1
4 -0.427093 -0.519842 ... -0.555040 1
.. ... ... ... ... ...
564 -0.779796 -0.181242 ... -0.855847 1
565 -0.754996 -0.482735 ... -0.133801 1
566 -0.856949 -1.027742 ... 0.407321 1
567 -0.267275 -0.828293 ... 0.025126 1
568 1.758008 0.340573 ... -0.423609 0
[569 rows x 31 columns]
>>> from atom.data_cleaning import Scaler
>>> from sklearn.datasets import load_breast_cancer
>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> print(X)
mean radius mean texture ... worst symmetry worst fractal dimension
0 17.99 10.38 ... 0.4601 0.11890
1 20.57 17.77 ... 0.2750 0.08902
2 19.69 21.25 ... 0.3613 0.08758
3 11.42 20.38 ... 0.6638 0.17300
4 20.29 14.34 ... 0.2364 0.07678
.. ... ... ... ... ...
564 21.56 22.39 ... 0.2060 0.07115
565 20.13 28.25 ... 0.2572 0.06637
566 16.60 28.08 ... 0.2218 0.07820
567 20.60 29.33 ... 0.4087 0.12400
568 7.76 24.54 ... 0.2871 0.07039
[569 rows x 30 columns]
>>> scaler = Scaler(verbose=2)
>>> X = scaler.fit_transform(X)
Fitting Scaler...
Scaling features...
>>> # Note the scaled features
>>> print(X)
mean radius mean texture ... worst symmetry worst fractal dimension
0 1.097064 -2.073335 ... 2.750622 1.937015
1 1.829821 -0.353632 ... -0.243890 0.281190
2 1.579888 0.456187 ... 1.152255 0.201391
3 -0.768909 0.253732 ... 6.046041 4.935010
4 1.750297 -1.151816 ... -0.868353 -0.397100
.. ... ... ... ... ...
564 2.110995 0.721473 ... -1.360158 -0.709091
565 1.704854 2.085134 ... -0.531855 -0.973978
566 0.702284 2.045574 ... -1.104549 -0.318409
567 1.838341 2.336457 ... 1.919083 2.219635
568 -1.808401 1.221792 ... -0.048138 -0.751207
[569 rows x 30 columns]
Methods
fit | Fit to data. |
fit_transform | Fit to data, then transform it. |
get_params | Get parameters for this estimator. |
inverse_transform | Apply the inverse transformation to the data. |
log | Print message and save to log file. |
save | Save the instance to a pickle file. |
set_params | Set the parameters of this estimator. |
transform | Perform standardization by centering and scaling. |
method fit(X, y=None)[source]
Fit to data.
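A common pattern is to fit the scaler on the training set only and reuse it for held-out data, so no information leaks from the test set; a minimal sketch (the train/test split is purely illustrative):
>>> from atom.data_cleaning import Scaler
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.model_selection import train_test_split
>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> X_train, X_test = train_test_split(X, random_state=1)
>>> scaler = Scaler(strategy="standard")
>>> scaler.fit(X_train)                # statistics are computed on the training set only
>>> X_train = scaler.transform(X_train)
>>> X_test = scaler.transform(X_test)  # the same statistics are reused for unseen data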
method fit_transform(X=None, y=None, **fit_params)[source]
Fit to data, then transform it.
method get_params(deep=True)[source]
Get parameters for this estimator.
Parameters | deep : bool, default=True
If True, will return the parameters for this estimator and
contained subobjects that are estimators.
Returns | params : dict
Parameter names mapped to their values.
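Following the sklearn convention, the returned dictionary should contain the constructor parameters (strategy, include_binary, device, engine, verbose, logger); a minimal sketch:
>>> from atom.data_cleaning import Scaler
>>> scaler = Scaler(strategy="standard", verbose=1)
>>> params = scaler.get_params()
>>> params["strategy"]  # "standard"
>>> params["verbose"]   # 1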
method inverse_transform(X, y=None)[source]
Apply the inverse transformation to the data.
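A round trip through transform and inverse_transform should recover the original feature values up to floating-point precision; a short sketch:
>>> from atom.data_cleaning import Scaler
>>> from sklearn.datasets import load_breast_cancer
>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> scaler = Scaler(strategy="standard")
>>> X_scaled = scaler.fit_transform(X)
>>> X_restored = scaler.inverse_transform(X_scaled)  # approximately equal to the original X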
method log(msg, level=0, severity="info")[source]
Print message and save to log file.
method save(filename="auto", save_data=True)[source]
Save the instance to a pickle file.
Parameters | filename: str, default="auto"
Name of the file. Use "auto" for automatic naming.
save_data: bool, default=True
Whether to save the dataset with the instance. This
parameter is ignored if the method is not called from
atom. If False, remember to add the data to ATOMLoader
when loading the file.
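Since the instance is written as a regular pickle file, it can presumably be reloaded with the pickle module; a sketch (the exact file name produced on disk is an assumption here):
>>> import pickle
>>> from atom.data_cleaning import Scaler
>>> from sklearn.datasets import load_breast_cancer
>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> scaler = Scaler()
>>> scaler.fit(X)
>>> scaler.save("scaler")  # writes the instance to a pickle file
>>> # File name assumed; adjust to the file actually created by save.
>>> with open("scaler.pkl", "rb") as f:
...     loaded = pickle.load(f)
>>> X_scaled = loaded.transform(X)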
method set_params(**params)[source]
Set the parameters of this estimator.
Parameters | **params : dict
Estimator parameters.
Returns | self : estimator instance
Estimator instance.
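Parameters can be updated in place before fitting, mirroring sklearn's estimator API; for example:
>>> from atom.data_cleaning import Scaler
>>> scaler = Scaler()
>>> scaler.set_params(include_binary=True, verbose=1)
>>> scaler.get_params()["include_binary"]  # True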
method transform(X, y=None)[source]
Perform standardization by centering and scaling.
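With the default "standard" strategy, each numerical column x is standardized as z = (x - mean(x)) / std(x), which is what sklearn's StandardScaler computes; a sketch comparing the two (assuming "standard" wraps StandardScaler):
>>> import numpy as np
>>> from atom.data_cleaning import Scaler
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.preprocessing import StandardScaler
>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> X_atom = Scaler(strategy="standard").fit_transform(X)
>>> X_sklearn = StandardScaler().fit_transform(X)
>>> np.allclose(X_atom.to_numpy(), X_sklearn)  # expected True if "standard" wraps StandardScaler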