
Normalizer


class atom.data_cleaning.Normalizer(strategy="yeojohnson", device="cpu", engine=None, verbose=0, random_state=None, **kwargs)[source]
Transform the data to follow a Normal/Gaussian distribution.

This transformation is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired. Missing values are disregarded in fit and maintained in transform. Categorical columns are ignored.

This class can be accessed from atom through the normalize method. Read more in the user guide.

Warning

The quantile strategy performs a non-linear transformation. This may distort linear correlations between variables measured at the same scale but renders variables measured at different scales more directly comparable.
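
For illustration, a minimal sketch of the quantile strategy on the same breast cancer data used in the example below; only parameters documented on this page are used.

>>> from atom.data_cleaning import Normalizer
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> # Rank-based mapping to a Gaussian shape: ranks within each column are
>>> # preserved, but linear relations between columns may change.
>>> normalizer = Normalizer(strategy="quantile", random_state=1)
>>> X_quantile = normalizer.fit_transform(X)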

Note

The yeojohnson and boxcox strategies scale the data after transforming. Use the kwargs to change this behavior.
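
For example, a hedged sketch of using the kwargs to keep the Yeo-Johnson mapping but skip the subsequent scaling, assuming the default sklearn engine forwards them to PowerTransformer's standardize flag.

>>> from atom.data_cleaning import Normalizer

>>> # standardize=False is passed through **kwargs to the strategy estimator
>>> # (an assumption based on sklearn's PowerTransformer, not stated on this page).
>>> normalizer = Normalizer(strategy="yeojohnson", standardize=False)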

Parameters
strategy: str, default="yeojohnson"
The transforming strategy. Choose from:

  • "yeojohnson"
  • "boxcox" (only works with strictly positive values)
  • "quantile": Transform features using quantiles information.

device: str, default="cpu"
Device on which to run the estimators. Use any string that follows the SYCL_DEVICE_FILTER filter selector, e.g. device="gpu" to use the GPU. Read more in the user guide.

engine: str or None, default=None
Execution engine to use for estimators. If None, the default value is used. Choose from:

  • "sklearn" (default)
  • "cuml"

verbose: int, default=0
Verbosity level of the class. Choose from:

  • 0 to not print anything.
  • 1 to print basic information.
  • 2 to print detailed information.

random_state: int or None, default=None
Seed used by the quantile strategy. If None, the random number generator is the RandomState instance used by np.random.

**kwargs
Additional keyword arguments for the strategy estimator.

Attributes
[strategy]_: sklearn transformer
Object with which the data is transformed, e.g., normalizer.yeojohnson for the default strategy.

feature_names_in_: np.ndarray
Names of features seen during fit.

n_features_in_: int
Number of features seen during fit.
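
A short sketch of inspecting these attributes after fitting; the fitted strategy estimator itself is available under the strategy's name, as described above.

>>> from atom.data_cleaning import Normalizer
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> normalizer = Normalizer().fit(X)
>>> normalizer.n_features_in_     # 30 features seen during fit
>>> normalizer.feature_names_in_  # array with the 30 column names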


See Also

Cleaner

Applies standard data cleaning steps on a dataset.

Pruner

Prune outliers from the data.

Scaler

Scale the data.


Example

>>> from atom import ATOMClassifier
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> atom = ATOMClassifier(X, y, random_state=1)
>>> print(atom.dataset)

     mean radius  mean texture  mean perimeter  mean area  mean smoothness  mean compactness  mean concavity  mean concave points  mean symmetry  ...  worst perimeter  worst area  worst smoothness  worst compactness  worst concavity  worst concave points  worst symmetry  worst fractal dimension  target
0          13.48         20.82           88.40      559.2          0.10160           0.12550         0.10630              0.05439         0.1720  ...           107.30       740.4            0.1610            0.42250           0.5030               0.22580          0.2807                  0.10710       0
1          18.31         20.58          120.80     1052.0          0.10680           0.12480         0.15690              0.09451         0.1860  ...           142.20      1493.0            0.1492            0.25360           0.3759               0.15100          0.3074                  0.07863       0
2          17.93         24.48          115.20      998.9          0.08855           0.07027         0.05699              0.04744         0.1538  ...           135.10      1320.0            0.1315            0.18060           0.2080               0.11360          0.2504                  0.07948       0
3          15.13         29.81           96.71      719.5          0.08320           0.04605         0.04686              0.02739         0.1852  ...           110.10       931.4            0.1148            0.09866           0.1547               0.06575          0.3233                  0.06165       0
4           8.95         15.76           58.74      245.2          0.09462           0.12430         0.09263              0.02308         0.1305  ...            63.34       270.0            0.1179            0.18790           0.1544               0.03846          0.1652                  0.07722       1
..           ...           ...             ...        ...              ...               ...             ...                  ...            ...  ...              ...         ...               ...                ...              ...                   ...             ...                      ...     ...
564        14.34         13.47           92.51      641.2          0.09906           0.07624         0.05724              0.04603         0.2075  ...           110.40       873.2            0.1297            0.15250           0.1632               0.10870          0.3062                  0.06072       1
565        13.17         21.81           85.42      531.5          0.09714           0.10470         0.08259              0.05252         0.1746  ...           105.50       740.7            0.1503            0.39040           0.3728               0.16070          0.3693                  0.09618       0
566        17.30         17.08          113.00      928.2          0.10080           0.10410         0.12660              0.08353         0.1813  ...           130.90      1222.0            0.1416            0.24050           0.3378               0.18570          0.3138                  0.08113       0
567        17.68         20.74          117.40      963.7          0.11150           0.16650         0.18550              0.10540         0.1971  ...           132.90      1302.0            0.1418            0.34980           0.3583               0.15150          0.2463                  0.07738       0
568        14.80         17.66           95.88      674.8          0.09179           0.08890         0.04069              0.02260         0.1893  ...           105.90       829.5            0.1226            0.18810           0.2060               0.08308          0.3600                  0.07285       1

[569 rows x 31 columns]


>>> atom.plot_distribution(columns=0)

>>> atom.normalize(verbose=2)

Fitting Normalizer...
Normalizing features...


>>> print(atom.dataset)

     mean radius  mean texture  mean perimeter  mean area  mean smoothness  mean compactness  mean concavity  mean concave points  mean symmetry  ...  worst perimeter  worst area  worst smoothness  worst compactness  worst concavity  worst concave points  worst symmetry  worst fractal dimension  target
0      -0.017068      0.464087        0.031104  -0.020222         0.390628          0.620790        0.562136             0.426774      -0.280554  ...         0.251532    0.081524          1.224389           1.206519         1.189835              1.522769       -0.043007                 1.378960       0
1       1.182066      0.411242        1.183030   1.200556         0.741209          0.608244        1.100342             1.256472       0.256014  ...         1.119375    1.218096          0.759546           0.244492         0.726989              0.650523        0.424017                -0.164104       0
2       1.105309      1.197684        1.018344   1.106437        -0.552214         -0.652544       -0.230044             0.226950      -1.050816  ...         0.973194    1.037232          0.002307          -0.374986        -0.128679              0.107299       -0.647198                -0.100126       0
3       0.455144      2.077941        0.379512   0.486019        -0.966587         -1.447057       -0.438308            -0.480189       0.226570  ...         0.337722    0.483003         -0.785100          -1.301043        -0.483292             -0.722786        0.676588                -1.783846       0
4      -1.898537     -0.815757       -1.745528  -1.873415        -0.102067          0.599235        0.374346            -0.662103      -2.173761  ...        -1.869111   -2.095123         -0.633206          -0.305478        -0.485431             -1.278472       -2.898859                -0.273347       1
..           ...           ...             ...        ...              ...               ...             ...                  ...            ...  ...              ...         ...               ...                ...              ...                   ...             ...                      ...     ...
564     0.238929     -1.546154        0.209113   0.257899         0.214334         -0.482480       -0.225132             0.183841       0.996371  ...         0.346743    0.373205         -0.079012          -0.660736        -0.423384              0.029761        0.404215                -1.894769       1
565    -0.115233      0.675396       -0.105672  -0.125511         0.078814          0.213069        0.222118             0.375009      -0.177404  ...         0.194134    0.082260          0.804177           1.061384         0.714032              0.778530        1.315113                 0.913117       0
566     0.972621     -0.443853        0.950416   0.971288         0.335466          0.200161        0.804757             1.074782       0.080964  ...         0.880583    0.920102          0.443592           0.144776         0.561298              1.086695        0.527842                 0.020173       0
567     1.053489      0.446545        1.084407   1.040647         1.046541          1.237987        1.321388             1.410770       0.650180  ...         0.925288    1.016604          0.452080           0.855688         0.652219              0.657243       -0.735710                -0.260751       0
568     0.366875     -0.289945        0.346701   0.359700        -0.309357         -0.150999       -0.574459            -0.683107       0.375972  ...         0.207028    0.284140         -0.407994          -0.303600        -0.141124             -0.402554        1.196110                -0.638106       1

[569 rows x 31 columns]


>>> atom.plot_distribution(columns=0)

>>> from atom.data_cleaning import Normalizer
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> normalizer = Normalizer(verbose=2)
>>> X = normalizer.fit_transform(X)

Fitting Normalizer...
Normalizing features...


>>> print(X)

     mean radius  mean texture  mean perimeter  mean area  mean smoothness  mean compactness  mean concavity  mean concave points  mean symmetry  ...  worst texture  worst perimeter  worst area  worst smoothness  worst compactness  worst concavity  worst concave points  worst symmetry  worst fractal dimension
0       1.134881     -2.678666        1.259822   1.126421         1.504114          2.165938        1.862988             1.848558       1.953067  ...      -1.488367         1.810506    1.652210          1.282792           1.942737         1.730182              1.935654        2.197206                 1.723624
1       1.619346     -0.264377        1.528723   1.633946        -0.820227         -0.384102        0.291976             0.820609       0.102291  ...      -0.288382         1.430616    1.610022         -0.325080          -0.296580         0.070746              1.101594       -0.121997                 0.537179
2       1.464796      0.547806        1.454664   1.461645         0.963977          1.163977        1.403673             1.683104       0.985668  ...       0.071406         1.321941    1.425307          0.580301           1.209701         1.005512              1.722744        1.218181                 0.453955
3      -0.759262      0.357721       -0.514886  -0.836238         2.781494          2.197843        1.642391             1.423004       2.360528  ...       0.228089        -0.039480   -0.436860          2.857821           2.282276         1.675087              1.862378        3.250202                 2.517606
4       1.571260     -1.233520        1.583340   1.595120         0.343932          0.762392        1.407479             1.410929       0.090964  ...      -1.637882         1.316582    1.309486          0.284367          -0.131829         0.817474              0.807077       -0.943554                -0.279402
..           ...           ...             ...        ...              ...               ...             ...                  ...            ...  ...            ...              ...         ...               ...                ...              ...                   ...             ...                      ...
564     1.781795      0.785604        1.746492   1.823030         1.052829          0.460810        1.653784             1.783067      -0.232645  ...       0.212151         1.547961    1.657442          0.438013          -0.077871         0.859079              1.503734       -1.721528                -0.751459
565     1.543335      1.845150        1.485601   1.545430         0.168014          0.207602        0.984746             1.320730      -0.129120  ...       1.832201         1.365939    1.443167         -0.667317          -0.245277         0.480804              0.810995       -0.480093                -1.210527
566     0.828589      1.817618        0.811329   0.835270        -0.835509          0.183969        0.375105             0.396882      -0.808189  ...       1.320625         0.786129    0.796192         -0.799337           0.626487         0.566826              0.526136       -1.301164                -0.170872
567     1.624440      2.016299        1.702747   1.551036         1.468642          2.162820        1.994466             1.884414       1.899087  ...       1.968949         1.810506    1.513198          1.387135           2.284642         2.136932              1.931990        1.744693                 1.850944
568    -2.699432      1.203224       -2.827766  -2.703256        -3.834325         -1.481409       -1.658319            -1.845392      -0.821560  ...       0.810681        -2.231436   -2.149403         -2.064647          -1.731936        -1.819966             -2.131070        0.103122                -0.820663

[569 rows x 30 columns]


Methods

fit                       Fit to data.
fit_transform             Fit to data, then transform it.
get_feature_names_out     Get output feature names for transformation.
get_params                Get parameters for this estimator.
inverse_transform         Apply the inverse transformation to the data.
set_output                Set output container.
set_params                Set the parameters of this estimator.
transform                 Apply the transformations to the data.


method fit(X, y=None)[source]
Fit to data.

Parameters
X: dataframe-like
Feature set with shape=(n_samples, n_features).

y: sequence, dataframe-like or None, default=None
Do nothing. Implemented for continuity of the API.

Returns
Self
Estimator instance.



method fit_transform(X=None, y=None, **fit_params)[source]
Fit to data, then transform it.

Parameters
X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: sequence, dataframe-like or None, default=None
Target column(s) corresponding to X. If None, y is ignored.

**fit_params
Additional keyword arguments for the fit method.

Returns
dataframe
Transformed feature set. Only returned if provided.

series or dataframe
Transformed target column. Only returned if provided.



method get_feature_names_out(input_features=None)[source]
Get output feature names for transformation.

Parameters
input_features: array-like of str or None, default=None
Input features.

  • If input_features is None, then feature_names_in_ is used as feature names in. If feature_names_in_ is not defined, then the following input feature names are generated: ["x0", "x1", ..., "x(n_features_in_ - 1)"].
  • If input_features is an array-like, then input_features must match feature_names_in_ if feature_names_in_ is defined.

Returns
feature_names_out: ndarray of str objects
Same as input features.
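
A quick sketch: since the transformation neither creates nor drops columns, the output names match the input columns.

>>> from atom.data_cleaning import Normalizer
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> normalizer = Normalizer().fit(X)
>>> names = normalizer.get_feature_names_out()  # same names as X.columns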



method get_params(deep=True)[source]
Get parameters for this estimator.

Parameters
deep: bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params: dict
Parameter names mapped to their values.



method inverse_transform(X, y=None)[source]
Apply the inverse transformation to the data.

Parameters
X: dataframe-like
Feature set with shape=(n_samples, n_features).

y: sequence, dataframe-like or None, default=None
Do nothing. Implemented for continuity of the API.

Returns
dataframe
Original dataframe.
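
A minimal round-trip sketch with the default yeojohnson strategy.

>>> from atom.data_cleaning import Normalizer
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> normalizer = Normalizer(strategy="yeojohnson")
>>> X_t = normalizer.fit_transform(X)
>>> X_orig = normalizer.inverse_transform(X_t)  # back to the original scale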



method set_output(transform=None)[source]
Set output container.

See sklearn's user guide on how to use the set_output API. See here for a description of the choices.

Parameters
transform: str or None, default=None
Configure the output of the transform, fit_transform, and inverse_transform methods. If None, the configuration is not changed. Choose from:

  • "numpy"
  • "pandas" (default)
  • "pandas-pyarrow"
  • "polars"
  • "polars-lazy"
  • "pyarrow"
  • "modin"
  • "dask"
  • "pyspark"
  • "pyspark-pandas"

Returns
Self
Estimator instance.
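
For illustration, a sketch that switches the output container to numpy, following sklearn's set_output API.

>>> from atom.data_cleaning import Normalizer
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> normalizer = Normalizer().set_output(transform="numpy")
>>> X_t = normalizer.fit_transform(X)  # returned as a numpy array instead of a dataframe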



method set_params(**params)[source]
Set the parameters of this estimator.

Parameters
**params: dict
Estimator parameters.

Returns
self: estimator instance
Estimator instance.



method transform(X, y=None)[source]
Apply the transformations to the data.

Parameters
X: dataframe-like
Feature set with shape=(n_samples, n_features).

y: sequence, dataframe-like or None, default=None
Do nothing. Implemented for continuity of the API.

Returns
dataframe
Normalized dataframe.
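
As a sketch of typical usage, fit on a training split and transform unseen data; train_test_split is used here only to create two splits for illustration.

>>> from atom.data_cleaning import Normalizer
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.model_selection import train_test_split

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> X_train, X_test = train_test_split(X, random_state=1)

>>> normalizer = Normalizer().fit(X_train)   # learn the mapping on the training set
>>> X_test_t = normalizer.transform(X_test)  # apply it to unseen data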