Pruner
class atom.data_cleaning.Pruner(strategy="zscore", method="drop", max_sigma=3, include_target=False, device="cpu", engine="sklearn", verbose=0, logger=None, **kwargs)[source]
Prune outliers from the data.
Replace or remove outliers. The definition of an outlier depends on the selected strategy and can differ greatly between strategies. Categorical columns are ignored.
This class can be accessed from atom through the prune method. Read more in the user guide.
Info
The "sklearnex" and "cuml" engines are only supported for strategy="dbscan".
Parameters | strategy: str or sequence, default="zscore"
Strategy with which to select the outliers. If sequence of
strategies, only samples marked as outliers by all chosen
strategies are dropped. Choose from:
method: int, float or str, default="drop"
Method to apply to the outliers. Only the zscore strategy
accepts a method other than "drop". Choose from:
max_sigma: int or float, default=3
Maximum allowed number of standard deviations from the mean
of the column. Values that deviate further are considered
outliers. Only used when strategy="zscore".
include_target: bool, default=False
Whether to include the target column in the search for
outliers. This can be useful for regression tasks. Only
if strategy="zscore".
device: str, default="cpu"
Device on which to train the estimators. Use any string
that follows the SYCL_DEVICE_FILTER filter selector,
e.g. device="gpu" to use the GPU. Read more in the
user guide.
engine: str, default="sklearn"
Execution engine to use for the estimators. Refer to the
user guide for an explanation
regarding every choice. Choose from:
verbose: int, default=0
Verbosity level of the class. Choose from:
logger: str, Logger or None, default=None
**kwargs
Additional keyword arguments for the strategy estimator. If
sequence of strategies, the params should be provided in a dict
with the strategy's name as key.
|
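The z-score strategy and the multi-strategy behavior described above (only samples flagged by all chosen strategies are dropped) can be sketched with plain NumPy. This is a minimal illustration, not ATOM's implementation; `zscore_outliers` and the variable names are hypothetical, while `max_sigma` mirrors the parameter above:

```python
import numpy as np

def zscore_outliers(col, max_sigma=3):
    """Flag values more than max_sigma standard deviations from the mean."""
    z = (col - col.mean()) / col.std()
    return np.abs(z) > max_sigma

rng = np.random.default_rng(0)
col = rng.normal(size=1000)
col[0] = 10.0  # inject an obvious outlier

# Two "strategies" at different thresholds produce two boolean masks.
mask_a = zscore_outliers(col, max_sigma=3)
mask_b = zscore_outliers(col, max_sigma=2)

# With a sequence of strategies, only samples marked as outliers by
# ALL strategies are dropped: combine the masks with a logical AND.
combined = mask_a & mask_b
pruned = col[~combined]

# With a numeric method (zscore only), outliers are replaced instead
# of dropped, e.g. method=0 substitutes the flagged values with 0.
replaced = np.where(mask_a, 0.0, col)

print(len(pruned), replaced[0])
```

The AND-combination explains why adding strategies makes pruning more conservative: each extra strategy can only shrink the set of dropped rows.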
Attributes | [strategy]: sklearn estimator
Object used to prune the data, e.g. pruner.iforest for the
isolation forest strategy.
|
See Also
Balancer
Balance the number of samples per class in the target column.
Normalizer
Transform the data to follow a Normal/Gaussian distribution.
Scaler
Scale the data.
Example
>>> from atom import ATOMClassifier
>>> from sklearn.datasets import load_breast_cancer
>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> atom = ATOMClassifier(X, y)
>>> print(atom.dataset)
mean radius mean texture ... worst fractal dimension target
0 11.04 14.93 ... 0.07287 1
1 12.46 24.04 ... 0.20750 0
2 13.47 14.06 ... 0.09326 1
3 13.44 21.58 ... 0.07146 0
4 11.93 21.53 ... 0.08541 1
.. ... ... ... ... ...
564 14.54 27.54 ... 0.13410 0
565 18.66 17.12 ... 0.08456 0
566 10.95 21.35 ... 0.09606 0
567 17.01 20.26 ... 0.06469 0
568 12.40 17.68 ... 0.09359 1
[569 rows x 31 columns]
>>> atom.prune(strategy="iforest", verbose=2)
Pruning outliers...
--> Dropping 46 outliers.
>>> # Note the reduced number of rows
>>> print(atom.dataset)
mean radius mean texture ... worst fractal dimension target
0 11.04 14.93 ... 0.07287 1
1 13.47 14.06 ... 0.09326 1
2 13.44 21.58 ... 0.07146 0
3 11.93 21.53 ... 0.08541 1
4 13.21 25.25 ... 0.06788 1
.. ... ... ... ... ...
518 14.54 27.54 ... 0.13410 0
519 18.66 17.12 ... 0.08456 0
520 10.95 21.35 ... 0.09606 0
521 17.01 20.26 ... 0.06469 0
522 12.40 17.68 ... 0.09359 1
[523 rows x 31 columns]
>>> atom.plot_distribution(columns=0)
>>> from atom.data_cleaning import Pruner
>>> from sklearn.datasets import load_breast_cancer
>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> print(X)
mean radius mean texture ... worst symmetry worst fractal dimension
0 17.99 10.38 ... 0.4601 0.11890
1 20.57 17.77 ... 0.2750 0.08902
2 19.69 21.25 ... 0.3613 0.08758
3 11.42 20.38 ... 0.6638 0.17300
4 20.29 14.34 ... 0.2364 0.07678
.. ... ... ... ... ...
564 21.56 22.39 ... 0.2060 0.07115
565 20.13 28.25 ... 0.2572 0.06637
566 16.60 28.08 ... 0.2218 0.07820
567 20.60 29.33 ... 0.4087 0.12400
568 7.76 24.54 ... 0.2871 0.07039
[569 rows x 30 columns]
>>> pruner = Pruner(strategy="iforest", verbose=2)
>>> X = pruner.fit_transform(X)
Fitting Pruner...
Pruning outliers...
--> Dropping 74 outliers.
>>> # Note the reduced number of rows
>>> print(X)
mean radius mean texture ... worst symmetry worst fractal dimension
1 20.57 17.77 ... 0.2750 0.08902
2 19.69 21.25 ... 0.3613 0.08758
4 20.29 14.34 ... 0.2364 0.07678
5 12.45 15.70 ... 0.3985 0.12440
6 18.25 19.98 ... 0.3063 0.08368
.. ... ... ... ... ...
560 14.05 27.15 ... 0.2250 0.08321
563 20.92 25.09 ... 0.2929 0.09873
564 21.56 22.39 ... 0.2060 0.07115
565 20.13 28.25 ... 0.2572 0.06637
566 16.60 28.08 ... 0.2218 0.07820
[495 rows x 30 columns]
Methods
fit | Does nothing. |
fit_transform | Fit to data, then transform it. |
get_params | Get parameters for this estimator. |
inverse_transform | Does nothing. |
log | Print message and save to log file. |
save | Save the instance to a pickle file. |
set_params | Set the parameters of this estimator. |
transform | Apply the outlier strategy on the data. |
method fit(X=None, y=None, **fit_params)[source]
Does nothing.
Implemented for continuity of the API.
method fit_transform(X=None, y=None, **fit_params)[source]
Fit to data, then transform it.
method get_params(deep=True)[source]
Get parameters for this estimator.
Parameters | deep : bool, default=True
If True, will return the parameters for this estimator and
contained subobjects that are estimators.
|
Returns | params : dict
Parameter names mapped to their values.
|
method inverse_transform(X=None, y=None)[source]
Does nothing.
method log(msg, level=0, severity="info")[source]
Print message and save to log file.
method save(filename="auto", save_data=True)[source]
Save the instance to a pickle file.
Parameters | filename: str, default="auto"
Name of the file. Use "auto" for automatic naming.
save_data: bool, default=True
Whether to save the dataset with the instance. This parameter
is ignored if the method is not called from atom. If False,
the data should be provided to the load method when loading
the instance.
|
method set_params(**params)[source]
Set the parameters of this estimator.
Parameters | **params : dict
Estimator parameters.
|
Returns | self : estimator instance
Estimator instance.
|
method transform(X, y=None)[source]
Apply the outlier strategy on the data.
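As a rough illustration of what transform does for strategy="zscore" with method="drop", the sketch below drops every row in which any column deviates more than max_sigma standard deviations from its column mean. This is a hypothetical stand-in, not ATOM's code; `transform_drop` is an invented name:

```python
import numpy as np

def transform_drop(X, max_sigma=3):
    """Drop rows where any column deviates more than max_sigma
    standard deviations from its column mean."""
    z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
    keep = (z <= max_sigma).all(axis=1)  # row survives only if all columns pass
    return X[keep]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
X[5, 1] = 25.0  # plant an extreme value in row 5, column 1

X_pruned = transform_drop(X, max_sigma=3)
print(X.shape, X_pruned.shape)
```

Note that, unlike fit/transform pairs that learn statistics on training data, dropping rows is inherently a training-time operation, which is why Pruner's fit does nothing and the work happens in transform.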