FeatureSelector


class atom.feature_engineering.FeatureSelector(strategy=None, solver=None, n_features=None, min_repeated=2, max_repeated=1.0, max_correlation=1.0, n_jobs=1, device="cpu", engine="sklearn", backend="loky", verbose=0, logger=None, random_state=None, **kwargs)[source]
Reduce the number of features in the data.

Apply feature selection or dimensionality reduction, either to improve the estimators' accuracy or to boost their performance on very high-dimensional datasets. Additionally, remove multicollinear and low variance features.

This class can be accessed from atom through the feature_selection method. Read more in the user guide.

Warning

  • Ties between features with equal scores are broken in an unspecified way.
  • For strategy="rfecv", the n_features parameter is the minimum number of features to select, not the actual number of features that the transformer returns. It may very well be that it returns more!

Info

  • The "sklearnex" and "cuml" engines are only supported for strategy="pca" with dense datasets.
  • If strategy="pca" and the data is dense and unscaled, it's scaled to mean=0 and std=1 before fitting the PCA transformer.
  • If strategy="pca" and the provided data is sparse, the used estimator is TruncatedSVD, which works more efficiently with sparse matrices.

Tip

Use the plot_feature_importance method to examine how much a specific feature contributes to the final predictions. If the model doesn't have a feature_importances_ attribute, use plot_permutation_importance instead.

Parameters
strategy: str or None, default=None
Feature selection strategy to use. Choose from:

  • None: Do not perform any feature selection strategy.
  • "univariate": Univariate statistical F-test.
  • "pca": Principal Component Analysis.
  • "sfm": Select best features according to a model.
  • "sfs": Sequential Feature Selection.
  • "rfe": Recursive Feature Elimination.
  • "rfecv": RFE with cross-validated selection.
  • "pso": Particle Swarm Optimization.
  • "hho": Harris Hawks Optimization.
  • "gwo": Grey Wolf Optimization.
  • "dfo": Dragonfly Optimization.
  • "go": Genetic Optimization.

solver: str, estimator or None, default=None
Solver/estimator to use for the feature selection strategy. See the corresponding documentation for an extended description of the choices. If None, the default value is used (only if strategy="pca"). Choose from:

  • If strategy="univariate":

  • If strategy="pca":

    • If data is dense:

      • If engine="sklearn":

        • "auto" (default)
        • "full"
        • "arpack"
        • "randomized"
      • If engine="sklearnex":

        • "full" (default)
      • If engine="cuml":

        • "full" (default)
        • "jacobi"
    • If data is sparse:

      • "randomized" (default)
      • "arpack"
  • For the remaining strategies:
    The base estimator. For sfm, rfe and rfecv, it should have either a feature_importances_ or coef_ attribute after fitting. You can use one of the predefined models. Add _class or _reg after the model's name to specify a classification or regression task, e.g. solver="LGB_reg" (not necessary if called from atom). No default option.
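
A minimal sketch of passing an estimator as solver for strategy="sfm"; the chosen estimator, the n_features value and the dataset are illustrative, not prescribed defaults:

>>> from atom.feature_engineering import FeatureSelector
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.ensemble import RandomForestClassifier

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> # sfm keeps the features ranked highest by the estimator's feature_importances_
>>> fs = FeatureSelector(strategy="sfm", solver=RandomForestClassifier(random_state=1), n_features=8)
>>> X_t = fs.fit_transform(X, y)
>>> print(list(X_t.columns))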

n_features: int, float or None, default=None
Number of features to select.

  • If None: Select all features.
  • If <1: Fraction of the total features to select.
  • If >=1: Number of features to select.

If strategy="sfm" and the threshold parameter is not specified, the threshold is automatically set to -inf to select n_features number of features.

If strategy="rfecv", n_features is the minimum number of features to select.

This parameter is ignored if any of the following strategies is selected: pso, hho, gwo, dfo, go.
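
As a small illustration of the fraction vs. count convention (assuming a 30-feature input, as in the breast cancer example further below), these two selectors keep the same number of components:

>>> from atom.feature_engineering import FeatureSelector

>>> fs_frac = FeatureSelector(strategy="pca", n_features=0.5)   # fraction: half of the features
>>> fs_count = FeatureSelector(strategy="pca", n_features=15)   # absolute count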

min_repeated: int, float or None, default=2
Remove categorical features if there isn't any repeated value in at least min_repeated rows. The default is to keep all features with non-maximum variance, i.e., remove the features whose number of unique values equals the number of rows (usually the case for names, IDs, etc.).

  • If None: No check for minimum repetition.
  • If >1: Minimum repetition number.
  • If <=1: Minimum repetition fraction.

max_repeated: int, float or None, default=1.0
Remove categorical features with the same value in at least max_repeated rows. The default is to keep all features with non-zero variance, i.e. remove the features that have the same value in all samples.

  • If None: No check for maximum repetition.
  • If >1: Maximum number of repeated occurrences.
  • If <=1: Maximum fraction of repeated occurrences.

max_correlation: float or None, default=1.0
Minimum absolute Pearson correlation to identify correlated features. For each group, it removes all except the feature with the highest correlation to y (if provided, else it removes all but the first). The default value removes equal columns. If None, skip this step.
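
A hedged sketch of running only the pruning steps, without a selection strategy; the 0.98 threshold is illustrative:

>>> from atom.feature_engineering import FeatureSelector
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> # Features whose absolute Pearson correlation with another feature reaches 0.98
>>> # are grouped; only the one most correlated with y is kept
>>> fs = FeatureSelector(strategy=None, max_correlation=0.98)
>>> X_t = fs.fit_transform(X, y)
>>> print(fs.collinear)  # drop / corr_feature / corr_value per removed column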

n_jobs: int, default=1
Number of cores to use for parallel processing.

  • If >0: Number of cores to use.
  • If -1: Use all available cores.
  • If <-1: Use number of cores - 1 + n_jobs.

device: str, default="cpu"
Device on which to train the estimators. Use any string that follows the SYCL_DEVICE_FILTER filter selector, e.g. device="gpu" to use the GPU. Read more in the user guide.

engine: str, default="sklearn"
Execution engine to use for the estimators. Refer to the user guide for an explanation regarding every choice. Choose from:

  • "sklearn" (only if device="cpu")
  • "sklearnex"
  • "cuml" (only if device="gpu")

backend: str, default="loky"
Parallelization backend. Choose from:

  • "loky": Single-node, process-based parallelism.
  • "multiprocessing": Legacy single-node, process-based parallelism. Less robust than 'loky'.
  • "threading": Single-node, thread-based parallelism.
  • "ray": Multi-node, process-based parallelism.

verbose: int, default=0
Verbosity level of the class. Choose from:

  • 0 to not print anything.
  • 1 to print basic information.
  • 2 to print detailed information.

logger: str, Logger or None, default=None

  • If None: Doesn't save a logging file.
  • If str: Name of the log file. Use "auto" for automatic naming.
  • Else: Python logging.Logger instance.

random_state: int or None, default=None
Seed used by the random number generator. If None, the random number generator is the RandomState used by np.random.

**kwargs
Any extra keyword argument for the strategy estimator. See the corresponding documentation for the available options.
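
For example, a keyword that FeatureSelector itself doesn't define is forwarded to the strategy estimator. A sketch rather than a recommended setting: whiten=True (a parameter of scikit-learn's PCA) is simply passed through.

>>> from atom.feature_engineering import FeatureSelector

>>> # whiten is not a FeatureSelector parameter; it's forwarded to the PCA estimator
>>> fs = FeatureSelector(strategy="pca", n_features=5, whiten=True)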

Attributes
collinear: pd.DataFrame
Information on the removed collinear features. Columns include:

  • drop: Name of the dropped feature.
  • corr_feature: Names of the correlated features.
  • corr_value: Corresponding correlation coefficients.

[strategy]: sklearn transformer
Object used to transform the data, e.g. fs.pca for the pca strategy (see the sketch after this attribute list).

feature_names_in_: np.array
Names of features seen during fit.

n_features_in_: int
Number of features seen during fit.
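
A hedged sketch of inspecting these attributes; fs is assumed to be a selector fitted with strategy="pca", as in the example below:

>>> print(fs.n_features_in_)                 # number of input features seen during fit
>>> print(fs.feature_names_in_[:3])          # first few original column names
>>> print(fs.pca.explained_variance_ratio_)  # attribute of the fitted sklearn PCA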


See Also

FeatureExtractor

Extract features from datetime columns.

FeatureGenerator

Generate new features.

FeatureGrouper

Extract statistics from similar features.


Example

>>> from atom import ATOMClassifier
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> atom = ATOMClassifier(X, y)
>>> atom.feature_selection(strategy="pca", n_features=12, verbose=2)

Fitting FeatureSelector...
Performing feature selection ...
 --> Applying Principal Component Analysis...
   --> Scaling features...
   --> Keeping 12 components.
   --> Explained variance ratio: 0.97

>>> # Note that the column names changed
>>> print(atom.dataset)

         pca0      pca1      pca2  ...     pca10     pca11  target
0   -2.493723  3.082653  1.318595  ... -0.182142 -0.591784       1
1    4.596102 -0.876940 -0.380685  ...  0.224170  1.155544       0
2    0.955979 -2.141057 -1.677736  ...  0.306153  0.099138       0
3    3.221488  4.209911 -2.818757  ...  0.808883 -0.531868       0
4    1.038000  2.451758 -1.753683  ... -0.312883  0.862319       1
..        ...       ...       ...  ...       ...       ...     ...
564  3.414827 -3.757253 -1.012369  ...  0.387175  0.283633       0
565 -1.191561 -1.276069 -0.871712  ...  0.106362 -0.449361       1
566 -2.757000  0.411997 -1.321697  ...  0.185550 -0.025368       1
567 -3.252533  0.074827  0.549622  ...  0.693073 -0.058251       1
568  1.607258 -2.076465 -1.025986  ... -0.385542  0.103603       0
[569 rows x 13 columns]

>>> atom.plot_pca()

>>> from atom.feature_engineering import FeatureSelector
>>> from sklearn.datasets import load_breast_cancer

>>> X, _ = load_breast_cancer(return_X_y=True, as_frame=True)

>>> fs = FeatureSelector(strategy="pca", n_features=12, verbose=2)
>>> X = fs.fit_transform(X)

Fitting FeatureSelector...
Performing feature selection ...
 --> Applying Principal Component Analysis...
   --> Scaling features...
   --> Keeping 12 components.
   --> Explained variance ratio: 0.97

>>> # Note that the column names changed
>>> print(X)

          pca0       pca1      pca2  ...      pca9     pca10     pca11
0     9.192837   1.948583 -1.123166  ... -0.877402  0.262955 -0.859014
1     2.387802  -3.768172 -0.529293  ...  1.106995  0.813120  0.157923
2     5.733896  -1.075174 -0.551748  ...  0.454275 -0.605604  0.124387
3     7.122953  10.275589 -3.232790  ... -1.116975 -1.151514  1.011316
4     3.935302  -1.948072  1.389767  ...  0.377704  0.651360 -0.110515
..         ...        ...       ...  ...       ...       ...       ...
564   6.439315  -3.576817  2.459487  ...  0.256989 -0.062651  0.123342
565   3.793382  -3.584048  2.088476  ... -0.108632  0.244804  0.222753
566   1.256179  -1.902297  0.562731  ...  0.520877 -0.840512  0.096473
567  10.374794   1.672010 -1.877029  ... -0.089296 -0.178628 -0.697461
568  -5.475243  -0.670637  1.490443  ... -0.047726 -0.144094 -0.179496
[569 rows x 12 columns]


Methods

fit: Fit the feature selector to the data.
fit_transform: Fit to data, then transform it.
get_params: Get parameters for this estimator.
inverse_transform: Does nothing.
log: Print message and save to log file.
plot_components: Plot the explained variance ratio per component.
plot_pca: Plot the explained variance ratio vs number of components.
plot_rfecv: Plot the rfecv results.
reset_aesthetics: Reset the plot aesthetics to their default values.
save: Save the instance to a pickle file.
set_params: Set the parameters of this estimator.
transform: Transform the data.
update_layout: Update the properties of the plot's layout.


method fit(X, y=None)[source]
Fit the feature selector to the data.

The univariate, sfm (when the model is not fitted), sfs, rfe and rfecv strategies need a target column. Leaving it None raises an exception.

Parameters
X: dataframe-like
Feature set with shape=(n_samples, n_features).

y: int, str, sequence, dataframe-like or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • If sequence: Target column with shape=(n_samples,) or sequence of column names or positions for multioutput tasks.
  • If dataframe-like: Target columns with shape=(n_samples, n_targets) for multioutput tasks.

Returns
self
Estimator instance.
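
A minimal sketch of fitting a strategy that requires the target column; the solver string "f_classif" (scikit-learn's ANOVA F-test) is an assumption for the univariate strategy:

>>> from atom.feature_engineering import FeatureSelector
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> # The univariate strategy scores each feature against y, so the target is required
>>> fs = FeatureSelector(strategy="univariate", solver="f_classif", n_features=10)
>>> fs.fit(X, y)  # omitting y here would raise an exception
>>> X_t = fs.transform(X)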



method fit_transform(X=None, y=None, **fit_params)[source]
Fit to data, then transform it.

Parameters
X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: int, str, sequence, dataframe-like or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • If sequence: Target column with shape=(n_samples,) or sequence of column names or positions for multioutput tasks.
  • If dataframe-like: Target columns with shape=(n_samples, n_targets) for multioutput tasks.

**fit_params
Additional keyword arguments for the fit method.

Returns
dataframe
Transformed feature set. Only returned if X is provided.

series
Transformed target column. Only returned if y is provided.



method get_params(deep=True)[source]
Get parameters for this estimator.

Parameters
deep: bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params: dict
Parameter names mapped to their values.



method inverse_transform(X=None, y=None)[source]
Does nothing.

Parameters
X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: int, str, sequence, dataframe-like or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • If sequence: Target column with shape=(n_samples,) or sequence of column names or positions for multioutput tasks.
  • If dataframe-like: Target columns with shape=(n_samples, n_targets) for multioutput tasks.

Returns
dataframe
Transformed feature set. Only returned if X is provided.

series
Transformed target column. Only returned if y is provided.



method log(msg, level=0, severity="info")[source]
Print message and save to log file.

Parameters
msg: int, float or str
Message to save to the logger and print to stdout.

level: int, default=0
Minimum verbosity level to print the message.

severity: str, default="info"
Severity level of the message. Choose from: debug, info, warning, error, critical.



method plot_components(show=None, title=None, legend="lower right", figsize=None, filename=None, display=True)[source]
Plot the explained variance ratio per component.

Kept components are colored and discarded components are transparent. This plot is available only when feature selection was applied with strategy="pca".

Parameters
show: int or None, default=None
Number of components to show. None to show all.

title: str, dict or None, default=None
Title for the plot.

legend: str, dict or None, default="lower right"
Legend for the plot. See the user guide for an extended description of the choices.

  • If None: No legend is shown.
  • If str: Location where to show the legend.
  • If dict: Legend configuration.

figsize: tuple or None, default=None
Figure's size in pixels, format as (x, y). If None, it adapts the size to the number of components shown.

filename: str or None, default=None
Save the plot using this name. Use "auto" for automatic naming. The type of the file depends on the provided name (.html, .png, .pdf, etc.). If filename has no file type, the plot is saved as html. If None, the plot is not saved.

display: bool or None, default=True
Whether to render the plot. If None, it returns the figure.

Returns
go.Figure or None
Plot object. Only returned if display=None.
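
A small, hedged usage sketch; fs is the pca-fitted selector from the example above, and display=None returns the plotly figure, so its own export methods apply:

>>> fig = fs.plot_components(show=10, display=None)
>>> fig.write_html("components.html")  # go.Figure, so plotly export methods are available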



method plot_pca(title=None, legend=None, figsize=(900, 600), filename=None, display=True)[source]
Plot the explained variance ratio vs number of components.

If the underlying estimator is PCA (for dense datasets), all possible components are plotted. If the underlying estimator is TruncatedSVD (for sparse datasets), it only shows the selected components. The star marks the number of components selected by the user. This plot is available only when feature selection was applied with strategy="pca".

Parameters
title: str, dict or None, default=None
Title for the plot.

legend: str, dict or None, default=None
Does nothing. Implemented for continuity of the API.

figsize: tuple, default=(900, 600)
Figure's size in pixels, format as (x, y).

filename: str or None, default=None
Save the plot using this name. Use "auto" for automatic naming. The type of the file depends on the provided name (.html, .png, .pdf, etc.). If filename has no file type, the plot is saved as html. If None, the plot is not saved.

display: bool or None, default=True
Whether to render the plot. If None, it returns the figure.

Returns
go.Figure or None
Plot object. Only returned if display=None.



method plot_rfecv(title=None, legend=None, figsize=(900, 600), filename=None, display=True)[source]
Plot the rfecv results.

Plot the scores obtained by the estimator fitted on every subset of the dataset. Only available when feature selection was applied with strategy="rfecv".

Parameters
title: str, dict or None, default=None
Title for the plot.

legend: str, dict or None, default=None
Legend for the plot. See the user guide for an extended description of the choices.

  • If None: No legend is shown.
  • If str: Location where to show the legend.
  • If dict: Legend configuration.

figsize: tuple, default=(900, 600)
Figure's size in pixels, format as (x, y).

filename: str or None, default=None
Save the plot using this name. Use "auto" for automatic naming. The type of the file depends on the provided name (.html, .png, .pdf, etc.). If filename has no file type, the plot is saved as html. If None, the plot is not saved.

display: bool or None, default=True
Whether to render the plot. If None, it returns the figure.

Returns
go.Figure or None
Plot object. Only returned if display=None.
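
A hedged sketch of producing this plot from atom; "RF" is assumed to be the predefined random forest model acronym:

>>> from atom import ATOMClassifier
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> atom = ATOMClassifier(X, y)
>>> atom.feature_selection(strategy="rfecv", solver="RF", verbose=2)
>>> atom.plot_rfecv()  # cross-validated score per number of selected features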



method reset_aesthetics()[source]
Reset the plot aesthetics to their default values.



method save(filename="auto", save_data=True)[source]
Save the instance to a pickle file.

Parameters
filename: str, default="auto"
Name of the file. Use "auto" for automatic naming.

save_data: bool, default=True
Whether to save the dataset with the instance. This parameter is ignored if the method is not called from atom. If False, provide the data again to the load method when reloading the instance.



method set_params(**params)[source]
Set the parameters of this estimator.

Parameters
**params: dict
Estimator parameters.

Returns
self: estimator instance
Estimator instance.



method transform(X, y=None)[source]
Transform the data.

Parameters
X: dataframe-like
Feature set with shape=(n_samples, n_features).

y: int, str, sequence, dataframe-like or None, default=None
Does nothing. Implemented for continuity of the API.

Returns
dataframe
Transformed feature set.
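
A minimal sketch of applying a fitted selector to unseen data; the train/holdout split is illustrative:

>>> from atom.feature_engineering import FeatureSelector
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.model_selection import train_test_split

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> X_train, X_test, y_train, _ = train_test_split(X, y, random_state=1)

>>> # Fit on the training split only, then apply the same projection to the holdout
>>> fs = FeatureSelector(strategy="pca", n_features=5)
>>> fs.fit(X_train, y_train)
>>> print(fs.transform(X_test).shape)  # (n_test_samples, 5)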



method update_layout(dict1=None, overwrite=False, **kwargs)[source]
Update the properties of the plot's layout.

This recursively updates the structure of the original layout with the values in the input dict / keyword arguments.

Parameters
dict1: dict or None, default=None
Dictionary of properties to be updated.

overwrite: bool, default=False
If True, overwrite existing properties. If False, apply updates to existing properties recursively, preserving existing properties that are not specified in the update operation.

**kwargs
Keyword/value pair of properties to be updated.
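
A small, hedged styling sketch; fs is a fitted selector, the keyword arguments follow plotly's figure layout schema, and the changes are assumed to apply to plots rendered afterwards by this instance:

>>> fs.update_layout(template="plotly_white", title=dict(x=0.5))
>>> fs.plot_pca()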