FeatureGrouper


class atom.feature_engineering.FeatureGrouper(group, name=None, operators=None, drop_columns=True, verbose=0, logger=None)[source]
Extract statistics from similar features.

Replace groups of features that share related characteristics with new features that summarize the statistical properties of the group. The statistical operators are calculated over every row of the group. The group names and features can be accessed through the groups attribute.

This class can be accessed from atom through the feature_grouping method. Read more in the user guide.

Tip

Use a regex pattern with the group parameter to select groups more easily, e.g. atom.feature_grouping(group="var_.+") to select all features that start with var_.
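
As a quick illustration of the tip, a minimal sketch mirroring the breast cancer example further down (whose feature names start with mean, worst, etc.):

>>> from atom import ATOMClassifier
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> atom = ATOMClassifier(X, y)

>>> # One regex selects every feature starting with "mean"; the matched
>>> # columns are replaced by their default statistical summaries.
>>> atom.feature_grouping(group="mean.+", name="means")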

Parameters
group: str, slice or sequence
Features that belong to a group. Select them by name, position or regex pattern. A feature can belong to multiple groups. Use a sequence of sequences to define multiple groups.

name: str, sequence or None, default=None
Name of the group. The new features are named by combining the operator used and the group's name, e.g. mean(group_1). If specified, the length should match the number of groups defined in group. If None, default group names of the form group1, group2, etc. are used.

operators: str, sequence or None, default=None
Statistical operators to apply on the groups. Any operator from numpy or scipy.stats (checked in that order) that can be applied on an array can be used. If None, it uses: min, max, mean, median, mode and std. See the sketch after the attributes list for an example with custom operators.

drop_columns: bool, default=True
Whether to drop the columns in groups after transformation.

verbose: int, default=0
Verbosity level of the class. Choose from:

  • 0 to not print anything.
  • 1 to print basic information.
  • 2 to print detailed information.

logger: str, Logger or None, default=None

  • If None: Doesn't save a logging file.
  • If str: Name of the log file. Use "auto" for automatic naming.
  • Else: Python logging.Logger instance.

Attributes
groups: dict
Names and features of every created group.

feature_names_in_: np.array
Names of features seen during fit.

n_features_in_: int
Number of features seen during fit.
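
A hedged sketch of how these parameters and attributes interact when the class is used stand-alone. The operators chosen here (min, max and skew) are illustrative and assume the numpy-then-scipy.stats lookup described above:

>>> from atom.feature_engineering import FeatureGrouper
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> # "min" and "max" resolve to numpy; "skew" falls through to scipy.stats.
>>> fg = FeatureGrouper(group="mean.+", name="means", operators=["min", "max", "skew"])
>>> X_new = fg.transform(X)

>>> # The groups attribute maps each group name to its member features, and the
>>> # new columns follow the operator(group) naming convention, e.g. min(means),
>>> # max(means), skew(means).
>>> print(fg.groups)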


See Also

FeatureExtractor

Extract features from datetime columns.

FeatureGenerator

Generate new features.

FeatureSelector

Reduce the number of features in the data.


Example

>>> from atom import ATOMClassifier
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> atom = ATOMClassifier(X, y)
>>> atom.feature_grouping(group=["mean.+"], name="means", verbose=2)

Fitting FeatureGrouper...
Grouping features...
 --> Group means successfully created.

>>> # Note that the mean features are gone and new features like std(means) were added
>>> print(atom.dataset)

     radius error  texture error  ...  std(means)  target
0          0.2949         1.6560  ...  137.553584       1
1          0.2351         2.0110  ...   79.830195       1
2          0.4302         2.8780  ...   80.330330       1
3          0.2345         1.2190  ...  151.858455       1
4          0.3511         0.9527  ...  145.769474       1
..            ...            ...  ...         ...     ...
564        0.4866         1.9050  ...  116.749243       1
565        0.5925         0.6863  ...  378.431333       0
566        0.2577         1.0950  ...  141.220243       1
567        0.4615         0.9197  ...  257.903846       0
568        0.5462         1.5110  ...  194.704033       1

[569 rows x 27 columns]

>>> from atom.feature_engineering import FeatureGrouper
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> # Group all features that start with mean
>>> fg = FeatureGrouper(group="mean.+", name="means", verbose=2)
>>> X = fg.transform(X)

Fitting FeatureGrouper...
Grouping features...
 --> Group means successfully created.

>>> # Note that the mean features are gone and new features like mode(means) and std(means) were added
>>> print(X)

     radius error  texture error  ...  mode(means)  std(means)
0          1.0950         0.9053  ...      0.07871  297.404540
1          0.5435         0.7339  ...      0.05667  393.997131
2          0.7456         0.7869  ...      0.05999  357.203084
3          0.4956         1.1560  ...      0.09744  114.444620
4          0.7572         0.7813  ...      0.05883  385.450556
..            ...            ...  ...          ...         ...
564        1.1760         1.2560  ...      0.05623  439.441252
565        0.7655         2.4630  ...      0.05533  374.274845
566        0.4564         1.0750  ...      0.05302  254.320568
567        0.7260         1.5950  ...      0.07016  375.376476
568        0.3857         1.4280  ...      0.00000   53.739926

[569 rows x 26 columns]


Methods

fit                Does nothing.
fit_transform      Fit to data, then transform it.
get_params         Get parameters for this estimator.
inverse_transform  Does nothing.
log                Print message and save to log file.
save               Save the instance to a pickle file.
set_params         Set the parameters of this estimator.
transform          Group features.


method fit(X=None, y=None, **fit_params)[source]
Does nothing.

Implemented for continuity of the API.

Parameters
X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: int, str, dict, sequence or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • Else: Array with shape=(n_samples,) to use as target.

**fit_params
Additional keyword arguments for the fit method.

Returns
self
Estimator instance.



method fit_transform(X=None, y=None, **fit_params)[source]
Fit to data, then transform it.

Parameters
X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: int, str, dict, sequence or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • Else: Array with shape=(n_samples,) to use as target.

**fit_params
Additional keyword arguments for the fit method.

Returns
pd.DataFrame
Transformed feature set. Only returned if provided.

pd.Series
Transformed target column. Only returned if provided.



method get_params(deep=True)[source]
Get parameters for this estimator.

Parameters
deep: bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params: dict
Parameter names mapped to their values.



method inverse_transform(X=None, y=None)[source]
Does nothing.

Parameters
X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: int, str, dict, sequence or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • Else: Array with shape=(n_samples,) to use as target.

Returns
pd.DataFrame
Transformed feature set. Only returned if provided.

pd.Series
Transformed target column. Only returned if provided.



method log(msg, level=0, severity="info")[source]
Print message and save to log file.

Parameters
msg: int, float or str
Message to save to the logger and print to stdout.

level: int, default=0
Minimum verbosity level to print the message.

severity: str, default="info"
Severity level of the message. Choose from: debug, info, warning, error, critical.



method save(filename="auto", save_data=True)[source]
Save the instance to a pickle file.

Parameters
filename: str, default="auto"
Name of the file. Use "auto" for automatic naming.

save_data: bool, default=True
Whether to save the dataset with the instance. This parameter is ignored if the method is not called from atom. If False, remember to add the data to ATOMLoader when loading the file.



method set_params(**params)[source]
Set the parameters of this estimator.

Parameters
**params: dict
Estimator parameters.

Returns
self: estimator instance
Estimator instance.
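
Both get_params and set_params follow the usual scikit-learn estimator convention; a minimal sketch, assuming the standard BaseEstimator behavior:

>>> from atom.feature_engineering import FeatureGrouper

>>> fg = FeatureGrouper(group="mean.+", name="means")

>>> # All constructor parameters are exposed as a dict ...
>>> fg.get_params()["drop_columns"]
True

>>> # ... and can be updated in place; set_params returns the estimator itself.
>>> fg = fg.set_params(drop_columns=False, operators=["mean", "std"])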



method transform(X, y=None)[source]
Group features.

Parameters
X: dataframe-like
Feature set with shape=(n_samples, n_features).

y: int, str, dict, sequence or None, default=None
Does nothing. Implemented for continuity of the API.

Returns
pd.DataFrame
Transformed feature set.
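
To round off, a hedged sketch of transform with drop_columns=False, assuming (per the parameter description above) that the original group members are then kept alongside the new summary columns:

>>> from atom.feature_engineering import FeatureGrouper
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)

>>> # Keep the original "mean ..." columns next to the new statistics.
>>> fg = FeatureGrouper(group="mean.+", name="means", drop_columns=False)
>>> X_new = fg.transform(X)

>>> "mean radius" in X_new.columns and "std(means)" in X_new.columns
True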