FeatureExtractor


class atom.feature_engineering.FeatureExtractor(features=('day', 'month', 'year'), fmt=None, encoding_type="ordinal", drop_columns=True, verbose=0, logger=None)[source]
Extract features from datetime columns.

Create new features extracting datetime elements (day, month, year, etc.) from the provided columns. Columns of dtype datetime64 are used as is. Categorical columns that can be successfully converted to a datetime format (less than 30% NaT values after conversion) are also used.
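The conversion check can be pictured with plain pandas (an illustrative sketch, not the class's internal code):

>>> # Illustrative sketch of the rule above: a categorical column qualifies
>>> # when fewer than 30% of its values become NaT after conversion.
>>> import pandas as pd

>>> s = pd.Series(["2021-01-15", "2021-02-20", "2021-03-25", "not a date"])
>>> converted = pd.to_datetime(s, errors="coerce")
>>> converted.isna().mean() < 0.3  # True here, so the column would be used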

This class can be accessed from atom through the feature_extraction method. Read more in the user guide.

Warning

Decision-tree-based algorithms build their split rules on one feature at a time. This means they fail to correctly process cyclic features, since the sine/cosine components should be treated as a single coordinate system.

Parameters

features: str or sequence, default=("day", "month", "year")
Features to create from the datetime columns. Note that created features with zero variance (e.g. the feature hour in a column that only contains dates) are ignored. Allowed values are datetime attributes from pandas.Series.dt.

fmt: str, sequence or None, default=None
Format (strptime) of the categorical columns that need to be converted to datetime. If sequence, the n-th format corresponds to the n-th categorical column that can be successfully converted. If None, the format is inferred automatically from the first non-NaN value. Values that cannot be converted are returned as NaT.
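As an illustration of such a format string (a plain-pandas sketch, not the class's internal code), a "%d/%m/%Y" format parses day-first strings and turns anything else into NaT:

>>> import pandas as pd

>>> s = pd.Series(["31/12/2021", "01/06/2022", "unknown"])
>>> pd.to_datetime(s, format="%d/%m/%Y", errors="coerce")
0   2021-12-31
1   2022-06-01
2          NaT
dtype: datetime64[ns]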

encoding_type: str, default="ordinal"
Type of encoding to use. Choose from:

  • "ordinal": Encode features in increasing order.
  • "cyclic": Encode features using sine and cosine to capture their cyclic nature. This approach creates two columns for every feature. Non-cyclic features still use ordinal encoding.

drop_columns: bool, default=True
Whether to drop the original columns after transformation.

verbose: int, default=0
Verbosity level of the class. Choose from:

  • 0 to not print anything.
  • 1 to print basic information.
  • 2 to print detailed information.

logger: str, Logger or None, default=None

  • If None: Doesn't save a logging file.
  • If str: Name of the log file. Use "auto" for automatic naming.
  • Else: Python logging.Logger instance.

Attributes

feature_names_in_: np.array
Names of features seen during fit.

n_features_in_: int
Number of features seen during fit.


See Also

FeatureGenerator

Generate new features.

FeatureGrouper

Extract statistics from similar features.

FeatureSelector

Reduce the number of features in the data.


Example

>>> import pandas as pd
>>> from atom import ATOMClassifier
>>> from sklearn.datasets import load_breast_cancer

>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> X["date"] = pd.date_range(start="1/1/2018", periods=len(X))

>>> atom = ATOMClassifier(X, y)
>>> atom.feature_extraction(features=["day"], fmt="%d/%m/%Y", verbose=2)

Extracting datetime features...
 --> Extracting features from column date.
   --> Creating feature date_day.

>>> # Note the date_day column
>>> print(atom.dataset)

     mean radius  mean texture  ...  date_day  target
0         11.300         18.19  ...        31       1
1         16.460         20.11  ...        27       0
2         11.370         18.89  ...        17       1
3          8.598         20.98  ...         3       1
4         12.800         17.46  ...         2       1
..           ...           ...  ...       ...     ...
564       17.060         21.00  ...         2       0
565       11.940         20.76  ...        14       1
566       19.590         25.00  ...        28       0
567       12.360         18.54  ...        18       1
568       18.450         21.91  ...        15       0

[569 rows x 32 columns]
>>> import pandas as pd
>>> from atom.feature_engineering import FeatureExtractor
>>> from sklearn.datasets import load_breast_cancer

>>> X, _ = load_breast_cancer(return_X_y=True, as_frame=True)
>>> X["date"] = pd.date_range(start="1/1/2018", periods=len(X))

>>> fe = FeatureExtractor(features=["day"], fmt="%Y-%m-%d", verbose=2)
>>> X = fe.transform(X)

Extracting datetime features...
 --> Extracting features from column date.
   --> Creating feature date_day.

>>> # Note the date_day column
>>> print(X)

     mean radius  mean texture  ...  worst fractal dimension  date_day
0          17.99         10.38  ...                  0.11890         1
1          20.57         17.77  ...                  0.08902         2
2          19.69         21.25  ...                  0.08758         3
3          11.42         20.38  ...                  0.17300         4
4          20.29         14.34  ...                  0.07678         5
..           ...           ...  ...                      ...       ...
564        21.56         22.39  ...                  0.07115        19
565        20.13         28.25  ...                  0.06637        20
566        16.60         28.08  ...                  0.07820        21
567        20.60         29.33  ...                  0.12400        22
568         7.76         24.54  ...                  0.07039        23

[569 rows x 31 columns]


Methods

fit                  Does nothing.
fit_transform        Fit to data, then transform it.
get_params           Get parameters for this estimator.
inverse_transform    Does nothing.
log                  Print message and save to log file.
save                 Save the instance to a pickle file.
set_params           Set the parameters of this estimator.
transform            Extract the new features.


method fit(X=None, y=None, **fit_params)[source]
Does nothing.

Implemented for continuity of the API.

Parameters

X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: int, str, sequence, dataframe-like or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • If sequence: Target column with shape=(n_samples,) or sequence of column names or positions for multioutput tasks.
  • If dataframe-like: Target columns with shape=(n_samples, n_targets) for multioutput tasks.

**fit_params
Additional keyword arguments for the fit method.

Returns

self
Estimator instance.



method fit_transform(X=None, y=None, **fit_params)[source]
Fit to data, then transform it.

Parameters

X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: int, str, sequence, dataframe-like or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • If sequence: Target column with shape=(n_samples,) or sequence of column names or positions for multioutput tasks.
  • If dataframe-like: Target columns with shape=(n_samples, n_targets) for multioutput tasks.

**fit_params
Additional keyword arguments for the fit method.

Returns

dataframe
Transformed feature set. Only returned if provided.

series
Transformed target column. Only returned if provided.
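A brief sketch of the return behavior, assuming a feature set X and target y like those in the examples above:

>>> # Hypothetical sketch: what is returned depends on what is provided.
>>> fe = FeatureExtractor(features=["day"])
>>> Xt = fe.fit_transform(X)         # only X -> only the transformed dataframe
>>> Xt, yt = fe.fit_transform(X, y)  # X and y -> dataframe and series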



method get_params(deep=True)[source]
Get parameters for this estimator.

Parameters

deep: bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params: dict
Parameter names mapped to their values.



method inverse_transform(X=None, y=None)[source]
Does nothing.

Parameters

X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: int, str, sequence, dataframe-like or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • If sequence: Target column with shape=(n_samples,) or sequence of column names or positions for multioutput tasks.
  • If dataframe-like: Target columns with shape=(n_samples, n_targets) for multioutput tasks.

Returns

dataframe
Transformed feature set. Only returned if provided.

series
Transformed target column. Only returned if provided.



method log(msg, level=0, severity="info")[source]
Print message and save to log file.

Parameters

msg: int, float or str
Message to save to the logger and print to stdout.

level: int, default=0
Minimum verbosity level to print the message.

severity: str, default="info"
Severity level of the message. Choose from: debug, info, warning, error, critical.



method save(filename="auto", save_data=True)[source]
Save the instance to a pickle file.

Parameters

filename: str, default="auto"
Name of the file. Use "auto" for automatic naming.

save_data: bool, default=True
Whether to save the dataset with the instance. This parameter is ignored if the method is not called from atom. If False, the data has to be provided to the load method when the instance is reloaded.
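A minimal usage sketch (the filename is arbitrary):

>>> fe = FeatureExtractor(features=["day"])
>>> fe.save("feature_extractor")  # stores the instance as a pickle file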



method set_params(**params)[source]
Set the parameters of this estimator.

Parameters

**params: dict
Estimator parameters.

Returns

self: estimator instance
Estimator instance.



method transform(X, y=None)[source]
Extract the new features.

Parameters

X: dataframe-like
Feature set with shape=(n_samples, n_features).

y: int, str, sequence, dataframe-like or None, default=None
Does nothing. Implemented for continuity of the API.

Returns

dataframe
Transformed feature set.
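Finally, a sketch building on the standalone example above: with drop_columns=False the original column is kept alongside the extracted feature (the date_month name follows the date_day pattern shown earlier).

>>> # Hypothetical sketch reusing the imports from the standalone example.
>>> X2, _ = load_breast_cancer(return_X_y=True, as_frame=True)
>>> X2["date"] = pd.date_range(start="1/1/2018", periods=len(X2))

>>> fe = FeatureExtractor(features=["month"], drop_columns=False)
>>> X2 = fe.transform(X2)
>>> "date" in X2 and "date_month" in X2  # both columns are present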