FeatureGenerator
class atom.feature_engineering.FeatureGenerator(strategy="dfs", n_features=None, operators=None, n_jobs=1, verbose=0, logger=None, random_state=None, **kwargs)[source]
Generate new features.
Create new combinations of existing features to capture the non-linear relations between the original features.
This class can be accessed from atom through the feature_generation method. Read more in the user guide.
Warning
- Using the div, log or sqrt operators can return new features with
inf or NaN values. Check the warnings that may pop up or use atom's
nans attribute.
- When using dfs with n_jobs>1, make sure to protect your code with
if __name__ == "__main__" (see the sketch below). Featuretools uses
dask, which uses python multiprocessing for parallelization. The
spawn method on multiprocessing starts a new python process, which
requires it to import the __main__ module before it can do its task.
- gfg can be slow for very large populations.
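A minimal sketch of that safeguard, using only the parameters documented below; the dataset and parameter values are illustrative.

from atom.feature_engineering import FeatureGenerator
from sklearn.datasets import load_breast_cancer

# Guard the entry point so the worker processes spawned by
# featuretools (via dask's multiprocessing) can import __main__
# without re-executing the script.
if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    fg = FeatureGenerator(strategy="dfs", n_features=5, n_jobs=2)
    X_new = fg.fit_transform(X, y)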
Tip
dfs can create many new features and not all of them will be useful. Use the FeatureSelector class to reduce the number of features.
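For instance, a possible follow-up is to chain both transformers. This is only a sketch; the FeatureSelector strategy and n_features values shown are assumptions, not recommendations.

from atom.feature_engineering import FeatureGenerator, FeatureSelector
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

fg = FeatureGenerator(strategy="dfs", n_features=None)  # keep everything dfs creates
X_new = fg.fit_transform(X, y)

fs = FeatureSelector(strategy="pca", n_features=10)  # assumed FeatureSelector parameters
X_reduced = fs.fit_transform(X_new, y)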
Parameters | strategy: str, default="dfs"
Strategy to crate new features. Choose from:
n_features: int or None, default=None
Maximum number of newly generated features to add to the
dataset. If None, select all created features.
operators: str, sequence or None, default=None
Mathematical operators to apply on the features. None to use
all. Choose from: n_jobs: int, default=1add , sub , mul , div , abs , sqrt ,
log , inv , sin , cos , tan .
Number of cores to use for parallel processing.
verbose: int, default=0
Verbosity level of the class. Choose from:
logger: str, Logger or None, default=None
Seed used by the random number generator. If None, the random
number generator is the **kwargsRandomState used by np.random .
Additional keyword arguments for the SymbolicTransformer
instance. Only for the gfg strategy.
|
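As an illustration of the parameters above (a sketch; the operator subset and the SymbolicTransformer argument are assumptions, not recommendations):

from atom.feature_engineering import FeatureGenerator

# dfs restricted to a few operators, keeping at most 10 new features
dfs_gen = FeatureGenerator(
    strategy="dfs",
    n_features=10,
    operators=["add", "mul", "abs"],
    n_jobs=1,
    random_state=42,
)

# gfg, forwarding a keyword argument to the underlying SymbolicTransformer
gfg_gen = FeatureGenerator(
    strategy="gfg",
    n_features=5,
    population_size=500,  # assumed SymbolicTransformer argument passed through **kwargs
    random_state=42,
)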
Attributes | gfg: SymbolicTransformer
Object used to calculate the genetic features. Only for the
gfg strategy.
genetic_features: pd.DataFrame
Information on the newly created non-linear features. Only for
the gfg strategy. Columns include:
feature_names_in_: np.array
Names of features seen during fit.
n_features_in_: int
Number of features seen during fit.
|
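A brief sketch of inspecting these attributes after fitting with the gfg strategy (parameter values are illustrative):

from atom.feature_engineering import FeatureGenerator
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

fg = FeatureGenerator(strategy="gfg", n_features=5, random_state=1)
fg.fit(X, y)

print(fg.feature_names_in_)  # names of the features seen during fit
print(fg.n_features_in_)     # number of features seen during fit
print(fg.genetic_features)   # information on the created non-linear features
print(fg.gfg)                # the fitted SymbolicTransformer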
See Also
FeatureExtractor: Extract features from datetime columns.
FeatureGrouper: Extract statistics from similar features.
FeatureSelector: Reduce the number of features in the data.
Example
>>> from atom import ATOMClassifier
>>> from sklearn.datasets import load_breast_cancer
>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> atom = ATOMClassifier(X, y)
>>> atom.feature_generation(strategy="dfs", n_features=5, verbose=2)
Fitting FeatureGenerator...
Generating new features...
--> 5 new features were added.
>>> # Note the texture error / worst symmetry column
>>> print(atom.dataset)
mean radius mean texture ... texture error / worst symmetry target
0 15.75 19.22 ... 3.118963 0
1 12.10 17.72 ... 5.418170 1
2 20.16 19.66 ... 2.246481 0
3 12.88 18.22 ... 4.527498 1
4 13.03 18.42 ... 11.786613 1
.. ... ... ... ... ...
564 21.75 20.99 ... 4.772326 0
565 13.64 16.34 ... 3.936061 1
566 10.08 15.11 ... 4.323219 1
567 12.91 16.33 ... 3.004630 1
568 11.60 18.36 ... 2.385047 1
[569 rows x 36 columns]
>>> from atom.feature_engineering import FeatureGenerator
>>> from sklearn.datasets import load_breast_cancer
>>> X, y = load_breast_cancer(return_X_y=True, as_frame=True)
>>> fg = FeatureGenerator(strategy="dfs", n_features=5, verbose=2)
>>> X = fg.fit_transform(X, y)
Fitting FeatureGenerator...
Generating new features...
--> 5 new features were added.
>>> # Note the radius error * worst smoothness column
>>> print(X)
mean radius ... radius error * worst smoothness
0 17.99 ... 0.177609
1 20.57 ... 0.067285
2 19.69 ... 0.107665
3 11.42 ... 0.103977
4 20.29 ... 0.104039
.. ... ... ...
564 21.56 ... 0.165816
565 20.13 ... 0.089257
566 16.60 ... 0.051984
567 20.60 ... 0.119790
568 7.76 ... 0.034698
[569 rows x 35 columns]
Methods
fit | Fit to data. |
fit_transform | Fit to data, then transform it. |
get_params | Get parameters for this estimator. |
inverse_transform | Does nothing. |
log | Print message and save to log file. |
save | Save the instance to a pickle file. |
set_params | Set the parameters of this estimator. |
transform | Generate new features. |
method fit(X, y=None)[source]
Fit to data.
method fit_transform(X=None, y=None, **fit_params)[source]
Fit to data, then transform it.
method get_params(deep=True)[source]
Get parameters for this estimator.
Parameters | deep : bool, default=True
If True, will return the parameters for this estimator and
contained subobjects that are estimators.
|
Returns | params : dict
Parameter names mapped to their values.
|
method inverse_transform(X=None, y=None)[source]
Does nothing.
method log(msg, level=0, severity="info")[source]
Print message and save to log file.
method save(filename="auto", save_data=True)[source]
Save the instance to a pickle file.
Parameters | filename: str, default="auto"
Name of the file. Use "auto" for automatic naming.
save_data: bool, default=True
Whether to save the dataset with the instance. This parameter
is ignored if the method is not called from atom. If False,
add the data to the load method.
|
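A minimal sketch of saving and reloading the instance; the filename is illustrative, the .pkl extension is an assumption, and loading here uses the standard pickle module:

import pickle
from atom.feature_engineering import FeatureGenerator

fg = FeatureGenerator(strategy="dfs", n_features=5)
fg.save("feature_generator")  # assumption: written as feature_generator.pkl

with open("feature_generator.pkl", "rb") as f:
    fg_loaded = pickle.load(f)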
method set_params(**params)[source]
Set the parameters of this estimator.
Parameters | **params : dict
Estimator parameters.
|
Returns | self : estimator instance
Estimator instance.
|
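Both methods follow the scikit-learn estimator API, for example:

from atom.feature_engineering import FeatureGenerator

fg = FeatureGenerator(strategy="dfs", n_features=5)

params = fg.get_params()      # dict of parameter names mapped to their values
fg.set_params(n_features=10)  # update a parameter on the existing instance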
method transform(X, y=None)[source]
Generate new features.
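A sketch of the usual fit/transform split (the train/test split and parameter values are illustrative), applying the features learned on the training set to unseen data:

from atom.feature_engineering import FeatureGenerator
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

fg = FeatureGenerator(strategy="dfs", n_features=5)
fg.fit(X_train, y_train)           # learn the new feature combinations
X_train_new = fg.transform(X_train)
X_test_new = fg.transform(X_test)  # the same features are added to unseen data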