TextNormalizer
Normalize the corpus.
Convert words to a more uniform standard. The transformations are
applied on the column named corpus, in the same order the parameters
are presented. If there is no column with that name, an exception is
raised. If the provided documents are strings, words are separated by
spaces.
This class can be accessed from atom through the textnormalize method. Read more in the user guide.
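As a minimal sketch of the corpus requirement described above (the
dataframe and its contents are illustrative, not part of the API; only
the column name "corpus" matters):

>>> import pandas as pd
>>> from atom.nlp import TextNormalizer
>>> # The text must live in a column named "corpus"; any other name
>>> # raises an exception. Plain strings are split on spaces.
>>> X = pd.DataFrame({"corpus": ["New york is nice", "running the test"]})
>>> X = TextNormalizer().transform(X)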
Parameters
stopwords: bool or str, default=True
Whether to remove a predefined dictionary of stopwords. If a string is
given, it selects the language of the stopword set, e.g. "english".
custom_stopwords: sequence or None, default=None
Custom stopwords to remove from the text.
stem: bool or str, default=False
Whether to apply stemming using SnowballStemmer.
lemmatize: bool, default=True
Whether to apply lemmatization using WordNetLemmatizer.
verbose: int, default=0
Verbosity level of the class. Choose from: 0 to not print anything,
1 to print basic information, 2 to print detailed information.
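A hedged configuration sketch combining the parameters above; the
custom stopword and the stemmer language string are illustrative
assumptions, not library defaults:

>>> from atom.nlp import TextNormalizer
>>> normalizer = TextNormalizer(
...     stopwords="english",        # drop the English stopword set
...     custom_stopwords=["nice"],  # illustrative extra stopword
...     stem="english",             # assumed SnowballStemmer language
...     lemmatize=False,            # skip the WordNetLemmatizer step
... )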
Attributes
feature_names_in_: np.ndarray
Names of features seen during fit.
n_features_in_: int
Number of features seen during fit.
Example
>>> from atom import ATOMClassifier
>>> X = [
... ["I àm in ne'w york"],
... ["New york is nice"],
... ["new york"],
... ["hi there this is a test!"],
... ["another line..."],
... ["new york is larger than washington"],
... ["running the test"],
... ["this is a test"],
... ]
>>> y = [1, 0, 0, 1, 1, 1, 0, 0]
>>> atom = ATOMClassifier(X, y, test_size=2, random_state=1)
>>> print(atom.dataset)
corpus target
0 new york 0
1 another line... 1
2 New york is nice 0
3 new york is larger than washington 1
4 running the test 0
5 I àm in ne'w york 1
6 this is a test 0
7 hi there this is a test! 1
>>> atom.textnormalize(stopwords="english", lemmatize=True, verbose=2)
Fitting TextNormalizer...
Normalizing the corpus...
--> Dropping stopwords.
--> Applying lemmatization.
>>> print(atom.dataset)
corpus target
0 [new, york] 0
1 [another, line...] 1
2 [New, york, nice] 0
3 [new, york, large, washington] 1
4 [run, test] 0
5 [I, àm, ne'w, york] 1
6 [test] 0
7 [hi, test!] 1
>>> from atom.nlp import TextNormalizer
>>> X = [
... ["I àm in ne'w york"],
... ["New york is nice"],
... ["new york"],
... ["hi there this is a test!"],
... ["another line..."],
... ["new york is larger than washington"],
... ["running the test"],
... ["this is a test"],
... ]
>>> textnormalizer = TextNormalizer(
... stopwords="english",
... lemmatize=True,
... verbose=2,
... )
>>> X = textnormalizer.transform(X)
Normalizing the corpus...
--> Dropping stopwords.
--> Applying lemmatization.
>>> print(X)
corpus
0 [I, àm, ne'w, york]
1 [New, york, nice]
2 [new, york]
3 [hi, test!]
4 [another, line...]
5 [new, york, large, washington]
6 [run, test]
7 [test]
Methods
fit                      Do nothing.
fit_transform            Fit to data, then transform it.
get_feature_names_out    Get output feature names for transformation.
get_params               Get parameters for this estimator.
inverse_transform        Do nothing.
set_output               Set output container.
set_params               Set the parameters of this estimator.
transform                Normalize the text.
fit
Do nothing. Implemented for continuity of the API.
fit_transform
Fit to data, then transform it.
get_feature_names_out
Get output feature names for transformation.
get_params
Get parameters for this estimator.
Parameters
deep: bool, default=True
If True, will return the parameters for this estimator and
contained subobjects that are estimators.
Returns
params: dict
Parameter names mapped to their values.
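A quick usage sketch, reusing the textnormalizer instance from the
example above; the dict contents in the comment are abbreviated and
illustrative:

>>> # Returns the constructor parameters as a dict, e.g.
>>> # {'custom_stopwords': None, 'lemmatize': True, 'stopwords': 'english', ...}
>>> params = textnormalizer.get_params()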
inverse_transform
Do nothing.
Returns the input unchanged. Implemented for continuity of the API.
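A one-line sketch, again on the textnormalizer instance from the
example above:

>>> # No-op for API continuity: the input is returned unchanged.
>>> X_inv = textnormalizer.inverse_transform(X)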
set_output
Set output container.
See sklearn's user guide for how to use the set_output API and a
description of the available choices.
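A hedged sketch of the sklearn-style call, assuming the "pandas"
container option from sklearn's set_output API:

>>> # Ask transform to return a pandas DataFrame; set_output returns
>>> # the estimator itself, so the result is discarded here.
>>> _ = textnormalizer.set_output(transform="pandas")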
set_params
Set the parameters of this estimator.
Parameters
**params: dict
Estimator parameters.
Returns
self: estimator instance
Estimator instance.
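And the complementary setter, sketched on the same instance:

>>> # Update a parameter in place; set_params returns the estimator itself.
>>> _ = textnormalizer.set_params(stem=True)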
transform
Normalize the text.