TextCleaner


class atom.nlp.TextCleaner(decode=True, lower_case=True, drop_email=True, regex_email=None, drop_url=True, regex_url=None, drop_html=True, regex_html=None, drop_emoji=True, regex_emoji=None, drop_number=True, regex_number=None, drop_punctuation=True, verbose=0)[source]

Applies standard text cleaning to the corpus.

Transformations include normalizing characters and dropping noise from the text (emails, HTML tags, URLs, etc.). The transformations are applied to the column named corpus, in the same order the parameters are presented. If there is no column with that name, an exception is raised.

This class can be accessed from atom through the textclean method. Read more in the user guide.

Parameters decode: bool, default=True
Whether to decode unicode characters to their ascii representations.
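As a rough illustration (not ATOM's actual implementation), decoding unicode characters to their ascii representations can be approximated with the standard library's unicodedata module:

```python
import unicodedata

def to_ascii(text):
    """Decompose accented characters (NFKD) and drop the non-ascii marks."""
    return unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")

print(to_ascii("café naïve"))  # cafe naive
```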

lower_case: bool, default=True
Whether to convert all characters to lower case.

drop_email: bool, default=True
Whether to drop email addresses from the text.

regex_email: str, default=None
Regex used to search for email addresses. If None, it uses r"[\w.-]+@[\w-]+\.[\w.-]+".

drop_url: bool, default=True
Whether to drop URL links from the text.

regex_url: str, default=None
Regex used to search for URLs. If None, it uses r"https?://\S+|www\.\S+".

drop_html: bool, default=True
Whether to drop HTML tags from the text. This option is particularly useful if the data was scraped from a website.

regex_html: str, default=None
Regex used to search for HTML tags. If None, it uses r"<.*?>".

drop_emoji: bool, default=True
Whether to drop emojis from the text.

regex_emoji: str, default=None
Regex used to search for emojis. If None, it uses r":[a-z_]+:".

drop_number: bool, default=True
Whether to drop numbers from the text.

regex_number: str, default=None
Regex used to search for numbers. If None, it uses r"\b\d+\b". Note that numbers adjacent to letters are not removed.
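A rough sketch of how the default patterns quoted above behave when applied with re.sub (an illustration of the regexes, not ATOM's internal code):

```python
import re

# Default patterns from the parameter descriptions above
patterns = [
    r"[\w.-]+@[\w-]+\.[\w.-]+",  # email addresses
    r"https?://\S+|www\.\S+",    # URL links
    r"<.*?>",                    # HTML tags
    r":[a-z_]+:",                # emoji shortcodes
    r"\b\d+\b",                  # standalone numbers
]

doc = "Mail me@example.org or see https://example.org <b>now</b> :smile: 42 x11"
for pattern in patterns:
    doc = re.sub(pattern, "", doc)

print(doc)  # the number adjacent to a letter (x11) survives
```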

drop_punctuation: bool, default=True
Whether to drop punctuations from the text. Characters considered punctuation are !"#$%&'()*+,-./:;<=>?@[\]^_~`.
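Dropping that character set can be sketched with str.translate (an approximation for illustration; ATOM's internal implementation may differ):

```python
# The punctuation characters listed above
punctuation = "!\"#$%&'()*+,-./:;<=>?@[\\]^_~`"
table = str.maketrans("", "", punctuation)

print("Hello, world! (It's a test.)".translate(table))  # Hello world Its a test
```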

verbose: int, default=0
Verbosity level of the class. Choose from:

  • 0 to not print anything.
  • 1 to print basic information.
  • 2 to print detailed information.


See Also

TextNormalizer

Normalize the corpus.

Tokenizer

Tokenize the corpus.

Vectorizer

Vectorize text data.


Example

>>> import numpy as np
>>> from atom import ATOMClassifier
>>> from sklearn.datasets import fetch_20newsgroups

>>> X, y = fetch_20newsgroups(
...     return_X_y=True,
...     categories=["alt.atheism", "sci.med", "comp.windows.x"],
...     shuffle=True,
...     random_state=1,
... )
>>> X = np.array(X).reshape(-1, 1)

>>> atom = ATOMClassifier(X, y, random_state=1)
>>> print(atom.dataset)

                                                 corpus  target
0     From: fabian@vivian.w.open.de (Fabian Hoppe)\n...       1
1     From: nyeda@cnsvax.uwec.edu (David Nye)\nSubje...       0
2     From: urathi@net4.ICS.UCI.EDU (Unmesh Rathi)\n...       1
3     From: inoue@crd.yokogawa.co.jp (Inoue Takeshi)...       1
4     From: sandvik@newton.apple.com (Kent Sandvik)\...       0
...                                                 ...     ...
1662  From: kutluk@ccl.umist.ac.uk (Kutluk Ozguven)\...       0
1663  From: dmp1@ukc.ac.uk (D.M.Procida)\nSubject: R...       2
1664  From: tdunbar@vtaix.cc.vt.edu (Thomas Dunbar)\...       1
1665  From: dmp@fig.citib.com (Donna M. Paino)\nSubj...       2
1666  From: cdm@pmafire.inel.gov (Dale Cook)\nSubjec...       2

[1667 rows x 2 columns]

>>> atom.textclean(verbose=2)

Fitting TextCleaner...
Cleaning the corpus...
 --> Decoding unicode characters to ascii.
 --> Converting text to lower case.
 --> Dropping emails from documents.
 --> Dropping URL links from documents.
 --> Dropping HTML tags from documents.
 --> Dropping emojis from documents.
 --> Dropping numbers from documents.
 --> Dropping punctuation from the text.

>>> print(atom.dataset)

                                                 corpus  target
0     from  fabian hoppe\nsubject searching cadsoftw...       1
1     from  david nye\nsubject re after  years can w...       0
2     from  unmesh rathi\nsubject motif and intervie...       1
3     from  inoue takeshi\nsubject how to see charac...       1
4     from  kent sandvik\nsubject re slavery was re ...       0
...                                                 ...     ...
1662  from  kutluk ozguven\nsubject re jewish settle...       0
1663  from  dmprocida\nsubject re homeopathy a respe...       2
1664  from  thomas dunbar\nsubject re x toolkits\nsu...       1
1665  from  donna m paino\nsubject psoriatic arthrit...       2
1666  from  dale cook\nsubject re morbus meniere  is...       2

[1667 rows x 2 columns]
>>> import numpy as np
>>> from atom.nlp import TextCleaner
>>> from sklearn.datasets import fetch_20newsgroups

>>> X, y = fetch_20newsgroups(
...     return_X_y=True,
...     categories=["alt.atheism", "sci.med", "comp.windows.x"],
...     shuffle=True,
...     random_state=1,
... )
>>> X = np.array(X).reshape(-1, 1)

>>> textcleaner = TextCleaner(verbose=2)
>>> X = textcleaner.transform(X)

Cleaning the corpus...
 --> Decoding unicode characters to ascii.
 --> Converting text to lower case.
 --> Dropping emails from documents.
 --> Dropping URL links from documents.
 --> Dropping HTML tags from documents.
 --> Dropping emojis from documents.
 --> Dropping numbers from documents.
 --> Dropping punctuation from the text.

>>> print(X)

                                                 corpus
0     from  mark a deloura\nsubject looking for x wi...
1     from  der mouse\nsubject re creating  bit wind...
2     from  keith m ryan\nsubject re where are they ...
3     from  steven grimm\nsubject re opinions on all...
4     from  peter kaminski\nsubject re krillean phot...
...                                                 ...
1662  from donald mackie \nsubject re seeking advice...
1663  from  gordon banks\nsubject re update help was...
1664  from  keith m ryan\nsubject re political athei...
1665  from  benedikt rosenau\nsubject re biblical ra...
1666  from derrick j brashear \nsubject mouseless op...

[1667 rows x 1 columns]


Methods

  • fit: Do nothing.
  • fit_transform: Fit to data, then transform it.
  • get_feature_names_out: Get output feature names for transformation.
  • get_params: Get parameters for this estimator.
  • inverse_transform: Do nothing.
  • set_output: Set output container.
  • set_params: Set the parameters of this estimator.
  • transform: Apply the transformations to the data.


method fit(X=None, y=None, **fit_params)[source]

Do nothing.

Implemented for continuity of the API.

Parameters X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: sequence, dataframe-like or None, default=None
Target column(s) corresponding to X. If None, y is ignored.

**fit_params
Additional keyword arguments for the fit method.

Returns self
Estimator instance.



method fit_transform(X=None, y=None, **fit_params)[source]

Fit to data, then transform it.

Parameters X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: sequence, dataframe-like or None, default=None
Target column(s) corresponding to X. If None, y is ignored.

**fit_params
Additional keyword arguments for the fit method.

Returns dataframe
Transformed feature set. Only returned if provided.

series or dataframe
Transformed target column. Only returned if provided.



method get_feature_names_out(input_features=None)[source]

Get output feature names for transformation.

Parameters input_features : array-like of str or None, default=None
Input features.

  • If input_features is None, then feature_names_in_ is used as feature names in. If feature_names_in_ is not defined, then the following input feature names are generated: ["x0", "x1", ..., "x(n_features_in_ - 1)"].
  • If input_features is an array-like, then input_features must match feature_names_in_ if feature_names_in_ is defined.

Returns feature_names_out : ndarray of str objects
Same as input features.



method get_params(deep=True)[source]

Get parameters for this estimator.

Parameters deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns params : dict
Parameter names mapped to their values.



method inverse_transform(X=None, y=None, **fit_params)[source]

Do nothing.

Returns the input unchanged. Implemented for continuity of the API.

Parameters X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: sequence, dataframe-like or None, default=None
Target column(s) corresponding to X. If None, y is ignored.

Returns dataframe
Feature set. Only returned if provided.

series or dataframe
Target column(s). Only returned if provided.



method set_output(transform=None)[source]

Set output container.

See sklearn's user guide on how to use the set_output API. The available output containers are listed below.

Parameters transform: str or None, default=None
Configure the output of the transform, fit_transform, and inverse_transform method. If None, the configuration is not changed. Choose from:

  • "numpy"
  • "pandas" (default)
  • "pandas-pyarrow"
  • "polars"
  • "polars-lazy"
  • "pyarrow"
  • "modin"
  • "dask"
  • "pyspark"
  • "pyspark-pandas"

Returns self
Estimator instance.



method set_params(**params)[source]

Set the parameters of this estimator.

Parameters **params : dict
Estimator parameters.

Returns self : estimator instance
Estimator instance.



method transform(X, y=None)[source]

Apply the transformations to the data.

Parameters X: dataframe-like
Feature set with shape=(n_samples, n_features). If X is not a dataframe, it should be composed of a single feature containing the text documents.

y: sequence, dataframe-like or None, default=None
Do nothing. Implemented for continuity of the API.

Returns dataframe
Transformed corpus.