TextCleaner


class atom.nlp.TextCleaner(decode=True, lower_case=True, drop_email=True, regex_email=None, drop_url=True, regex_url=None, drop_html=True, regex_html=None, drop_emoji=True, regex_emoji=None, drop_number=True, regex_number=None, drop_punctuation=True, verbose=0, logger=None)[source]
Applies standard text cleaning to the corpus.

Transformations include normalizing characters and dropping noise from the text (emails, HTML tags, URLs, etc.). The transformations are applied to the column named corpus, in the same order as the parameters are presented. If there is no column with that name, an exception is raised.

This class can be accessed from atom through the textclean method. Read more in the user guide.
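For example, since the transformer only looks for a column named corpus, a dataframe whose text column carries another name can be renamed first. A minimal sketch (the column name and sample texts are made up for illustration):

import pandas as pd

# TextCleaner acts on the column named "corpus"
df = pd.DataFrame({"text": ["Mail me at john@example.com!", "See https://example.org"]})
df = df.rename(columns={"text": "corpus"})  # rename so the transformer can find the documents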

Parameters

decode: bool, default=True
Whether to decode unicode characters to their ascii representations.

lower_case: bool, default=True
Whether to convert all characters to lower case.

drop_email: bool, default=True
Whether to drop email addresses from the text.

regex_email: str, default=None
Regex used to search for email addresses. If None, it uses r"[\w.-]+@[\w-]+\.[\w.-]+".

drop_url: bool, default=True
Whether to drop URL links from the text.

regex_url: str, default=None
Regex used to search for URLs. If None, it uses r"https?://\S+|www\.\S+".

drop_html: bool, default=True
Whether to drop HTML tags from the text. This option is particularly useful if the data was scraped from a website.

regex_html: str, default=None
Regex used to search for HTML tags. If None, it uses r"<.*?>".

drop_emoji: bool, default=True
Whether to drop emojis from the text.

regex_emoji: str, default=None
Regex used to search for emojis. If None, it uses r":[a-z_]+:".

drop_number: bool, default=True
Whether to drop numbers from the text.

regex_number: str, default=None
Regex used to search for numbers. If None, it uses r"\d+". Note that numbers adjacent to letters are not removed.

drop_punctuation: bool, default=True
Whether to drop punctuation from the text. Characters considered punctuation are !"#$%&'()*+,-./:;<=>?@[\]^_~`.

verbose: int, default=0
Verbosity level of the class. Choose from:

  • 0 to not print anything.
  • 1 to print basic information.
  • 2 to print detailed information.

logger: str, Logger or None, default=None

  • If None: Doesn't save a logging file.
  • If str: Name of the log file. Use "auto" for automatic naming.
  • Else: Python logging.Logger instance.
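The drop_* flags and their regex_* counterparts can be combined freely. The sketch below is illustrative only: the pattern passed to regex_email is an example, not the class default, and the chosen flags are just one possible configuration.

from atom.nlp import TextCleaner

cleaner = TextCleaner(
    regex_email=r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b",  # custom (illustrative) email pattern
    drop_number=False,                            # keep numbers in the documents
    drop_punctuation=False,                       # keep punctuation as well
    verbose=1,
)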

Attributes

drops: pd.DataFrame
Encountered regex matches. The row indices correspond to the document index from which the occurrence was dropped.
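A hedged sketch of how this attribute can be inspected after transforming (the toy documents are invented for illustration):

import pandas as pd
from atom.nlp import TextCleaner

X = pd.DataFrame({"corpus": ["Mail me at foo@bar.com", "See <b>this</b> tag"]})
cleaner = TextCleaner()
X = cleaner.transform(X)
print(cleaner.drops)  # per-document overview of the dropped email/url/html/... matches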


See Also

TextNormalizer

Normalize the corpus.

Tokenizer

Tokenize the corpus.

Vectorizer

Vectorize text data.


Example

>>> import numpy as np
>>> from atom import ATOMClassifier
>>> from sklearn.datasets import fetch_20newsgroups

>>> X, y = fetch_20newsgroups(
...     return_X_y=True,
...     categories=[
...         'alt.atheism',
...         'sci.med',
...         'comp.windows.x',
...     ],
...     shuffle=True,
...     random_state=1,
... )
>>> X = np.array(X).reshape(-1, 1)

>>> atom = ATOMClassifier(X, y)
>>> print(atom.dataset)

                                                corpus  target
0     From: thssjxy@iitmax.iit.edu (Smile) Subject:...       2
1     From: nancyo@fraser.sfu.ca (Nancy Patricia O'C...       0
2     From: beck@irzr17.inf.tu-dresden.de (Andre Bec...       1
3     From: keith@cco.caltech.edu (Keith Allan Schne...       0
4     From: strom@Watson.Ibm.Com (Rob Strom) Subjec...       0
                                                 ...     ...
2841  From: dreitman@oregon.uoregon.edu (Daniel R. R...       3
2842  From: ethan@cs.columbia.edu (Ethan Solomita) ...       1
2843  From: r0506048@cml3 (Chun-Hung Lin) Subject: ...       1
2844  From: eshneken@ux4.cso.uiuc.edu (Edward A Shne...       2
2845  From: ibeshir@nyx.cs.du.edu (Ibrahim) Subject...       2

[2846 rows x 2 columns]

>>> atom.textclean(verbose=2)

Fitting TextCleaner...
Cleaning the corpus...
 --> Decoding unicode characters to ascii.
 --> Converting text to lower case.
 --> Dropping 10012 emails from 2830 documents.
 --> Dropping 0 URL links from 0 documents.
 --> Dropping 2214 HTML tags from 1304 documents.
 --> Dropping 2 emojis from 1 documents.
 --> Dropping 31222 numbers from 2843 documents.
 --> Dropping punctuation from the text.

>>> print(atom.dataset)

                                                corpus  target
0     from  smile subject forsale   used guitar amp...       2
1     from  nancy patricia oconnor subject re amusi...       0
2     from  andre beck subject re animation with xp...       1
3     from  keith allan schneider subject re moralt...       0
4     from  rob strom subject re socmotss et al pri...       0
                                                 ...     ...
2841  from  daniel r reitman attorney to be subject...       3
2842  from  ethan solomita subject forcing a window...       1
2843  from r0506048cml3 chunhung lin subject re xma...       1
2844  from  edward a shnekendorf subject airline ti...       2
2845  from  ibrahim subject terminal for sale orga...       2

[2846 rows x 2 columns]

>>> import numpy as np
>>> from atom.nlp import TextCleaner
>>> from sklearn.datasets import fetch_20newsgroups

>>> X, y = fetch_20newsgroups(
...     return_X_y=True,
...     categories=[
...         'alt.atheism',
...         'sci.med',
...         'comp.windows.x',
...     ],
...     shuffle=True,
...     random_state=1,
... )
>>> X = np.array(X).reshape(-1, 1)

>>> textcleaner = TextCleaner(verbose=2)
>>> X = textcleaner.transform(X)

Cleaning the corpus...
 --> Decoding unicode characters to ascii.
 --> Converting text to lower case.
 --> Dropping 10012 emails from 2830 documents.
 --> Dropping 0 URL links from 0 documents.
 --> Dropping 2214 HTML tags from 1304 documents.
 --> Dropping 2 emojis from 1 documents.
 --> Dropping 31222 numbers from 2843 documents.
 --> Dropping punctuation from the text.

>>> print(X)

                                                 corpus
0     from donald mackie  subject re barbecued food...
1     from  david stockton subject re krillean phot...
2     from  julia miller subject posix message cata...
3     from   subject re yet more rushdie re islamic...
4     from  joseph a muller subject jfk autograph f...
                                                 ...
2841  from  joel reymont subject motif maling list\...
2842  from  daniel paul checkman subject re is msg ...
2843  from  ad absurdum per aspera subject re its a...
2844  from  ralf subject items for sale organizati...
2845  from  walter g seefeld subject klipsch kg1 sp...

[2846 rows x 1 columns]


Methods

fit                  Does nothing.
fit_transform        Fit to data, then transform it.
get_params           Get parameters for this estimator.
inverse_transform    Does nothing.
log                  Print message and save to log file.
save                 Save the instance to a pickle file.
set_params           Set the parameters of this estimator.
transform            Apply the transformations to the data.


method fit(X=None, y=None, **fit_params)[source]
Does nothing.

Implemented for continuity of the API.

Parameters

X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: int, str, sequence, dataframe-like or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • If sequence: Target column with shape=(n_samples,) or sequence of column names or positions for multioutput tasks.
  • If dataframe-like: Target columns with shape=(n_samples, n_targets) for multioutput tasks.

**fit_params
Additional keyword arguments for the fit method.

Returns

self
Estimator instance.



method fit_transform(X=None, y=None, **fit_params)[source]
Fit to data, then transform it.

Parameters

X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: int, str, sequence, dataframe-like or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • If sequence: Target column with shape=(n_samples,) or sequence of column names or positions for multioutput tasks.
  • If dataframe-like: Target columns with shape=(n_samples, n_targets) for multioutput tasks.

**fit_params
Additional keyword arguments for the fit method.

Returns

dataframe
Transformed feature set. Only returned if provided.

series
Transformed target column. Only returned if provided.



method get_params(deep=True)[source]
Get parameters for this estimator.

Parameters

deep: bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params: dict
Parameter names mapped to their values.



method inverse_transform(X=None, y=None)[source]
Does nothing.

Parameters

X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: int, str, sequence, dataframe-like or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • If sequence: Target column with shape=(n_samples,) or sequence of column names or positions for multioutput tasks.
  • If dataframe-like: Target columns with shape=(n_samples, n_targets) for multioutput tasks.

Returns

dataframe
Transformed feature set. Only returned if provided.

series
Transformed target column. Only returned if provided.



method log(msg, level=0, severity="info")[source]
Print message and save to log file.

Parameters

msg: int, float or str
Message to save to the logger and print to stdout.

level: int, default=0
Minimum verbosity level to print the message.

severity: str, default="info"
Severity level of the message. Choose from: debug, info, warning, error, critical.
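A small sketch of how the logger parameter and the log method work together; the message text is illustrative:

from atom.nlp import TextCleaner

cleaner = TextCleaner(verbose=1, logger="auto")  # "auto" creates an automatically named log file
cleaner.log("Cleaning a new batch of documents...", level=1, severity="info")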



method save(filename="auto", save_data=True)[source]
Save the instance to a pickle file.

Parameters

filename: str, default="auto"
Name of the file. Use "auto" for automatic naming.

save_data: bool, default=True
Whether to save the dataset with the instance. This parameter is ignored if the method is not called from atom. If False, the data has to be provided to the load method when the instance is loaded.
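For instance, a configured cleaner can be written to disk like this (the file name is an illustrative choice):

from atom.nlp import TextCleaner

cleaner = TextCleaner(drop_emoji=False)
cleaner.save(filename="my_textcleaner")  # pickle the instance; use "auto" for automatic naming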



method set_params(**params)[source]
Set the parameters of this estimator.

Parameters

**params: dict
Estimator parameters.

Returns

self: estimator instance
Estimator instance.
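A short sketch of the scikit-learn style parameter accessors described above:

from atom.nlp import TextCleaner

cleaner = TextCleaner(drop_number=False)
print(cleaner.get_params()["drop_number"])  # False
cleaner.set_params(drop_number=True, lower_case=False)
print(cleaner.get_params()["lower_case"])   # False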



method transform(X, y=None)[source]
Apply the transformations to the data.

Parameters

X: dataframe-like
Feature set with shape=(n_samples, n_features). If X is not a dataframe, it should be composed of a single feature containing the text documents.

y: int, str, sequence, dataframe-like or None, default=None
Does nothing. Implemented for continuity of the API.

Returns

dataframe
Transformed corpus.