
Tokenizer


class atom.nlp.Tokenizer(bigram_freq=None, trigram_freq=None, quadgram_freq=None, verbose=0, logger=None)[source]
Tokenize the corpus.

Convert documents into sequences of words. Additionally, create n-grams (represented by words united with underscores, e.g. "New_York") based on their frequency in the corpus. The transformations are applied to the column named corpus. If there is no column with that name, an exception is raised.

This class can be accessed from atom through the tokenize method. Read more in the user guide.

Parameters

bigram_freq: int, float or None, default=None
Frequency threshold for bigram creation.

  • If None: Don't create any bigrams.
  • If int: Minimum number of occurrences to make a bigram.
  • If float: Minimum frequency fraction to make a bigram.

trigram_freq: int, float or None, default=None
Frequency threshold for trigram creation.

  • If None: Don't create any trigrams.
  • If int: Minimum number of occurrences to make a trigram.
  • If float: Minimum frequency fraction to make a trigram.

quadgram_freq: int, float or None, default=None
Frequency threshold for quadgram creation.

  • If None: Don't create any quadgrams.
  • If int: Minimum number of occurrences to make a quadgram.
  • If float: Minimum frequency fraction to make a quadgram.
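The int versus float interpretation of these thresholds can be sketched as follows. This is a minimal illustration, not ATOM's internal code; the helper name `frequent_bigrams` and the choice of "fraction of all bigram occurrences" as the base for the float threshold are assumptions.

```python
from collections import Counter


def frequent_bigrams(documents, bigram_freq):
    """Count adjacent word pairs and keep those meeting the threshold.

    An int threshold is read as an absolute number of occurrences;
    a float as a fraction of all bigram occurrences in the corpus.
    """
    counts = Counter(
        (a, b)
        for doc in documents
        for a, b in zip(doc, doc[1:])
    )
    total = sum(counts.values())
    if isinstance(bigram_freq, float):
        min_count = bigram_freq * total
    else:
        min_count = bigram_freq
    return {pair for pair, n in counts.items() if n >= min_count}


docs = [
    ["new", "york"],
    ["new", "york", "is", "nice"],
    ["this", "is", "a", "test"],
]
print(frequent_bigrams(docs, 2))     # absolute count: at least 2 occurrences
print(frequent_bigrams(docs, 0.25))  # fraction: at least 25% of all bigrams
```

Both calls keep only ("new", "york") here, since it is the only pair that occurs twice among the seven bigram occurrences.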

verbose: int, default=0
Verbosity level of the class. Choose from:

  • 0 to not print anything.
  • 1 to print basic information.
  • 2 to print detailed information.

logger: str, Logger or None, default=None

  • If None: Doesn't save a logging file.
  • If str: Name of the log file. Use "auto" for automatic naming.
  • Else: Python logging.Logger instance.

Attributes

bigrams: pd.DataFrame
Created bigrams and their frequencies.

trigrams: pd.DataFrame
Created trigrams and their frequencies.

quadgrams: pd.DataFrame
Created quadgrams and their frequencies.


See Also

TextCleaner

Applies standard text cleaning to the corpus.

TextNormalizer

Normalize the corpus.

Vectorizer

Vectorize text data.


Example

>>> from atom import ATOMClassifier

>>> X = [
...    ["I àm in ne'w york"],
...    ["New york is nice"],
...    ["new york"],
...    ["hi there this is a test!"],
...    ["another line..."],
...    ["new york is larger than washington"],
...    ["running the test"],
...    ["this is a test"],
... ]
>>> y = [1, 0, 0, 1, 1, 1, 0, 0]

>>> atom = ATOMClassifier(X, y)
>>> print(atom.dataset)

                               corpus  target
0                            new york       0
1  new york is larger than washington       1
2                    New york is nice       0
3                   I àm in ne'w york       1
4                      this is a test       0
5                     another line...       1
6                    running the test       0
7            hi there this is a test!       1

>>> atom.tokenize(verbose=2)

Fitting Tokenizer...
Tokenizing the corpus...

>>> print(atom.dataset)

                                      corpus  target
0                                [new, york]       0
1  [new, york, is, larger, than, washington]       1
2                      [New, york, is, nice]       0
3                [I, àm, in, ne, ', w, york]       1
4                        [this, is, a, test]       0
5                       [another, line, ...]       1
6                       [running, the, test]       0
7          [hi, there, this, is, a, test, !]       1
>>> from atom.nlp import Tokenizer

>>> X = [
...    ["I àm in ne'w york"],
...    ["New york is nice"],
...    ["new york"],
...    ["hi there this is a test!"],
...    ["another line..."],
...    ["new york is larger than washington"],
...    ["running the test"],
...    ["this is a test"],
... ]
>>> y = [1, 0, 0, 1, 1, 1, 0, 0]

>>> tokenizer = Tokenizer(bigram_freq=2, verbose=2)
>>> X = tokenizer.transform(X)

Fitting Tokenizer...
Tokenizing the corpus...
 --> Creating 5 bigrams on 10 locations.

>>> print(X)

                                     corpus
0               [I, àm, in, ne, ', w, york]
1                      [New, york_is, nice]
2                                [new_york]
3           [hi, there, this_is, a_test, !]
4                      [another, line, ...]
5  [new, york_is, larger, than, washington]
6                      [running, the, test]
7                         [this_is, a_test]
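The underscore merging shown in the output above can be sketched as follows. This is an illustrative re-implementation, not ATOM's source; the function name `merge_bigrams` and the left-to-right, non-overlapping scan are assumptions.

```python
def merge_bigrams(tokens, bigrams):
    """Replace each adjacent pair found in `bigrams` with a single
    underscore-joined token, scanning left to right without overlap."""
    result = []
    i = 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) in bigrams:
            result.append(f"{tokens[i]}_{tokens[i + 1]}")
            i += 2  # consume both tokens of the merged pair
        else:
            result.append(tokens[i])
            i += 1
    return result


bigrams = {("new", "york"), ("this", "is"), ("a", "test")}
print(merge_bigrams(["new", "york"], bigrams))
print(merge_bigrams(["hi", "there", "this", "is", "a", "test", "!"], bigrams))
```

The second call reproduces row 3 of the output above: the frequent pairs become "this_is" and "a_test", while the remaining tokens pass through unchanged.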


Methods

  • fit: Does nothing.
  • fit_transform: Fit to data, then transform it.
  • get_params: Get parameters for this estimator.
  • inverse_transform: Does nothing.
  • log: Print message and save to log file.
  • save: Save the instance to a pickle file.
  • set_params: Set the parameters of this estimator.
  • transform: Tokenize the text.


method fit(X=None, y=None, **fit_params)[source]
Does nothing.

Implemented for continuity of the API.

Parameters

X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: int, str, sequence, dataframe-like or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • If sequence: Target column with shape=(n_samples,) or sequence of column names or positions for multioutput tasks.
  • If dataframe-like: Target columns with shape=(n_samples, n_targets) for multioutput tasks.

**fit_params
Additional keyword arguments for the fit method.

Returns

self
Estimator instance.



method fit_transform(X=None, y=None, **fit_params)[source]
Fit to data, then transform it.

Parameters

X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: int, str, sequence, dataframe-like or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • If sequence: Target column with shape=(n_samples,) or sequence of column names or positions for multioutput tasks.
  • If dataframe-like: Target columns with shape=(n_samples, n_targets) for multioutput tasks.

**fit_params
Additional keyword arguments for the fit method.

Returns

dataframe
Transformed feature set. Only returned if provided.

series
Transformed target column. Only returned if provided.



method get_params(deep=True)[source]
Get parameters for this estimator.

Parameters

deep: bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params: dict
Parameter names mapped to their values.



method inverse_transform(X=None, y=None)[source]
Does nothing.

Parameters

X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: int, str, sequence, dataframe-like or None, default=None
Target column corresponding to X.

  • If None: y is ignored.
  • If int: Position of the target column in X.
  • If str: Name of the target column in X.
  • If sequence: Target column with shape=(n_samples,) or sequence of column names or positions for multioutput tasks.
  • If dataframe-like: Target columns with shape=(n_samples, n_targets) for multioutput tasks.

Returns

dataframe
Transformed feature set. Only returned if provided.

series
Transformed target column. Only returned if provided.



method log(msg, level=0, severity="info")[source]
Print message and save to log file.

Parameters

msg: int, float or str
Message to save to the logger and print to stdout.

level: int, default=0
Minimum verbosity level to print the message.

severity: str, default="info"
Severity level of the message. Choose from: debug, info, warning, error, critical.



method save(filename="auto", save_data=True)[source]
Save the instance to a pickle file.

Parameters

filename: str, default="auto"
Name of the file. Use "auto" for automatic naming.

save_data: bool, default=True
Whether to save the dataset with the instance. This parameter is ignored if the method is not called from atom. If False, add the data to the load method.



method set_params(**params)[source]
Set the parameters of this estimator.

Parameters

**params: dict
Estimator parameters.

Returns

self: estimator instance
Estimator instance.



method transform(X, y=None)[source]
Tokenize the text.

Parameters

X: dataframe-like
Feature set with shape=(n_samples, n_features). If X is not a dataframe, it should be composed of a single feature containing the text documents.

y: int, str, sequence, dataframe-like or None, default=None
Does nothing. Implemented for continuity of the API.

Returns

dataframe
Transformed corpus.
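The word/punctuation splitting visible in the example outputs can be approximated with a short regular expression. This is a rough sketch, not the class's actual tokenizer (which may rely on an external library such as nltk); the function name `tokenize` and the exact pattern are assumptions.

```python
import re


def tokenize(document):
    """Split a document into word and punctuation tokens.

    Matches, in order of preference: runs of word characters,
    a literal ellipsis "...", or any single punctuation character.
    """
    return re.findall(r"\w+|\.\.\.|[^\w\s]", document)


print(tokenize("hi there this is a test!"))
print(tokenize("another line..."))
print(tokenize("I àm in ne'w york"))
```

Note how the apostrophe in "ne'w" becomes its own token and the ellipsis stays whole, matching the behavior shown in the example above.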