Tokenizer


class atom.nlp.Tokenizer(bigram_freq=None, trigram_freq=None, quadgram_freq=None, verbose=0)[source]

Tokenize the corpus.

Convert documents into sequences of words. Additionally, create n-grams (represented by words united with underscores, e.g., "New_York") based on their frequency in the corpus. The transformations are applied to the column named corpus. If there is no column with that name, an exception is raised.

This class can be accessed from atom through the tokenize method. Read more in the user guide.

Parameters
bigram_freq: int, float or None, default=None
Frequency threshold for bigram creation.

  • If None: Don't create any bigrams.
  • If int: Minimum number of occurrences to make a bigram.
  • If float: Minimum frequency fraction to make a bigram.

trigram_freq: int, float or None, default=None
Frequency threshold for trigram creation.

  • If None: Don't create any trigrams.
  • If int: Minimum number of occurrences to make a trigram.
  • If float: Minimum frequency fraction to make a trigram.

quadgram_freq: int, float or None, default=None
Frequency threshold for quadgram creation.

  • If None: Don't create any quadgrams.
  • If int: Minimum number of occurrences to make a quadgram.
  • If float: Minimum frequency fraction to make a quadgram.

verbose: int, default=0
Verbosity level of the class. Choose from:

  • 0 to not print anything.
  • 1 to print basic information.
  • 2 to print detailed information.

Attributes
bigrams_: pd.DataFrame
Created bigrams and their frequencies.

trigrams_: pd.DataFrame
Created trigrams and their frequencies.

quadgrams_: pd.DataFrame
Created quadgrams and their frequencies.

feature_names_in_: np.ndarray
Names of features seen during fit.

n_features_in_: int
Number of features seen during fit.


See Also

TextCleaner

Applies standard text cleaning to the corpus.

TextNormalizer

Normalize the corpus.

Vectorizer

Vectorize text data.


Example

>>> from atom import ATOMClassifier

>>> X = [
...    ["I àm in ne'w york"],
...    ["New york is nice"],
...    ["new york"],
...    ["hi there this is a test!"],
...    ["another line..."],
...    ["new york is larger than washington"],
...    ["running the test"],
...    ["this is a test"],
... ]
>>> y = [1, 0, 0, 1, 1, 1, 0, 0]

>>> atom = ATOMClassifier(X, y, test_size=2, random_state=1)
>>> print(atom.dataset)

                               corpus  target
0                            new york       0
1                     another line...       1
2                    New york is nice       0
3  new york is larger than washington       1
4                    running the test       0
5                   I àm in ne'w york       1
6                      this is a test       0
7            hi there this is a test!       1

>>> atom.tokenize(verbose=2)

Fitting Tokenizer...
Tokenizing the corpus...

>>> print(atom.dataset)

                                      corpus  target
0                                [new, york]       0
1                       [another, line, ...]       1
2                      [New, york, is, nice]       0
3  [new, york, is, larger, than, washington]       1
4                       [running, the, test]       0
5                [I, àm, in, ne, ', w, york]       1
6                        [this, is, a, test]       0
7          [hi, there, this, is, a, test, !]       1

>>> from atom.nlp import Tokenizer

>>> X = [
...    ["I àm in ne'w york"],
...    ["New york is nice"],
...    ["new york"],
...    ["hi there this is a test!"],
...    ["another line..."],
...    ["new york is larger than washington"],
...    ["running the test"],
...    ["this is a test"],
... ]

>>> tokenizer = Tokenizer(bigram_freq=2, verbose=2)
>>> X = tokenizer.transform(X)

Tokenizing the corpus...
 --> Creating 5 bigrams on 10 locations.

>>> print(X)

                                     corpus
0               [I, àm, in, ne, ', w, york]
1                      [New, york_is, nice]
2                                [new_york]
3           [hi, there, this_is, a_test, !]
4                      [another, line, ...]
5  [new, york_is, larger, than, washington]
6                      [running, the, test]
7                         [this_is, a_test]
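
The created n-grams can be inspected afterwards through the attributes listed above. A minimal sketch, continuing from the stand-alone example (the exact contents depend on the corpus):

>>> print(tokenizer.bigrams_)  # dataframe with each created bigram and its frequency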


Methods

fit: Do nothing.
fit_transform: Fit to data, then transform it.
get_feature_names_out: Get output feature names for transformation.
get_params: Get parameters for this estimator.
inverse_transform: Do nothing.
set_output: Set output container.
set_params: Set the parameters of this estimator.
transform: Tokenize the text.


method fit(X=None, y=None, **fit_params)[source]

Do nothing.

Implemented for continuity of the API.

Parameters
X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: sequence, dataframe-like or None, default=None
Target column(s) corresponding to X. If None, y is ignored.

**fit_params
Additional keyword arguments for the fit method.

Returns
self
Estimator instance.
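
Although fit does nothing, it returns the estimator itself, so the usual scikit-learn pattern of chaining fit and transform still applies. A minimal sketch, reusing the X defined in the example above:

>>> from atom.nlp import Tokenizer

>>> tokenizer = Tokenizer(bigram_freq=2)
>>> X_t = tokenizer.fit(X).transform(X)  # fit is a no-op; transform does the actual work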



method fit_transform(X=None, y=None, **fit_params)[source]

Fit to data, then transform it.

Parameters
X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: sequence, dataframe-like or None, default=None
Target column(s) corresponding to X. If None, y is ignored.

**fit_params
Additional keyword arguments for the fit method.

Returns
dataframe
Transformed feature set. Only returned if provided.

series or dataframe
Transformed target column. Only returned if provided.
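
Since there is no real fitting step, fit_transform gives the same result as calling transform directly. A small sketch, again reusing the X from the example above:

>>> from atom.nlp import Tokenizer

>>> X_t = Tokenizer(bigram_freq=2).fit_transform(X)  # equivalent to .transform(X)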



method get_feature_names_out(input_features=None)[source]

Get output feature names for transformation.

Parameters
input_features: array-like of str or None, default=None
Input features.

  • If input_features is None, then feature_names_in_ is used as feature names in. If feature_names_in_ is not defined, then the following input feature names are generated: ["x0", "x1", ..., "x(n_features_in_ - 1)"].
  • If input_features is an array-like, then input_features must match feature_names_in_ if feature_names_in_ is defined.

Returns
feature_names_out: ndarray of str objects
Same as input features.
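
Tokenization neither adds nor removes columns, so the output feature names are the same as the input ones. A hedged sketch for a tokenizer that has seen a dataframe during fit (the exact array depends on the column names):

>>> tokenizer.get_feature_names_out()  # e.g. array(['corpus'], dtype=object)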



method get_params(deep=True)[source]

Get parameters for this estimator.

Parameters
deep: bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params: dict
Parameter names mapped to their values.
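
As with any scikit-learn-compatible estimator, get_params returns the constructor arguments, which is what makes the class usable in pipelines and hyperparameter searches. A small sketch; the returned dictionary should simply mirror the signature shown at the top of this page:

>>> from atom.nlp import Tokenizer

>>> Tokenizer(bigram_freq=2).get_params()  # {'bigram_freq': 2, 'quadgram_freq': None, 'trigram_freq': None, 'verbose': 0}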



method inverse_transform(X=None, y=None, **fit_params)[source]

Do nothing.

Returns the input unchanged. Implemented for continuity of the API.

Parameters
X: dataframe-like or None, default=None
Feature set with shape=(n_samples, n_features). If None, X is ignored.

y: sequence, dataframe-like or None, default=None
Target column(s) corresponding to X. If None, y is ignored.

Returns
dataframe
Feature set. Only returned if provided.

series or dataframe
Target column(s). Only returned if provided.
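
Tokenization is not reversed: inverse_transform simply hands back whatever it receives, which keeps the class usable inside pipelines whose inverse_transform is called end to end. A minimal sketch, where X_t is the output of an earlier transform call:

>>> X_same = tokenizer.inverse_transform(X_t)  # X_t is returned unchanged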



method set_output(transform=None)[source]

Set output container.

See sklearn's user guide on how to use the set_output API. See here for a description of the choices.

Parameters
transform: str or None, default=None
Configure the output of the transform, fit_transform, and inverse_transform methods. If None, the configuration is not changed. Choose from:

  • "numpy"
  • "pandas" (default)
  • "pandas-pyarrow"
  • "polars"
  • "polars-lazy"
  • "pyarrow"
  • "modin"
  • "dask"
  • "pyspark"
  • "pyspark-pandas"

Returns
Self
Estimator instance.
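
A hedged sketch of the set_output API: the chosen container only affects how transform wraps its result, not the tokenization itself. pandas is already the default, so the call below is only needed when another container is preferred (and that container's library is installed):

>>> from atom.nlp import Tokenizer

>>> tokenizer = Tokenizer(bigram_freq=2).set_output(transform="pandas")
>>> X_t = tokenizer.transform(X)  # X_t is returned as a pandas DataFrame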



method set_params(**params)[source]

Set the parameters of this estimator.

Parameters
**params: dict
Estimator parameters.

Returns
self: estimator instance
Estimator instance.
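
set_params is the counterpart of get_params and is how tools such as grid searches reconfigure the estimator in place. A small sketch, assuming we want to loosen the bigram threshold after construction:

>>> from atom.nlp import Tokenizer

>>> tokenizer = Tokenizer(bigram_freq=5)
>>> _ = tokenizer.set_params(bigram_freq=2, verbose=1)  # returns the estimator itself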



method transform(X, y=None)[source]

Tokenize the text.

Parameters
X: dataframe-like
Feature set with shape=(n_samples, n_features). If X is not a dataframe, it should be composed of a single feature containing the text documents.

y: sequence, dataframe-like or None, default=None
Do nothing. Implemented for continuity of the API.

Returns
dataframe
Transformed corpus.
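
As the stand-alone example above shows, transform can be called directly after construction, since the n-gram thresholds are applied to the data being transformed rather than learned during fit. A minimal sketch with a dataframe input; the column must be named corpus, and the documents here are hypothetical:

>>> import pandas as pd
>>> from atom.nlp import Tokenizer

>>> df = pd.DataFrame({"corpus": ["new york is nice", "i live in new york"]})
>>> out = Tokenizer().transform(df)  # each document becomes a list of word tokens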