
Tokenizer


class atom.nlp.Tokenizer(bigram_freq=None, trigram_freq=None, quadgram_freq=None, verbose=0, logger=None) [source]

Convert documents into sequences of words. Additionally, create n-grams (represented by words united with underscores, e.g. "New_York") based on their frequency in the corpus. The transformations are applied to the column named Corpus; if there is no column with that name, an exception is raised. This class can be accessed from atom through the tokenize method. Read more in the user guide.
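
A minimal sketch of the expected input (the column name Corpus comes from this page; the example documents are made up):

import pandas as pd
from atom.nlp import Tokenizer

# The transformer expects a column named "Corpus" holding the text documents.
X = pd.DataFrame({"Corpus": ["I love New York", "New York is big"]})

tokenizer = Tokenizer(bigram_freq=2)
X = tokenizer.transform(X)  # every document becomes a sequence of words;
                            # frequent word pairs are joined, e.g. "New_York"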

Parameters:

bigram_freq: int, float or None, optional (default=None)
Frequency threshold for bigram creation (the int/float interpretation is sketched after this parameter list).
  • If None: Don't create any bigrams.
  • If int: Minimum number of occurrences to make a bigram.
  • If float: Minimum frequency fraction to make a bigram.
trigram_freq: int, float or None, optional (default=None)
Frequency threshold for trigram creation.
  • If None: Don't create any trigrams.
  • If int: Minimum number of occurrences to make a trigram.
  • If float: Minimum frequency fraction to make a trigram.
quadgram_freq: int, float or None, optional (default=None)
Frequency threshold for quadgram creation.
  • If None: Don't create any quadgrams.
  • If int: Minimum number of occurrences to make a quadgram.
  • If float: Minimum frequency fraction to make a quadgram.
verbose: int, optional (default=0)
Verbosity level of the class. Possible values are:
  • 0 to not print anything.
  • 1 to print basic information.
  • 2 to print detailed information.
logger: str, Logger or None, optional (default=None)
  • If None: Doesn't save a logging file.
  • If str: Name of the log file. Use "auto" for automatic naming.
  • Else: Python logging.Logger instance.
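
The int/float distinction for the n-gram thresholds can be read roughly as follows. This is a hypothetical helper, not the library's implementation; passes_threshold and n_candidates are made-up names:

def passes_threshold(n_occurrences, n_candidates, freq):
    """Hypothetical: decide whether an n-gram occurs often enough."""
    if freq is None:
        return False  # n-gram creation is disabled
    if isinstance(freq, int):
        return n_occurrences >= freq  # minimum absolute number of occurrences
    return n_occurrences / n_candidates >= freq  # minimum frequency fraction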


Attributes

bigrams: pd.DataFrame
Created bigrams and their frequencies.

trigrams: pd.DataFrame
Created trigrams and their frequencies.

quadgrams: pd.DataFrame
Created quadgrams and their frequencies.
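
After transforming a corpus, the created n-grams can be inspected through these attributes. A short sketch (the exact columns of the frames are not listed on this page):

tokenizer = Tokenizer(bigram_freq=2)
X = tokenizer.transform(X)
print(tokenizer.bigrams)  # created bigrams and their frequencies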


Methods

fit_transform: Same as transform.
get_params: Get parameters for this estimator.
log: Write information to the logger and print to stdout.
save: Save the instance to a pickle file.
set_params: Set the parameters of this estimator.
transform: Transform the text.


method fit_transform(X, y=None) [source]

Tokenize the text.

Parameters:

X: dict, list, tuple, np.ndarray or pd.DataFrame
Feature set with shape=(n_samples, n_features). If X is not a pd.DataFrame, it should be composed of a single feature containing the text documents.

y: int, str, sequence or None, optional (default=None)
Does nothing. Implemented for continuity of the API.
Returns:

X: pd.DataFrame
Transformed corpus.
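
A minimal usage sketch (the corpus is made up; fit_transform behaves like transform):

import pandas as pd
from atom.nlp import Tokenizer

X = pd.DataFrame({"Corpus": ["new york pizza", "i love new york"]})
X_t = Tokenizer(bigram_freq=2).fit_transform(X)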


method get_params(deep=True) [source]

Get parameters for this estimator.

Parameters:

deep: bool, optional (default=True)
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params: dict
Dictionary of the parameter names mapped to their values.
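
For example (the returned dictionary mirrors the constructor parameters above; the shown key order is illustrative):

tokenizer = Tokenizer(bigram_freq=0.01, verbose=1)
tokenizer.get_params()
# {'bigram_freq': 0.01, 'logger': None, 'quadgram_freq': None,
#  'trigram_freq': None, 'verbose': 1}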


method log(msg, level=0) [source]

Write a message to the logger and print it to stdout.

Parameters:

msg: str
Message to write to the logger and print to stdout.

level: int, optional (default=0)
Minimum verbosity level to print the message.
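
A short sketch of its use; the message is only printed when the instance's verbosity is at least the given level:

tokenizer = Tokenizer(verbose=1, logger="auto")
tokenizer.log("Tokenizing the corpus...", level=1)  # printed (verbose >= level) and logged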


method save(filename="auto") [source]

Save the instance to a pickle file.

Parameters:

filename: str, optional (default="auto")
Name of the file. Use "auto" for automatic naming.
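
For example (the exact file name written to disk is an assumption):

tokenizer.save()                         # "auto" derives the name from the class
tokenizer.save(filename="my_tokenizer")  # custom file name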


method set_params(**params) [source]

Set the parameters of this estimator.

Parameters:

**params: dict
Estimator parameters.

Returns:

self: Tokenizer
Estimator instance.
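
For example:

tokenizer = Tokenizer()
tokenizer.set_params(bigram_freq=5, trigram_freq=3)  # returns the Tokenizer instance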


method transform(X, y=None) [source]

Tokenize the text.

Parameters:

X: dict, list, tuple, np.ndarray or pd.DataFrame
Feature set with shape=(n_samples, n_features). If X is not a pd.DataFrame, it should be composed of a single feature containing the text documents.

y: int, str, sequence or None, optional (default=None)
Does nothing. Implemented for continuity of the API.
Returns:

X: pd.DataFrame
Transformed corpus.
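
Array-like input holding a single text feature is also accepted. A sketch (wrapping each document in its own row is an assumption about how such input is interpreted):

docs = [["new york is big"], ["i love new york"]]  # one text feature per row
X = Tokenizer(bigram_freq=2).transform(docs)       # returned as a pd.DataFrame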


Example

from atom import ATOMClassifier

atom = ATOMClassifier(X, y)
atom.tokenize(bigram_freq=0.01)

# or, using the class directly:
from atom.nlp import Tokenizer

tokenizer = Tokenizer(bigram_freq=0.01)
X = tokenizer.transform(X)
