Example: NLP
This example shows how to use ATOM to quickly go from raw text data to model predictions.
Import the 20 newsgroups text dataset from sklearn.datasets. The dataset comprises around 18,000 newsgroup posts on 20 topics. The goal is to predict the topic of each post.
Load the data
In [1]:
import numpy as np
from atom import ATOMClassifier
from sklearn.datasets import fetch_20newsgroups
In [2]:
# Use only a subset of the available topics for faster processing
X_text, y_text = fetch_20newsgroups(
    return_X_y=True,
    categories=[
        'sci.med',
        'comp.windows.x',
        'misc.forsale',
        'rec.autos',
    ],
    shuffle=True,
    random_state=1,
)
X_text = np.array(X_text).reshape(-1, 1)
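Note that `fetch_20newsgroups` returns a flat list of strings, while ATOM expects a two-dimensional feature matrix, hence the reshape into a single column. A minimal illustration (the `docs` list is hypothetical):

```python
import numpy as np

docs = ["first post", "second post"]  # hypothetical documents
X = np.array(docs).reshape(-1, 1)     # one column, one row per document
print(X.shape)                        # (2, 1)
```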
Run the pipeline
In [3]:
atom = ATOMClassifier(X_text, y_text, index=True, test_size=0.3, verbose=2, random_state=1)
<< ================== ATOM ================== >>

Configuration ==================== >>
Algorithm task: Multiclass classification.

Dataset stats ==================== >>
Shape: (2366, 2)
Train set size: 1657
Test set size: 709
-------------------------------------
Memory: 122.87 kB
Scaled: False
Categorical features: 1 (100.0%)
In [4]:
atom.dataset # Note that the feature is automatically named 'corpus'
Out[4]:
|      | corpus | target |
|------|--------|--------|
| 1731 | From: rlm@helen.surfcty.com (Robert L. McMilli... | 0 |
| 1496 | From: carl@SOL1.GPS.CALTECH.EDU (Carl J Lydick... | 3 |
| 1290 | From: thssjxy@iitmax.iit.edu (Smile)\nSubject:... | 1 |
| 2021 | From: c23st@kocrsv01.delcoelect.com (Spiros Tr... | 2 |
| 142  | From: ginkgo@ecsvax.uncecs.edu (J. Geary Morto... | 1 |
| ...  | ... | ... |
| 510  | From: mary@uicsl.csl.uiuc.edu (Mary E. Allison... | 3 |
| 1948 | From: ndd@sunbar.mc.duke.edu (Ned Danieley)\nS... | 0 |
| 798  | From: kk@unisql.UUCP (Kerry Kimbrough)\nSubjec... | 0 |
| 2222 | From: hamachi@adobe.com (Gordon Hamachi)\nSubj... | 2 |
| 2215 | From: mobasser@vu-vlsi.ee.vill.edu (Bijan Moba... | 2 |
2366 rows × 2 columns
In [5]:
# Let's have a look at the first document
atom.corpus[0]
Out[5]:
'From: caf@omen.UUCP (Chuck Forsberg WA7KGX)\nSubject: Re: My New Diet --> IT WORKS GREAT !!!!\nOrganization: Omen Technology INC, Portland Rain Forest\nLines: 32\n\nIn article <1qk6v3INNrm6@lynx.unm.edu> bhjelle@carina.unm.edu () writes:\n>\n>Gordon Banks:\n>\n>>a lot to keep from going back to morbid obesity. I think all\n>>of us cycle. One\'s success depends on how large the fluctuations\n>>in the cycle are. Some people can cycle only 5 pounds. Unfortunately,\n>>I\'m not one of them.\n>>\n>>\n>This certainly describes my situation perfectly. For me there is\n>a constant dynamic between my tendency to eat, which appears to\n>be totally limitless, and the purely conscious desire to not\n>put on too much weight. When I get too fat, I just diet/exercise\n>more (with varying degrees of success) to take off the\n>extra weight. Usually I cycle within a 15 lb range, but\n>smaller and larger cycles occur as well. I\'m always afraid\n>that this method will stop working someday, but usually\n>I seem to be able to hold the weight gain in check.\n>This is one reason I have a hard time accepting the notion\n>of some metabolic derangement associated with cycle dieting\n>(that results in long-term weight gain). I have been cycle-\n>dieting for at least 20 years without seeing such a change.\n\nAs mentioned in Adiposity 101, only some experience weight\nrebound. The fact that you don\'t doesn\'t prove it doesn\'t\nhappen to others.\n-- \nChuck Forsberg WA7KGX ...!tektronix!reed!omen!caf \nAuthor of YMODEM, ZMODEM, Professional-YAM, ZCOMM, and DSZ\n Omen Technology Inc "The High Reliability Software"\n17505-V NW Sauvie IS RD Portland OR 97231 503-621-3406\n'
In [6]:
# Clean the documents from noise (emails, numbers, etc...)
atom.textclean()
Fitting TextCleaner...
Cleaning the corpus...
 --> Decoding unicode characters to ascii.
 --> Converting text to lower case.
 --> Dropping emails from documents.
 --> Dropping URL links from documents.
 --> Dropping HTML tags from documents.
 --> Dropping emojis from documents.
 --> Dropping numbers from documents.
 --> Dropping punctuation from the text.
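The steps listed above are essentially a chain of string operations and regex substitutions. A minimal sketch of a comparable cleaning pass with plain `re` (an illustration, not ATOM's actual implementation):

```python
import re
import string

def clean(doc):
    """Apply cleaning steps similar to those listed above (a sketch)."""
    doc = doc.lower()                                # lower case
    doc = re.sub(r"\S+@\S+", "", doc)                # emails
    doc = re.sub(r"https?://\S+|www\.\S+", "", doc)  # URL links
    doc = re.sub(r"<.*?>", "", doc)                  # HTML tags
    doc = re.sub(r"\d+", "", doc)                    # numbers
    return doc.translate(str.maketrans("", "", string.punctuation))  # punctuation
```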
In [7]:
# Check how the first document changed
atom.corpus[0]
Out[7]:
'from chuck forsberg wa7kgx\nsubject re my new diet it works great \norganization omen technology inc portland rain forest\nlines \n\nin article writes\n\ngordon banks\n\na lot to keep from going back to morbid obesity i think all\nof us cycle ones success depends on how large the fluctuations\nin the cycle are some people can cycle only pounds unfortunately\nim not one of them\n\n\nthis certainly describes my situation perfectly for me there is\na constant dynamic between my tendency to eat which appears to\nbe totally limitless and the purely conscious desire to not\nput on too much weight when i get too fat i just dietexercise\nmore with varying degrees of success to take off the\nextra weight usually i cycle within a lb range but\nsmaller and larger cycles occur as well im always afraid\nthat this method will stop working someday but usually\ni seem to be able to hold the weight gain in check\nthis is one reason i have a hard time accepting the notion\nof some metabolic derangement associated with cycle dieting\nthat results in longterm weight gain i have been cycle\ndieting for at least years without seeing such a change\n\nas mentioned in adiposity only some experience weight\nrebound the fact that you dont doesnt prove it doesnt\nhappen to others\n \nchuck forsberg wa7kgx tektronixreedomencaf \nauthor of ymodem zmodem professionalyam zcomm and dsz\n omen technology inc the high reliability software\nv nw sauvie is rd portland or \n'
In [8]:
# Convert the strings to a sequence of words
atom.tokenize()
Fitting Tokenizer...
Tokenizing the corpus...
In [9]:
# Print the first few words of the first document
atom.corpus[0][:7]
Out[9]:
['from', 'chuck', 'forsberg', 'wa7kgx', 'subject', 're', 'my']
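Tokenization splits each document string into a list of words, for example with NLTK's `word_tokenize` (shown here as a comparable sketch; ATOM's Tokenizer produces the same kind of output):

```python
import nltk
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)
word_tokenize("from chuck forsberg wa7kgx subject re my")
# ['from', 'chuck', 'forsberg', 'wa7kgx', 'subject', 're', 'my']
```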
In [10]:
# Normalize the text to a predefined standard
atom.textnormalize(stopwords="english", lemmatize=True)
Fitting TextNormalizer...
Normalizing the corpus...
 --> Dropping stopwords.
 --> Applying lemmatization.
In [11]:
atom.corpus[0][:7] # Check changes...
Out[11]:
['chuck', 'forsberg', 'wa7kgx', 'subject', 'new', 'diet', 'work']
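Stopword removal and lemmatization can be reproduced with NLTK. The sketch below assumes the corpus is already tokenized and uses NLTK's English stopword list and WordNet lemmatizer (an illustration, not necessarily ATOM's exact implementation):

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)

stop = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def normalize(tokens):
    """Drop stopwords, then reduce each remaining word to its lemma."""
    return [lemmatizer.lemmatize(t) for t in tokens if t not in stop]

normalize(["from", "chuck", "my", "new", "diet", "works"])
# ['chuck', 'new', 'diet', 'work']
```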
In [12]:
# Visualize the most common words with a wordcloud
atom.plot_wordcloud(figsize=(700, 500))
In [13]:
# Have a look at the most frequent bigrams
atom.plot_ngrams(2)
In [14]:
# Create the bigrams using the tokenizer
atom.tokenize(bigram_freq=215)
Fitting Tokenizer...
Tokenizing the corpus...
 --> Creating 7 bigrams on 3128 locations.
In [15]:
atom.bigrams_
Out[15]:
|   | bigram | frequency |
|---|--------|-----------|
| 0 | x_x | 1168 |
| 1 | line_article | 532 |
| 2 | line_nntppostinghost | 389 |
| 3 | organization_university | 331 |
| 4 | gordon_bank | 266 |
| 5 | distribution_usa | 227 |
| 6 | line_distribution | 215 |
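The `bigram_freq=215` argument merges every adjacent word pair that occurs at least 215 times in the corpus into a single token (e.g. `x_x`), which matches the lowest frequency in the table above. The frequencies can be verified by hand; a sketch, where `docs` is assumed to be the list of tokenized documents:

```python
from collections import Counter

counts = Counter()
for doc in docs:                      # docs: hypothetical tokenized corpus
    counts.update(zip(doc, doc[1:]))  # count adjacent token pairs

counts.most_common(7)                 # the seven most frequent bigrams
```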
In [16]:
# As a last step before modelling, convert the words to vectors
atom.vectorize(strategy="tfidf")
Fitting Vectorizer...
Vectorizing the corpus...
In [17]:
# The dimensionality of the dataset has increased a lot!
atom.shape
Out[17]:
(2366, 24176)
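With `strategy="tfidf"`, every document becomes a row of term weights: term frequency scaled down by how common the term is across all documents. Roughly the same transformation with scikit-learn directly (a sketch; ATOM wraps a similar transformer and keeps the output sparse):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer()
X_tfidf = vectorizer.fit_transform([
    "my new diet works great",
    "new x window manager",
])
print(X_tfidf.shape)  # one column per unique word, stored as a sparse matrix
```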
In [18]:
# Note that the data is sparse and the columns are named
# after the words they represent
atom.dtypes
Out[18]:
corpus_000000e5    Sparse[float64, 0]
corpus_00000ee5    Sparse[float64, 0]
corpus_000010af    Sparse[float64, 0]
corpus_0007259d    Sparse[float64, 0]
corpus_00072a27    Sparse[float64, 0]
                          ...
corpus_zurich      Sparse[float64, 0]
corpus_zvi         Sparse[float64, 0]
corpus_zx          Sparse[float64, 0]
corpus_zz          Sparse[float64, 0]
target                          int64
Length: 24176, dtype: object
In [19]:
# When the dataset is sparse, stats() shows the density
atom.stats()
Dataset stats ==================== >>
Shape: (2366, 24176)
Train set size: 1657
Test set size: 709
-------------------------------------
Memory: 2.54 MB
Sparse: True
Density: 0.35%
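Density is the fraction of stored (non-zero) values in the matrix. For a scipy sparse matrix it can be computed directly; a sketch with a hypothetical random matrix of roughly the same shape:

```python
from scipy import sparse

# Hypothetical sparse matrix with ~0.35% of its values stored
X = sparse.random(2366, 24175, density=0.0035, format="csr", random_state=1)
print(f"{X.nnz / (X.shape[0] * X.shape[1]):.2%}")  # ~0.35%
```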
In [20]:
# Check which models have support for sparse matrices
atom.available_models(accepts_sparse=True)
Out[20]:
|    | acronym | fullname | estimator | module | handles_missing | needs_scaling | accepts_sparse | native_multilabel | native_multioutput | validation | supports_engines |
|----|---------|----------|-----------|--------|-----------------|---------------|----------------|-------------------|--------------------|------------|------------------|
| 0  | AdaB  | AdaBoost | AdaBoostClassifier | sklearn.ensemble._weight_boosting | False | False | True | False | False | None | sklearn |
| 1  | Bag   | Bagging | BaggingClassifier | sklearn.ensemble._bagging | True | False | True | False | False | None | sklearn |
| 2  | BNB   | BernoulliNB | BernoulliNB | sklearn.naive_bayes | False | False | True | False | False | None | sklearn, cuml |
| 3  | CatB  | CatBoost | CatBoostClassifier | catboost.core | True | True | True | False | False | n_estimators | catboost |
| 4  | CatNB | CategoricalNB | CategoricalNB | sklearn.naive_bayes | False | False | True | False | False | None | sklearn, cuml |
| 5  | CNB   | ComplementNB | ComplementNB | sklearn.naive_bayes | False | False | True | False | False | None | sklearn, cuml |
| 6  | Tree  | DecisionTree | DecisionTreeClassifier | sklearn.tree._classes | True | False | True | True | True | None | sklearn |
| 7  | ETree | ExtraTree | ExtraTreeClassifier | sklearn.tree._classes | False | False | True | True | True | None | sklearn |
| 8  | ET    | ExtraTrees | ExtraTreesClassifier | sklearn.ensemble._forest | False | False | True | True | True | None | sklearn |
| 9  | GBM   | GradientBoostingMachine | GradientBoostingClassifier | sklearn.ensemble._gb | False | False | True | False | False | None | sklearn |
| 10 | KNN   | KNearestNeighbors | KNeighborsClassifier | sklearn.neighbors._classification | False | True | True | True | True | None | sklearn, sklearnex, cuml |
| 11 | LGB   | LightGBM | LGBMClassifier | lightgbm.sklearn | True | True | True | False | False | n_estimators | lightgbm |
| 12 | lSVM  | LinearSVM | LinearSVC | sklearn.svm._classes | False | True | True | False | False | None | sklearn, cuml |
| 13 | LR    | LogisticRegression | LogisticRegression | sklearn.linear_model._logistic | False | True | True | False | False | None | sklearn, sklearnex, cuml |
| 14 | MLP   | MultiLayerPerceptron | MLPClassifier | sklearn.neural_network._multilayer_perceptron | False | True | True | True | False | max_iter | sklearn |
| 15 | MNB   | MultinomialNB | MultinomialNB | sklearn.naive_bayes | False | False | True | False | False | None | sklearn, cuml |
| 16 | PA    | PassiveAggressive | PassiveAggressiveClassifier | sklearn.linear_model._passive_aggressive | False | True | True | False | False | max_iter | sklearn |
| 17 | RNN   | RadiusNearestNeighbors | RadiusNeighborsClassifier | sklearn.neighbors._classification | False | True | True | True | True | None | sklearn |
| 18 | RF    | RandomForest | RandomForestClassifier | sklearn.ensemble._forest | False | False | True | True | True | None | sklearn, sklearnex, cuml |
| 19 | Ridge | Ridge | RidgeClassifier | sklearn.linear_model._ridge | False | True | True | True | False | None | sklearn, sklearnex, cuml |
| 20 | SGD   | StochasticGradientDescent | SGDClassifier | sklearn.linear_model._stochastic_gradient | False | True | True | False | False | max_iter | sklearn |
| 21 | SVM   | SupportVectorMachine | SVC | sklearn.svm._classes | False | True | True | False | False | None | sklearn, sklearnex, cuml |
| 22 | XGB   | XGBoost | XGBClassifier | xgboost.sklearn | True | True | True | False | False | n_estimators | xgboost |
In [21]:
# Train the model
atom.run(models="RF", metric="f1_weighted")
Training ========================= >>
Models: RF
Metric: f1_weighted


Results for RandomForest:

Fit ---------------------------------------------
Train evaluation --> f1_weighted: 1.0
Test evaluation --> f1_weighted: 0.9181
Time elapsed: 03m:05s
-------------------------------------------------
Time: 03m:05s


Final results ==================== >>
Total time: 03m:05s
-------------------------------------
RandomForest --> f1_weighted: 0.9181
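For reference, this is roughly what `atom.run` does here when written out with scikit-learn directly (a sketch; `X_train`, `X_test`, `y_train` and `y_test` are hypothetical stand-ins for ATOM's internal train/test split, and ATOM additionally takes care of logging and result tracking):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

model = RandomForestClassifier(random_state=1)
model.fit(X_train, y_train)  # hypothetical train split
print(f1_score(y_test, model.predict(X_test), average="weighted"))
```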
Analyze the results
In [22]:
atom.evaluate()
Out[22]:
|    | ba | f1_weighted | jaccard_weighted | mcc | precision_weighted | recall_weighted |
|----|----|-------------|------------------|-----|--------------------|-----------------|
| RF | 0.918300 | 0.918100 | 0.848600 | 0.891800 | 0.920600 | 0.918200 |
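`evaluate()` computes a set of standard classification metrics on the test set. The same kind of numbers can be reproduced with `sklearn.metrics`; a sketch, where `y_true` and `y_pred` are hypothetical test labels and predictions:

```python
from sklearn.metrics import (
    balanced_accuracy_score,  # ba
    f1_score,                 # f1_weighted
    matthews_corrcoef,        # mcc
)

y_true = [0, 1, 2, 2, 3]  # hypothetical labels
y_pred = [0, 1, 2, 1, 3]  # hypothetical predictions
print(balanced_accuracy_score(y_true, y_pred))
print(f1_score(y_true, y_pred, average="weighted"))
print(matthews_corrcoef(y_true, y_pred))
```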
In [23]:
atom.plot_confusion_matrix(figsize=(700, 600))
In [24]:
atom.plot_shap_decision(rows=0, show=15)
In [25]:
atom.plot_shap_beeswarm(target=0, show=15)
100%|===================| 2824/2836 [02:27<00:00]