Natural Language Processing
This example shows how to use ATOM to quickly go from raw text data to model predictions.
Import the 20 newsgroups text dataset from sklearn.datasets. The dataset comprises around 18,000 newsgroup posts on 20 topics. The goal is to predict the topic of each post.
Load the data
In [2]:
import numpy as np
from atom import ATOMClassifier
from sklearn.datasets import fetch_20newsgroups
In [3]:
# Use only a subset of the available topics for faster processing
X_text, y_text = fetch_20newsgroups(
    return_X_y=True,
    categories=[
        'alt.atheism',
        'sci.med',
        'comp.windows.x',
        'misc.forsale',
        'rec.autos',
    ],
    shuffle=True,
    random_state=1,
)
X_text = np.array(X_text).reshape(-1, 1)
Run the pipeline
In [4]:
atom = ATOMClassifier(X_text, y_text, test_size=0.3, verbose=2, warnings=False)
<< ================== ATOM ================== >>
Algorithm task: multiclass classification.

Dataset stats ==================== >>
Shape: (2846, 2)
Scaled: False
Categorical features: 1 (100.0%)
-------------------------------------
Train set size: 1993
Test set size: 853
-------------------------------------
|    |     dataset |       train |        test |
| -- | ----------- | ----------- | ----------- |
| 0  |   480 (1.0) |   327 (1.0) |   153 (1.0) |
| 1  |   593 (1.2) |   408 (1.2) |   185 (1.2) |
| 2  |   585 (1.2) |   416 (1.3) |   169 (1.1) |
| 3  |   594 (1.2) |   426 (1.3) |   168 (1.1) |
| 4  |   594 (1.2) |   416 (1.3) |   178 (1.2) |
In [5]:
atom.dataset # Note that the feature is automatically named 'corpus'
Out[5]:
 | corpus | target |
---|---|---|
0 | From: yuri@physics.heriot-watt.ac.UK (Yuri Rzh... | 1 |
1 | From: tron@fafnir.la.locus.com (Michael Trofim... | 4 |
2 | From: suresh@iss.nus.sg (Suresh Thennarangam -... | 1 |
3 | From: jp@vllyoak.resun.com (Jeff Perry)\nSubje... | 3 |
4 | From: gld@cunixb.cc.columbia.edu (Gary L Dare)... | 2 |
... | ... | ... |
2841 | From: klute@tommy.informatik.uni-dortmund.de (... | 1 |
2842 | From: panvalka@cs.unc.edu (Anay Panvalkar)\nSu... | 1 |
2843 | From: gse9k@uvacs.cs.Virginia.EDU (Scott Evans... | 1 |
2844 | From: uschelp3@idbsu.edu (Mike Madson)\nSubjec... | 2 |
2845 | From: dkfox@uxa.cso.uiuc.edu (fox darin k)\nSu... | 2 |
2846 rows × 2 columns
In [6]:
# Let's have a look at the first document
atom.corpus[0]
Out[6]:
"From: yuri@physics.heriot-watt.ac.UK (Yuri Rzhanov)\nSubject: REPOST: XView slider\nOrganization: The Internet\nLines: 37\nNNTP-Posting-Host: enterpoop.mit.edu\nTo: xpert <xpert@expo.lcs.mit.edu>\n\nHi Xperts,\n\nthis is a repost (no one responded to my desperate yell 8-(\nI can't believe there is no XView wizards any more 8-)...\n\nI'm using sliders in my XView apps, usually with editable numeric\nfield. But I seem to have no control over the length of this field.\nIn some apps it appears long enough to keep several characters,\nin some - it cannot keep even the maximum value set by \nPANEL_MAX_VALUE! \n\nAs I understand, PANEL_VALUE_DISPLAY_LENGTH, which controls\nnumber of characters to be displayed in text items, doesn't\nwork in the case of slider, despite the fact that <panel.h>\ncontains the following bit:\n\n\t/* Panel_multiline_text_item, Panel_numeric_text_item,\n\t * Panel_slider_item and Panel_text_item attributes\n\t */\n\tPANEL_NOTIFY_LEVEL\t= PANEL_ATTR(ATTR_ENUM,\t\t\t 152),\n\tPANEL_VALUE_DISPLAY_LENGTH\t= PANEL_ATTR(ATTR_INT,\t\t 182),\n\nwhich gives a hint that this attribute can be used for sliders.\nBut 1) setting this attribute gives nothing, and 2) xv_get'ting\nthis attribute gives warning: Bad attribute, and returns value 0.\n\nStrange thing is that DEC's port of XView gives plenty of space\nin a text fields, but not Sun's Xview...\n\nCan someone share his experience in managing sliders in XView with me,\nand clear this problem? \n\nAny help is very much appreciated.\n\nYuri\n\nyuri@uk.ac.hw.phy\n"
In [7]:
# Clean the documents from noise (emails, numbers, etc...)
atom.textclean()
Filtering the corpus...
 --> Decoding unicode characters to ascii.
 --> Converting text to lower case.
 --> Dropping 10012 emails from 2830 documents.
 --> Dropping 0 URL links from 0 documents.
 --> Dropping 2214 HTML tags from 1304 documents.
 --> Dropping 2 emojis from 1 documents.
 --> Dropping 31222 numbers from 2843 documents.
 --> Dropping punctuation from the text.
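The filtering steps logged above can be approximated with plain regular expressions. This is a simplified sketch, not ATOM's actual implementation; the patterns and their order are assumptions:

```python
import re

def clean_text(doc: str) -> str:
    """Rough approximation of the cleaning steps in atom.textclean()."""
    doc = doc.lower()                        # convert to lower case
    doc = re.sub(r"\S+@\S+", " ", doc)       # drop email addresses
    doc = re.sub(r"https?://\S+", " ", doc)  # drop URL links
    doc = re.sub(r"<[^>]*>", " ", doc)       # drop HTML tags
    doc = re.sub(r"\d+", " ", doc)           # drop numbers
    doc = re.sub(r"[^\w\s]", "", doc)        # drop punctuation
    return doc

print(clean_text("Contact yuri@example.com, see <b>line 37</b>!"))
```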
In [8]:
# Have a look at the removed items
atom.drops
Out[8]:
 | email | url | html | emoji | number |
---|---|---|---|---|---|
0 | [yuri@physics.heriot-watt.ac.uk, xpert@expo.lc... | NaN | [<>, <panel.h>] | NaN | [37, 8, 8, 152, 182, 1, 2, 0] |
1 | [tron@fafnir.la.locus.com, tron@locus.com] | NaN | [< tron >] | NaN | [12] |
2 | [suresh@iss.nus.sg, pyeatt@texaco.com, 9304191... | NaN | [<>] | NaN | [1, 1, 47, 6000, 065, 772, 2588, 065, 778, 257... |
3 | [jp@vllyoak.resun.com, aas7@po.cwru.edu, dspal... | NaN | NaN | NaN | [35, 93] |
4 | [gld@cunixb.cc.columbia.edu, gld@cunixb.cc.col... | NaN | NaN | NaN | [70, 23, 70, 2, 70] |
... | ... | ... | ... | ... | ... |
1297 | NaN | NaN | NaN | NaN | [15, 1, 1, 1097, 08836, 908, 563, 9033, 908, 5... |
1595 | NaN | NaN | NaN | NaN | [13, 93, 212, 274, 0646, 1097, 08836, 908, 563... |
1768 | NaN | NaN | NaN | NaN | [223, 250, 10, 8, 8, 2002, 1600] |
1984 | NaN | NaN | NaN | NaN | [15, 1, 1] |
2671 | NaN | NaN | NaN | NaN | [27, 15, 27, 225, 250, 412, 624, 6115, 371, 0154] |
2846 rows × 5 columns
In [9]:
# Check how the first document changed
atom.corpus[0]
Out[9]:
'from yuri rzhanov\nsubject repost xview slider\norganization the internet\nlines \nnntppostinghost enterpoopmitedu\nto xpert \n\nhi xperts\n\nthis is a repost no one responded to my desperate yell \ni cant believe there is no xview wizards any more \n\nim using sliders in my xview apps usually with editable numeric\nfield but i seem to have no control over the length of this field\nin some apps it appears long enough to keep several characters\nin some it cannot keep even the maximum value set by \npanelmaxvalue \n\nas i understand panelvaluedisplaylength which controls\nnumber of characters to be displayed in text items doesnt\nwork in the case of slider despite the fact that \ncontains the following bit\n\n\t panelmultilinetextitem panelnumerictextitem\n\t panelslideritem and paneltextitem attributes\n\t \n\tpanelnotifylevel\t panelattrattrenum\t\t\t \n\tpanelvaluedisplaylength\t panelattrattrint\t\t \n\nwhich gives a hint that this attribute can be used for sliders\nbut setting this attribute gives nothing and xvgetting\nthis attribute gives warning bad attribute and returns value \n\nstrange thing is that decs port of xview gives plenty of space\nin a text fields but not suns xview\n\ncan someone share his experience in managing sliders in xview with me\nand clear this problem \n\nany help is very much appreciated\n\nyuri\n\n\n'
In [10]:
# Convert the strings to a sequence of words
atom.tokenize()
Tokenizing the corpus...
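Tokenization amounts to splitting each cleaned, lower-cased document into a list of word tokens. A toy regex-based stand-in (ATOM's actual tokenizer may differ):

```python
import re

def tokenize(doc):
    """Split a cleaned, lower-cased document into word tokens."""
    return re.findall(r"[a-z]+", doc)

tokenize("from yuri rzhanov subject repost xview slider")
```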
In [11]:
# Print the first few words of the first document
atom.corpus[0][:7]
Out[11]:
['from', 'yuri', 'rzhanov', 'subject', 'repost', 'xview', 'slider']
In [12]:
# Normalize the text to a predefined standard
atom.normalize(stopwords="english", lemmatize=True)
Normalizing the corpus...
 --> Dropping stopwords.
 --> Applying lemmatization.
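Stopword removal itself is a simple filter over the token list. A toy version with a hand-picked stopword set (stopwords="english" corresponds to a standard English list such as nltk's, which has roughly 180 words; lemmatization, which maps e.g. "fields" to "field", needs a lemmatizer and is omitted here):

```python
# Toy stopword set for illustration only.
STOPWORDS = {"from", "the", "is", "a", "to", "of", "and", "in", "no"}

def drop_stopwords(tokens):
    """Keep only tokens that are not in the stopword set."""
    return [t for t in tokens if t not in STOPWORDS]

drop_stopwords(["from", "yuri", "rzhanov", "subject"])
```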
In [13]:
atom.corpus[0][:7] # Check changes...
Out[13]:
['yuri', 'rzhanov', 'subject', 'repost', 'xview', 'slider', 'organization']
In [14]:
# Visualize the most common words with a wordcloud
atom.plot_wordcloud()
In [15]:
# Have a look at the most frequent bigrams
atom.plot_ngrams(2)
In [16]:
# Create the bigrams using the tokenizer
atom.tokenize(bigram_freq=215)
Tokenizing the corpus...
 --> Creating 10 bigrams on 4178 locations.
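Counting how often adjacent word pairs occur, which is what the bigram_freq threshold operates on, can be sketched with collections.Counter. This is a simplified stand-in for ATOM's logic:

```python
from collections import Counter

def bigram_counts(docs):
    """Count adjacent word pairs across a list of tokenized documents."""
    counts = Counter()
    for tokens in docs:
        counts.update(zip(tokens, tokens[1:]))  # pairs of neighboring tokens
    return counts

docs = [
    ["line", "article", "line", "article"],
    ["distribution", "world", "line", "article"],
]
bigram_counts(docs).most_common(1)
```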
In [17]:
atom.bigrams
Out[17]:
 | bigram | frequency |
---|---|---|
9 | (x, x) | 1169 |
3 | (line, article) | 714 |
0 | (line, nntppostinghost) | 493 |
6 | (organization, university) | 367 |
5 | (gordon, bank) | 266 |
7 | (line, distribution) | 258 |
8 | (distribution, world) | 249 |
1 | (distribution, usa) | 229 |
2 | (usa, line) | 217 |
4 | (computer, science) | 216 |
In [18]:
# As a last step before modelling, convert the words to vectors
atom.vectorize(strategy="tf-idf")
Vectorizing the corpus...
In [19]:
# The dimensionality of the dataset has increased a lot!
atom.shape
Out[19]:
(2846, 27955)
In [20]:
# Note that the data is sparse and the columns are named
# after the words they are embedding
atom.dtypes
Out[20]:
00          Sparse[float64, 0]
000         Sparse[float64, 0]
0000am      Sparse[float64, 0]
000cc       Sparse[float64, 0]
000miles    Sparse[float64, 0]
                   ...
zvi         Sparse[float64, 0]
zvonko      Sparse[float64, 0]
zx          Sparse[float64, 0]
zzzs        Sparse[float64, 0]
target_y    int64
Length: 27955, dtype: object
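Sparse columns store only the entries that differ from the fill value (here 0), which is what makes a frame with almost 28,000 columns feasible. A small pandas illustration on unrelated toy data:

```python
import numpy as np
import pandas as pd

dense = pd.Series(np.zeros(10_000))
sparse = dense.astype(pd.SparseDtype("float64", 0))

# The sparse series records only the non-zero values (none here),
# so it needs far less memory than its dense counterpart.
print(dense.memory_usage(), sparse.memory_usage())
```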
In [21]:
# When the dataset is sparse, stats() shows the sparsity
atom.stats()
Dataset stats ==================== >>
Shape: (2846, 27955)
Sparse: True
Density: 0.32%
-------------------------------------
Train set size: 1993
Test set size: 853
-------------------------------------
|    |     dataset |       train |        test |
| -- | ----------- | ----------- | ----------- |
| 0  |   480 (1.0) |   327 (1.0) |   153 (1.0) |
| 1  |   593 (1.2) |   408 (1.2) |   185 (1.2) |
| 2  |   585 (1.2) |   416 (1.3) |   169 (1.1) |
| 3  |   594 (1.2) |   426 (1.3) |   168 (1.1) |
| 4  |   594 (1.2) |   416 (1.3) |   178 (1.2) |
In [22]:
# Check which models have support for sparse matrices
atom.available_models()[["acronym", "fullname", "accepts_sparse"]]
Out[22]:
 | acronym | fullname | accepts_sparse |
---|---|---|---|
0 | Dummy | Dummy Estimator | False |
1 | GP | Gaussian Process | False |
2 | GNB | Gaussian Naive Bayes | False |
3 | MNB | Multinomial Naive Bayes | True |
4 | BNB | Bernoulli Naive Bayes | True |
5 | CatNB | Categorical Naive Bayes | True |
6 | CNB | Complement Naive Bayes | True |
7 | Ridge | Ridge Estimator | True |
8 | Perc | Perceptron | False |
9 | LR | Logistic Regression | True |
10 | LDA | Linear Discriminant Analysis | False |
11 | QDA | Quadratic Discriminant Analysis | False |
12 | KNN | K-Nearest Neighbors | True |
13 | RNN | Radius Nearest Neighbors | True |
14 | Tree | Decision Tree | True |
15 | Bag | Bagging | True |
16 | ET | Extra-Trees | True |
17 | RF | Random Forest | True |
18 | AdaB | AdaBoost | True |
19 | GBM | Gradient Boosting Machine | True |
20 | hGBM | HistGBM | False |
21 | XGB | XGBoost | True |
22 | LGB | LightGBM | True |
23 | CatB | CatBoost | True |
24 | lSVM | Linear-SVM | True |
25 | kSVM | Kernel-SVM | True |
26 | PA | Passive Aggressive | True |
27 | SGD | Stochastic Gradient Descent | True |
28 | MLP | Multi-layer Perceptron | True |
In [23]:
# Train the model
atom.run(models="MLP", metric="f1_weighted")
Training ========================= >>
Models: MLP
Metric: f1_weighted

Results for Multi-layer Perceptron:
Fit ---------------------------------------------
Train evaluation --> f1_weighted: 1.0
Test evaluation --> f1_weighted: 0.9661
Time elapsed: 1m:40s
-------------------------------------------------
Total time: 1m:40s

Final results ==================== >>
Duration: 1m:40s
-------------------------------------
Multi-layer Perceptron --> f1_weighted: 0.9661
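ATOM's MLP model is backed by a scikit-learn-style multi-layer perceptron scored with weighted F1. The equivalent bare scikit-learn workflow, shown here on tiny synthetic data rather than the vectorized corpus:

```python
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

# Tiny synthetic stand-in for the vectorized corpus.
X = [[0.0, 1.0], [1.0, 0.0], [0.1, 0.9], [0.9, 0.1]]
y = [0, 1, 0, 1]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=1)
clf.fit(X, y)
pred = clf.predict(X)
print(f1_score(y, pred, average="weighted"))
```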
Analyze results
In [24]:
atom.evaluate()
Out[24]:
 | balanced_accuracy | f1_weighted | jaccard_weighted | matthews_corrcoef | precision_weighted | recall_weighted |
---|---|---|---|---|---|---|
MLP | 0.96629 | 0.966059 | 0.934813 | 0.957464 | 0.966144 | 0.966002 |
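The f1_weighted metric used throughout averages the per-class F1 scores weighted by each class's support, so larger topics contribute more to the final number. A small worked example:

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1, 1, 2]
y_pred = [0, 1, 1, 1, 1, 2]

# Per-class F1: class 0 -> 2/3, class 1 -> 6/7, class 2 -> 1.
# Weighted by the supports (2, 3, 1): (2*(2/3) + 3*(6/7) + 1*1) / 6
print(f1_score(y_true, y_pred, average="weighted"))
```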
In [25]:
atom.plot_confusion_matrix(figsize=(10, 10))