Natural Language Toolkit Tutorial
Audience
This tutorial will be useful for graduates, post-graduates, and research students who either
have an interest in NLP or have this subject as a part of their curriculum. The reader can
be a beginner or an advanced learner.
Prerequisites
The reader must have basic knowledge about artificial intelligence. He/she should also be
aware of basic terminologies used in English grammar and Python programming concepts.
All the content and graphics published in this e-book are the property of Tutorials Point (I) Pvt. Ltd. The user of this e-book is prohibited from reusing, retaining, copying, distributing or republishing any contents or a part of contents of this e-book in any manner without the written consent of the publisher.
We strive to update the contents of our website and tutorials as timely and as precisely as possible; however, the contents may contain inaccuracies or errors. Tutorials Point (I) Pvt. Ltd. provides no guarantee regarding the accuracy, timeliness or completeness of our website or its contents including this tutorial. If you discover any errors on our website or in this tutorial, please notify us at [email protected]
1. NLTK — Introduction
Natural Language Processing (NLP) is that subfield of computer science, more specifically of AI, which enables computers/machines to understand, process and manipulate human language. In simple words, NLP is a way for machines to analyze, understand and derive meaning from human natural languages like Hindi, English, French, Dutch, etc.
How do humans know what word means what? The answer to this question is that we learn through our experience. But how do machines/computers learn the same?
First, we need to feed the machines with enough data so that they can learn from experience.
Then, the machine will create word vectors, by using deep learning algorithms, from the data we fed earlier as well as from the surrounding data.
Components of NLP
Following diagram represents the components of natural language processing (NLP):
[Figure: NLP pipeline: Input sentence → Morphological Processing (Lexicon) → Syntax Analysis (Grammar) → Semantic Analysis (Semantic rules) → Pragmatic Analysis (Contextual information) → Target representation]
Morphological Processing
Morphological processing is the first component of NLP. It involves breaking chunks of language input into sets of tokens corresponding to paragraphs, sentences and words. For example, a word like "everyday" can be broken into two sub-word tokens as "every-day".
Syntax analysis
Syntax Analysis, the second component, is one of the most important components of NLP.
The purposes of this component are as follows:
To break the sentence up into a structure that shows the syntactic relationships between the different words.
For example, a sentence like "The school goes to the student" would be rejected by the syntax analyzer.
Semantic analysis
Semantic Analysis is the third component of NLP and is used to check the meaningfulness of the text. It includes drawing the exact meaning, or we can say the dictionary meaning, from the text. For example, a sentence like "It's a hot ice-cream." would be discarded by the semantic analyzer.
Pragmatic analysis
Pragmatic analysis is the fourth component of NLP. It includes fitting the actual objects or events that exist in a given context with the object references obtained by the previous component, i.e. semantic analysis. For example, a sentence like "Put the fruits in the basket on the table" can have two semantic interpretations, hence the pragmatic analyzer will choose between these two possibilities.
Machine Translation
Machine translation (MT) is one of the most important applications of natural language processing. MT is basically the process of translating one source language or text into another language. A machine translation system can be either bilingual or multilingual.
Fighting Spam
Due to the enormous increase in unwanted emails, spam filters have become important because they are the first line of defense against this problem. By considering their false-positive and false-negative issues as the main issues, the functionality of NLP can be used to develop spam filtering systems.
N-gram modelling, Word Stemming and Bayesian classification are some of the existing
NLP models that can be used for spam filtering.
Grammar Correction
Spelling correction & grammar correction is a very useful feature of word processor
software like Microsoft Word. Natural language processing (NLP) is widely used for this
purpose.
Question-answering
Question-answering, another main application of natural language processing (NLP),
focuses on building systems which automatically answer the question posted by user in
their natural language.
Sentiment analysis
Sentiment analysis is another important application of natural language processing (NLP). As its name implies, sentiment analysis is used to identify the sentiments and opinions expressed in text.
Online e-commerce companies like Amazon, eBay, etc., are using sentiment analysis to identify the opinion and sentiment of their customers online. It helps them to understand what their customers think about their products and services.
Speech engines
Speech engines like Siri, Google Voice, Alexa are built on NLP so that we can communicate
with them in our natural language.
Implementing NLP
In order to build the above-mentioned applications, we need to have a specific skill set with a great understanding of language, and tools to process the language efficiently. To achieve this, we have various tools available. Some of them are open-source while others are developed by organizations to build their own NLP applications. Following is a list of some NLP tools:
Stanford toolkit
2. NLTK ― Getting Started
In order to install NLTK, we must have Python installed on our computers. You can go to
the link https://www.python.org/downloads/ and select the latest version for your OS i.e.
Windows, Mac and Linux/Unix. For basic tutorial on Python you can refer to the link
https://www.tutorialspoint.com/python3/index.htm.
Now, once you have Python installed on your computer system, let us understand how we
can install NLTK.
Installing NLTK
We can install NLTK on various OS as follows:
On Windows
In order to install NLTK on Windows OS, follow the below steps:
First, open the Windows command prompt and navigate to the location of the pip
folder.
Next, enter the following command to install NLTK:
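pip install nltk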
Now, open the Python Shell from the Windows Start Menu and type the following command in order to verify NLTK's installation:
import nltk
If you get no error, you have successfully installed NLTK on your Windows OS having
Python3.
On Mac/Linux
In order to install NLTK on Mac/Linux OS, write the following command:
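One common way, assuming pip is available for your Python installation, is:
sudo pip install -U nltk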
If you don’t have pip installed on your computer, then follow the instruction given below
to first install pip:
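For example, on Debian/Ubuntu-based systems (this exact command is an assumption about your distribution), pip for Python 3 can usually be installed with:
sudo apt-get install python3-pip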
Through Anaconda
In order to install NLTK through Anaconda, follow the below steps:
Once you have Anaconda on your computer system, go to its command prompt and write
the following command:
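conda install -c anaconda nltk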
You need to review the output and enter ‘yes’. NLTK will be downloaded and installed in
your Anaconda package.
With the help of following commands, we can download all the NLTK datasets:
import nltk
nltk.download()
import nltk
Now, import the PorterStemmer class to implement the Porter Stemmer algorithm:
from nltk.stem import PorterStemmer
word_stemmer = PorterStemmer()
word_stemmer.stem('writing')
Output
'write'
word_stemmer.stem('eating')
Output
'eat'
3. NLTK — Tokenizing Text
What is Tokenizing?
It may be defined as the process of breaking up a piece of text into smaller parts, such as
sentences and words. These smaller parts are called tokens. For example, a word is a
token in a sentence, and a sentence is a token in a paragraph.
As we know, NLP is used to build applications such as sentiment analysis, QA systems, language translation, smart chatbots, voice systems, etc. Hence, in order to build them, it becomes vital to understand the patterns in the text. The tokens, mentioned above, are very useful in finding and understanding these patterns. We can consider tokenization as the base step for other recipes such as stemming and lemmatization.
NLTK package
nltk.tokenize is the package provided by NLTK module to achieve the process of
tokenization.
word_tokenize module
word_tokenize module is used for basic word tokenization. Following example will use
this module to split a sentence into words.
Example
import nltk
from nltk.tokenize import word_tokenize
word_tokenize('Tutorialspoint.com provides high quality technical tutorials for free.')
Output
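['Tutorialspoint.com', 'provides', 'high', 'quality', 'technical', 'tutorials', 'for', 'free', '.']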
TreebankWordTokenizer Class
The word_tokenize module used above is basically a wrapper function that calls the tokenize() method of an instance of the TreebankWordTokenizer class. It will give the same output as we get while using the word_tokenize() function for splitting sentences into words. Let us see the same example implemented above:
Example
First, we need to import the natural language toolkit (nltk):
import nltk
Now, import the TreebankWordTokenizer class to implement the word tokenizer algorithm:
from nltk.tokenize import TreebankWordTokenizer
Next, create an instance of the TreebankWordTokenizer class and call its tokenize() method:
tokenizer_wrd = TreebankWordTokenizer()
tokenizer_wrd.tokenize('Tutorialspoint.com provides high quality technical tutorials for free.')
Output
['Tutorialspoint.com', 'provides', 'high', 'quality', 'technical', 'tutorials', 'for', 'free', '.']
Example
import nltk
from nltk.tokenize import word_tokenize
word_tokenize('won’t')
Output
['wo', "n't"]
WordPunctTokenizer Class
This is an alternative word tokenizer that splits all punctuation into separate tokens. Let us understand it with the following simple example:
Example
from nltk.tokenize import WordPunctTokenizer
tokenizer = WordPunctTokenizer()
tokenizer.tokenize(" I can't allow you to go home early")
Output
['I', 'can', "'", 't', 'allow', 'you', 'to', 'go', 'home', 'early']
Why is it needed?
An obvious question that comes to our mind is: when we have a word tokenizer, then why do we need a sentence tokenizer, i.e. why do we need to tokenize text into sentences? Suppose we need to count the average number of words per sentence; how can we do this? For accomplishing this task, we need both sentence tokenization and word tokenization.
Let us understand the difference between sentence and word tokenizer with the help of
following simple example:
Example
import nltk
from nltk.tokenize import sent_tokenize
text = "Let us understand the difference between sentence & word tokenizer. It
is going to be a simple example."
sent_tokenize(text)
Output
["Let us understand the difference between sentence & word tokenizer.", 'It is
going to be a simple example.']
Let us understand the concept with the help of the two examples below.
In the first example, we will be using a regular expression to match alphanumeric tokens plus single quotes so that we don't split contractions like "won't".
Example 1
import nltk
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer("[\w']+")
tokenizer.tokenize("won't is a contraction.")
tokenizer.tokenize("can't is a contraction.")
Output
["won't", 'is', 'a', 'contraction']
["can't", 'is', 'a', 'contraction']
Example 2
import nltk
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer('\s+', gaps = True)
tokenizer.tokenize("won't is a contraction.")
Output
["won't", 'is', 'a', 'contraction.']
From the above output, we can see that the punctuation remains in the tokens. The parameter gaps = True means the pattern is going to identify the gaps to tokenize on. On the other hand, if we use the gaps = False parameter, then the pattern will be used to identify the tokens, which can be seen in the following example:
import nltk
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer('\s+', gaps = False)
tokenizer.tokenize("won't is a contraction.")
Output
[' ', ' ', ' ']
4. NLTK — Training Tokenizer & Filtering Stopwords
Implementation Example
For this example, we will be using the webtext corpus. The text file which we are going to use from this corpus has its text formatted as dialogues, as shown below:
We have saved this text file with the name training_tokenizer. NLTK provides a class named PunktSentenceTokenizer with the help of which we can train on raw text to produce a custom sentence tokenizer. We can get raw text either by reading in a file or from an NLTK corpus using the raw() method.
Let us see the example below to get more insight into it.
First, import the PunktSentenceTokenizer class and the webtext corpus:
from nltk.tokenize import PunktSentenceTokenizer
from nltk.corpus import webtext
Next, by using the raw() method, get the raw text from the training_tokenizer.txt file as follows:
text = webtext.raw('C://Users/Leekha/training_tokenizer.txt')
sent_tokenizer = PunktSentenceTokenizer(text)
sents_1 = sent_tokenizer.tokenize(text)
print(sents_1[0])
Output
White guy: So, do you have any plans for this evening?
print(sents_1[1])
Output:
Asian girl: Yeah, being angry!
print(sents_1[670])
Output:
Guy: A hundred bucks?
print(sents_1[675])
Output:
Girl: But you already have a Big Mac...
To understand the difference between NLTK's default sentence tokenizer and our own trained sentence tokenizer, let us tokenize the same file with the default sentence tokenizer, i.e. sent_tokenize():
from nltk.tokenize import sent_tokenize
sents_2 = sent_tokenize(text)
print(sents_2[0])
Output:
White guy: So, do you have any plans for this evening?
print(sents_2[675])
Output:
Hobo: Y'know what I'd do if I was rich?
With the help of the difference in the output, we can understand why it is useful to train our own sentence tokenizer.
Filtering Stopwords
NLTK provides a stopwords corpus containing lists of common words, such as 'the' and 'a', for many languages. Let us remove English stopwords from a list of words:
from nltk.corpus import stopwords
english_stops = set(stopwords.words('english'))
words = ['I', 'am', 'a', 'writer']
[word for word in words if word not in english_stops]
Output
['I', 'writer']
5. NLTK ― Looking up words in Wordnet
What is Wordnet?
Wordnet is a large lexical database of English, which was created by Princeton. It is a part of the NLTK corpus. Nouns, verbs, adjectives and adverbs are all grouped into sets of cognitive synonyms called synsets. Each synset expresses a distinct meaning. Wordnet can be used to look up the definition of a word as well as its synonyms and antonyms.
Synset instances
Synsets are groupings of synonymous words that express the same concept. When you use Wordnet to look up words, you will get a list of Synset instances.
wordnet.synsets(word)
To get a list of Synsets, we can look up any word in Wordnet by using
wordnet.synsets(word). For example, in next Python recipe, we are going to look up
the Synset for the ‘dog’ along with some properties and methods of Synset:
Example
First, import WordNet from the NLTK corpus:
from nltk.corpus import wordnet as wn
Now, provide the word you want to look up the Synset for:
syn = wn.synsets('dog')[0]
Here, we are using name() method to get the unique name for the synset which can be
used to get the Synset directly:
syn.name()
Output:
'dog.n.01'
Next, we are using definition() method which will give us the definition of the word:
syn.definition()
Output:
'a member of the genus Canis (probably descended from the common wolf) that has
been domesticated by man since prehistoric times; occurs in many breeds'
Another method is examples(), which will give us the usage examples related to the word:
syn.examples()
Output:
['the dog barked all night']
Getting Hypernyms
Synsets are organized in an inheritance-tree-like structure in which Hypernyms represent more abstract terms while Hyponyms represent more specific terms. One of the important things is that this tree can be traced all the way up to a root hypernym. Let us understand the concept with the help of the following example:
syn.hypernyms()
Output
[Synset('canine.n.02'), Synset('domestic_animal.n.01')]
Here, we can see that canine and domestic_animal are the hypernyms of ‘dog’.
syn.hypernyms()[0].hyponyms()
Output
[Synset('bitch.n.04'),
Synset('dog.n.01'),
Synset('fox.n.01'),
Synset('hyena.n.01'),
Synset('jackal.n.01'),
Synset('wild_dog.n.01'),
Synset('wolf.n.01')]
From the above output, we can see that ‘dog’ is only one of the many hyponyms of
‘domestic_animals’.
To find the root of all these, we can use the following command:
syn.root_hypernyms()
Output
[Synset('entity.n.01')]
From the above output, we can see it has only one root.
Lemmas in Wordnet
In linguistics, the canonical form or morphological form of a word is called a lemma. To
find a synonym as well as antonym of a word, we can also lookup lemmas in WordNet. Let
us see how.
Finding Synonyms
By using the lemmas() method, we can get the lemmas (synonyms) of a Synset. Let us apply this method on the 'dog' synset:
Example
lemmas = wn.synset('dog.n.01').lemmas()
lemmas[0].name()
Output:
'dog'
lemmas[1].name()
Output:
'domestic_dog'
lemmas[2].name()
Output:
'Canis_familiaris'
Actually, a Synset represents a group of lemmas that all have similar meaning while a
lemma represents a distinct word form.
Finding Antonyms
In WordNet, some lemmas also have antonyms. For example, the word 'good' has a total of 27 synsets, among which 5 have lemmas with antonyms. Let us find the antonyms (for when the word 'good' is used as a noun and when it is used as an adjective).
Example 1
from nltk.corpus import wordnet as wn
syn1 = wn.synset('good.n.02')
antonym1 = syn1.lemmas()[0].antonyms()[0]
antonym1.name()
Output
'evil'
antonym1.synset().definition()
Output
The above example shows that the word 'good', when used as a noun, has 'evil' as its first antonym.
Example 2
from nltk.corpus import wordnet as wn
syn2 = wn.synset('good.a.01')
antonym2 = syn2.lemmas()[0].antonyms()[0]
antonym2.name()
Output
'bad'
antonym2.synset().definition()
Output
The above example shows that the word 'good', when used as an adjective, has 'bad' as its first antonym.
6. NLTK ― Stemming & Lemmatization
What is Stemming?
Stemming is a technique used to extract the base form of the words by removing affixes
from them. It is just like cutting down the branches of a tree to its stems. For example,
the stem of the words eating, eats, eaten is eat.
Search engines use stemming for indexing the words. That’s why rather than storing all
forms of a word, a search engine can store only the stems. In this way, stemming reduces
the size of the index and increases retrieval accuracy.
PorterStemmer class
NLTK has PorterStemmer class with the help of which we can easily implement Porter
Stemmer algorithms for the word we want to stem. This class knows several regular word
forms and suffixes with the help of which it can transform the input word to a final stem.
The resulting stem is often a shorter word having the same root meaning. Let us see an
example:
import nltk
Now, import the PorterStemmer class to implement the Porter Stemmer algorithm:
from nltk.stem import PorterStemmer
word_stemmer = PorterStemmer()
word_stemmer.stem('writing')
Output
'write'
word_stemmer.stem('eating')
Output
'eat'
LancasterStemmer class
NLTK has LancasterStemmer class with the help of which we can easily implement
Lancaster Stemmer algorithms for the word we want to stem. Let us see an example:
import nltk
from nltk.stem import LancasterStemmer
Lanc_stemmer = LancasterStemmer()
Lanc_stemmer.stem('eats')
Output
'eat'
RegexpStemmer class
NLTK has RegexpStemmer class with the help of which we can easily implement Regular
Expression Stemmer algorithms. It basically takes a single regular expression and removes
any prefix or suffix that matches the expression. Let us see an example:
import nltk
Now, import the RegexpStemmer class to implement the Regular Expression Stemmer algorithm:
from nltk.stem import RegexpStemmer
Next, create an instance of the RegexpStemmer class and provide the suffix or prefix you want to remove from the word as follows:
Reg_stemmer = RegexpStemmer('ing')
Reg_stemmer.stem('eating')
Output
'eat'
Reg_stemmer.stem('ingeat')
Output
'eat'
Reg_stemmer.stem('eats')
Output
'eat'
SnowballStemmer class
NLTK has a SnowballStemmer class with the help of which we can easily implement the Snowball Stemmer algorithm. It supports 15 non-English languages. In order to use this stemming class, we need to create an instance with the name of the language we are using and then call the stem() method. Let us see an example:
import nltk
from nltk.stem import SnowballStemmer
SnowballStemmer.languages
Output
('arabic',
'danish',
'dutch',
'english',
'finnish',
'french',
'german',
'hungarian',
'italian',
'norwegian',
'porter',
'portuguese',
'romanian',
'russian',
'spanish',
'swedish')
Next, create an instance of SnowballStemmer class with the language you want to use.
Here, we are creating the stemmer for ‘French’ language.
French_stemmer = SnowballStemmer('french')
Now, call the stem() method and input the word you want to stem:
French_stemmer.stem('Bonjoura')
Output
'bonjour'
What is Lemmatization?
Lemmatization is a technique like stemming. The output we will get after lemmatization is called a 'lemma', which is a root word rather than a root stem, the output of stemming. After lemmatization, we will be getting a valid word that means the same thing.
NLTK provides the WordNetLemmatizer class, which is a thin wrapper around the wordnet corpus. This class uses the morphy() function of the WordNet CorpusReader class to find a lemma. Let us understand it with an example:
Example
import nltk
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
Now, call the lemmatize() method and input the word of which you want to find lemma.
lemmatizer.lemmatize('eating')
Output
'eating'
lemmatizer.lemmatize('books')
Output
'book'
import nltk
from nltk.stem import PorterStemmer
word_stemmer = PorterStemmer()
word_stemmer.stem('believes')
Output
believ
import nltk
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
lemmatizer.lemmatize('believes')
Output
belief
The output of both programs shows the major difference between stemming and lemmatization. The PorterStemmer class chops off the 'es' from the word. On the other hand, the WordNetLemmatizer class finds a valid word. In simple words, the stemming technique only looks at the form of the word, whereas the lemmatization technique looks at the meaning of the word. It means that after applying lemmatization, we will always get a valid word.
7. NLTK ― Word Replacement
But why do we need word replacement? Suppose we talk about tokenization: it has issues with contractions (like can't, won't, etc.). So, to handle such issues, we need word replacement. For example, we can replace contractions with their expanded forms.
Example
import re
from nltk.corpus import wordnet
R_patterns = [
   (r'won\'t', 'will not'),
   (r'can\'t', 'cannot'),
   (r'i\'m', 'i am'),
   (r'(\w+)\'ll', '\g<1> will'),
   (r'(\w+)n\'t', '\g<1> not'),
   (r'(\w+)\'ve', '\g<1> have'),
   (r'(\w+)\'s', '\g<1> is'),
   (r'(\w+)\'re', '\g<1> are'),
]
class REReplacer(object):
   def __init__(self, patterns=R_patterns):
      self.patterns = [(re.compile(regex), repl) for (regex, repl) in patterns]
   def replace(self, text):
      s = text
      for (pattern, repl) in self.patterns:
         s = re.sub(pattern, repl, s)
      return s
Save this Python program (say repRE.py) and run it from the Python command prompt. After running it, import the REReplacer class when you want to replace words. Let us see how.
rep_word = REReplacer()
rep_word.replace("I won't do it")
Output:
'I will not do it'
rep_word.replace("I can't do it")
Output:
'I cannot do it'
Example
import nltk
from nltk.tokenize import word_tokenize
rep_word = REReplacer()
word_tokenize("I won't be able to do this now")
Output:
['I', 'wo', "n't", 'be', 'able', 'to', 'do', 'this', 'now']
word_tokenize(rep_word.replace("I won't be able to do this now"))
Output:
['I', 'will', 'not', 'be', 'able', 'to', 'do', 'this', 'now']
In the above Python recipe, we can easily understand the difference between the output
of word tokenizer without and with using regular expression replace.
Example
import re
from nltk.corpus import wordnet
Now, create a class that can be used for removing repeating characters from words:
class Rep_word_removal(object):
   def __init__(self):
      self.repeat_regexp = re.compile(r'(\w*)(\w)\2(\w*)')
      self.repl = r'\1\2\3'
   def replace(self, word):
      if wordnet.synsets(word):
         return word
      repl_word = self.repeat_regexp.sub(self.repl, word)
      if repl_word != word:
         return self.replace(repl_word)
      else:
         return repl_word
Save this python program (say removalrepeat.py) and run it from python command
prompt. After running it, import Rep_word_removal class when you want to remove the
repeating words. Let us see how?
rep_word = Rep_word_removal()
rep_word.replace ("Hiiiiiiiiiiiiiiiiiiiii")
Output:
'Hi'
rep_word.replace("Hellooooooooooooooo")
Output:
'Hello'
8. NLTK ― Synonym & Antonym Replacement
Example
import re
from nltk.corpus import wordnet
class word_syn_replacer(object):
   def __init__(self, word_map):
      self.word_map = word_map
   def replace(self, word):
      return self.word_map.get(word, word)
Save this Python program (say replacesyn.py) and run it from the Python command prompt. After running it, import the word_syn_replacer class when you want to replace words with common synonyms. Let us see how.
rep_syn = word_syn_replacer({'bday': 'birthday'})
rep_syn.replace('bday')
Output
'birthday'
The disadvantage of the above method is that we have to hardcode the synonyms in a Python dictionary. We have two better alternatives in the form of CSV and YAML files. We can save our synonym vocabulary in any of the above-mentioned files and can construct the word_map dictionary from them. Let us understand the concept with the help of examples.
Example
import csv
class CSVword_syn_replacer(word_syn_replacer):
   def __init__(self, fname):
      word_map = {}
      for line in csv.reader(open(fname)):
         word, syn = line
         word_map[word] = syn
      super(CSVword_syn_replacer, self).__init__(word_map)
After running it, import the CSVword_syn_replacer class when you want to replace words with common synonyms (here, syn.csv is an example CSV file containing word/synonym pairs). Let us see how.
rep_syn = CSVword_syn_replacer('syn.csv')
rep_syn.replace('bday')
Output
'birthday'
Example
import yaml
class YAMLword_syn_replacer(word_syn_replacer):
   def __init__(self, fname):
      word_map = yaml.load(open(fname))
      super(YAMLword_syn_replacer, self).__init__(word_map)
After running it, import the YAMLword_syn_replacer class when you want to replace words with common synonyms (here, syn.yaml is an example YAML file containing word/synonym pairs). Let us see how.
rep_syn = YAMLword_syn_replacer('syn.yaml')
rep_syn.replace('bday')
Output
'birthday'
Antonym replacement
As we know, an antonym is a word having the opposite meaning of another word, and the opposite of synonym replacement is called antonym replacement. In this section, we will be dealing with antonym replacement, i.e., replacing words with unambiguous antonyms by using WordNet. In the example below, we will be creating a class named word_antonym_replacer which has two methods, one for replacing the word and the other for removing the negations.
Example
from nltk.corpus import wordnet
class word_antonym_replacer(object):
   def replace(self, word, pos=None):
      antonyms = set()
      for syn in wordnet.synsets(word, pos=pos):
         for lemma in syn.lemmas():
            for antonym in lemma.antonyms():
               antonyms.add(antonym.name())
      if len(antonyms) == 1:
         return antonyms.pop()
      else:
         return None
   def replace_negations(self, sent):
      i, l = 0, len(sent)
      words = []
      while i < l:
         word = sent[i]
         if word == 'not' and i+1 < l:
            ant = self.replace(sent[i+1])
            if ant:
               words.append(ant)
               i += 2
               continue
         words.append(word)
         i += 1
      return words
Save this Python program (say replaceantonym.py) and run it from the Python command prompt. After running it, import the word_antonym_replacer class when you want to replace words with their unambiguous antonyms. Let us see how.
rep_antonym = word_antonym_replacer()
rep_antonym.replace('uglify')
Output
'beautify'
9. NLTK — Corpus Readers and Custom Corpora
What is a corpus?
A corpus is a large collection, in a structured format, of machine-readable texts that have been produced in a natural communicative setting. The word corpora is the plural of corpus. A corpus can be derived in many ways, as follows:
Corpus representativeness, corpus balance, sampling and corpus size are the elements that play an important role while designing a corpus. Some of the most popular corpora for NLP tasks are TreeBank, PropBank, VerbNet and WordNet.
In the following Python recipe, we are going to create a custom corpus which must be within one of the paths defined by NLTK, so that NLTK can find it. In order to avoid conflict with the official NLTK data package, let us create a custom nltk_data directory in our home directory.
First, let us create the nltk_data directory in our home directory and verify that it exists:
import os
path = os.path.expanduser('~/nltk_data')
if not os.path.exists(path):
   os.mkdir(path)
os.path.exists(path)
Output
True
Now, let us check whether the nltk_data directory is on NLTK's data path or not:
import nltk.data
path in nltk.data.path
Output
True
As we have got the output True, it means we have the nltk_data directory in our home directory.
Now we will make a wordlist file, named wordfile.txt, and put it in a folder named corpus in the nltk_data directory (~/nltk_data/corpus/wordfile.txt). We will then load it by using nltk.data.load:
import nltk.data
nltk.data.load('corpus/wordfile.txt', format = 'raw')
Output
b'tutorialspoint\n'
Corpus readers
NLTK provides various CorpusReader classes. We are going to cover them in the following Python recipes.
Suppose we have a wordlist file, named list, containing the following words:
tutorialspoint
Online
Free
Tutorials
Now let us instantiate a WordListCorpusReader class producing the list of words from our created file 'list':
from nltk.corpus.reader import WordListCorpusReader
reader_corpus = WordListCorpusReader('.', ['list'])
reader_corpus.words()
Output
['tutorialspoint', 'Online', 'Free', 'Tutorials']
One of the simplest formats for a tagged corpus is of the form ‘word/tag’like following
excerpt from the brown corpus:
In the above excerpt, each word has a tag which denotes its POS. For example, vb refers
to a verb.
Now let us instantiate a TaggedCorpusReader class producing POS-tagged words from the file 'list.pos', which has the above excerpt:
from nltk.corpus.reader import TaggedCorpusReader
reader_corpus = TaggedCorpusReader('.', r'.*\.pos')
reader_corpus.tagged_words()
Output
For example, we have the following excerpt from the tagged treebank corpus:
In the above excerpt, every chunk is a noun phrase but the words that are not in brackets
are part of the sentence tree and not part of any noun phrase subtree.
from nltk.corpus.reader import ChunkedCorpusReader
reader_corpus = ChunkedCorpusReader('.', r'.*\.chunk')
reader_corpus.chunked_words()
Output
For example, the brown corpus has several different categories. Let us find them out with the help of the following Python code:
from nltk.corpus import brown
brown.categories()
Output
['adventure', 'belles_lettres', 'editorial', 'fiction', 'government', 'hobbies', 'humor', 'learned', 'lore', 'mystery', 'news', 'religion', 'reviews', 'romance', 'science_fiction']
One of the easiest ways to categorize a corpus is to have one file for every category. For
example, let us see the two excerpts from the movie_reviews corpus:
movie_pos.txt
the thin red line is flawed but it provokes.
movie_neg.txt
a big-budget and glossy production cannot make up for a lack of spontaneity that
permeates their tv show.
So, from above two files, we have two categories namely pos and neg.
from nltk.corpus.reader import CategorizedPlaintextCorpusReader
reader_corpus = CategorizedPlaintextCorpusReader('.', r'movie_.*\.txt', cat_pattern=r'movie_(\w+)\.txt')
reader_corpus.categories()
reader_corpus.fileids(categories=['neg'])
reader_corpus.fileids(categories=['pos'])
Output
['neg', 'pos']
['movie_neg.txt']
['movie_pos.txt']
10. NLTK ― Basics of Part-of-Speech (POS) Tagging
Part-of-Speech (POS) tagging may be defined as the process of converting a sentence, in the form of a list of words, into a list of tuples. Here, the tuples are in the form of (word, tag). We can also call POS tagging a process of assigning one of the parts of speech to the given word.
The following table represents some of the most frequent POS notations used in the Penn Treebank corpus:
Sr.No.   Tag   Description
7.       RB    Adverb
10.      RP    Particle
12.      TO    to
13.      UH    Interjection
18.      WP    Wh-pronoun
24.      ,     Comma
import nltk
from nltk.tokenize import word_tokenize
sentence = "I am going to school"
print (nltk.pos_tag(word_tokenize(sentence)))
Output
[('I', 'PRP'), ('am', 'VBP'), ('going', 'VBG'), ('to', 'TO'), ('school', 'NN')]
POS tagging is an important step for many NLP tasks, some of which are listed below:
Chunking
Syntax Parsing
Information extraction
Machine Translation
Sentiment Analysis
Methods: The TaggerI class has the following two methods which must be implemented by all its subclasses:
tag() method: As the name implies, this method takes a list of words as input and returns a list of tagged words as output.
evaluate() method: With the help of this method, we can evaluate the accuracy of the tagger.
DefaultTagger class
Default tagging is performed by using the DefaultTagger class, which takes a single argument, i.e., the tag we want to apply.
The choose_tag() method of SequentialBackoffTagger takes the following three arguments:
the token's list
the current token's index
the previous tokens' list, i.e., the history
Example
import nltk
from nltk.tag import DefaultTagger
exptagger = DefaultTagger('NN')
exptagger.tag(['Tutorials','Point'])
Output
[('Tutorials', 'NN'), ('Point', 'NN')]
In this example, we chose a noun tag because it is the most common types of words.
Moreover, DefaultTagger is also most useful when we choose the most common POS
tag.
Accuracy evaluation
The DefaultTagger is also the baseline for evaluating accuracy of taggers. That is the
reason we can use it along with evaluate() method for measuring accuracy. The
evaluate() method takes a list of tagged tokens as a gold standard to evaluate the tagger.
Following is an example in which we used our default tagger, named exptagger, created
above, to evaluate the accuracy of a subset of treebank corpus tagged sentences:
import nltk
from nltk.tag import DefaultTagger
from nltk.corpus import treebank
exptagger = DefaultTagger('NN')
testsentences = treebank.tagged_sents()[1000:]
exptagger.evaluate(testsentences)
Output
0.13198749536374715
The output above shows that by choosing NN for every tag, we can achieve around 13%
accuracy testing on 1000 entries of the treebank corpus.
Tagging a list of sentences
Rather than tagging a single sentence, TaggerI also provides us the tag_sents() method, with the help of which we can tag a list of sentences. Let us see an example:
import nltk
from nltk.tag import DefaultTagger
exptagger = DefaultTagger('NN')
exptagger.tag_sents([['Hi', ','], ['How', 'are', 'you', '?']])
Output
[[('Hi', 'NN'), (',', 'NN')], [('How', 'NN'), ('are', 'NN'), ('you', 'NN'),
('?', 'NN')]]
In the above example, we used our earlier created default tagger named exptagger.
Un-tagging a sentence
We can also un-tag a sentence. NLTK provides nltk.tag.untag() method for this purpose.
It will take a tagged sentence as input and provides a list of words without tags. Let us
see an example:
import nltk
from nltk.tag import untag
untag([('Tutorials', 'NN'), ('Point', 'NN')])
Output
['Tutorials', 'Point']
11. NLTK ― Unigram Tagger
The result of context() method will be the word token which is further used to create
the model. Once the model is created, the word token is also used to look up the
best tag.
In this way, UnigramTagger will build a context model from the list of tagged
sentences.
Example
First, import the UnigramTagger module from nltk:
from nltk.tag import UnigramTagger
Next, import the corpus you want to use. Here we are using the treebank corpus:
from nltk.corpus import treebank
Now, take the sentences for training purposes. We are taking the first 2500 sentences for training and will train the tagger on them:
train_sentences = treebank.tagged_sents()[:2500]
Uni_tagger = UnigramTagger(train_sentences)
Next, take some sentences for testing purposes. Here we are using the sentences from index 1500 onwards:
test_sentences = treebank.tagged_sents()[1500:]
Uni_tagger.evaluate(test_sentences)
Output
0.8942306156033808
Here, we got around 89 percent accuracy for a tagger that uses single word lookup to
determine the POS tag.
We can override this context model by passing another simple model to the UnigramTagger class instead of passing a training set. Let us understand it with the help of an easy example below:
Example
from nltk.tag import UnigramTagger
from nltk.corpus import treebank
Override_tagger = UnigramTagger(model={'Vinken': 'NN'})
Override_tagger.tag(treebank.sents()[0])
Output
[('Pierre', None),
('Vinken', 'NN'),
(',', None),
('61', None),
('years', None),
('old', None),
(',', None),
('will', None),
('join', None),
('the', None),
('board', None),
('as', None),
('a', None),
('nonexecutive', None),
('director', None),
('Nov.', None),
('29', None),
('.', None)]
As our model contains ‘Vinken’ as the only context key, you can observe from the output
above that only this word got tag and every other word has None as a tag.
train_sentences = treebank.tagged_sents()[:2500]
test_sentences = treebank.tagged_sents()[1500:]
Uni_tagger.evaluate(test_sentences)
Output
0.7357651629613641
12. NLTK — Combining Taggers
Combining Taggers
Combining taggers or chaining taggers with each other is one of the important features of
NLTK. The main concept behind combining taggers is that, in case if one tagger doesn’t
know how to tag a word, it would be passed to the chained tagger. To achieve this purpose,
SequentialBackoffTagger provides us the Backoff tagging feature.
Backoff Tagging
As told earlier, backoff tagging is one of the important features of
SequentialBackoffTagger, which allows us to combine taggers in a way that if one
tagger doesn’t know how to tag a word, the word would be passed to the next tagger and
so on until there are no backoff taggers left to check.
In the example below, we are taking DefaultTagger as the backoff tagger with which we have trained the UnigramTagger.
Example
In this example, we are using DefaultTagger as the backoff tagger. Whenever the UnigramTagger is unable to tag a word, the backoff tagger, i.e. DefaultTagger in our case, will tag it with 'NN'.
from nltk.tag import UnigramTagger, DefaultTagger
from nltk.corpus import treebank
train_sentences = treebank.tagged_sents()[:2500]
back_tagger = DefaultTagger('NN')
Uni_tagger = UnigramTagger(train_sentences, backoff = back_tagger)
test_sentences = treebank.tagged_sents()[1500:]
Uni_tagger.evaluate(test_sentences)
Output
0.9061975746536931
From the above output, you can observe that by adding a backoff tagger the accuracy is
increased by around 2%.
Saving and loading a trained tagger
Since training a tagger can take time, instead of re-training it every time, we can save a trained tagger to a file with pickle and load it back later.
Example
import pickle
f = open('Uni_tagger.pickle','wb')
pickle.dump(Uni_tagger, f)
f.close()
f = open('Uni_tagger.pickle','rb')
Uni_tagger = pickle.load(f)
NgramTagger Class
From the hierarchy diagram discussed in the previous unit, UnigramTagger is inherited from the NgramTagger class, but we have two more subclasses of the NgramTagger class:
BigramTagger subclass
Actually an ngram is a subsequence of n items, hence, as name implies, BigramTagger
subclass looks at the two items. First item is the previous tagged word and the second
item is current tagged word.
TrigramTagger subclass
On the same note of BigramTagger, TrigramTagger subclass looks at the three items
i.e. two previous tagged words and one current tagged word.
from nltk.tag import BigramTagger
from nltk.corpus import treebank
train_sentences = treebank.tagged_sents()[:2500]
Bi_tagger = BigramTagger(train_sentences)
test_sentences = treebank.tagged_sents()[1500:]
Bi_tagger.evaluate(test_sentences)
Output
0.44669191071913594
from nltk.tag import TrigramTagger
from nltk.corpus import treebank
train_sentences = treebank.tagged_sents()[:2500]
Tri_tagger = TrigramTagger(train_sentences)
test_sentences = treebank.tagged_sents()[1500:]
Tri_tagger.evaluate(test_sentences)
Output
0.41949863394526193
You can compare the performance of the UnigramTagger we used previously (which gave around 89% accuracy) with the BigramTagger (around 44% accuracy) and the TrigramTagger (around 41% accuracy). The reason is that Bigram and Trigram taggers cannot learn context from the first word(s) in a sentence. On the other hand, the UnigramTagger class doesn't care about the previous context and guesses the most common tag for each word, hence it is able to have a high baseline accuracy.
The chained ngram taggers can be built with a small helper function that creates each tagger with the previous one as its backoff:
def backoff_tagger(train_sentences, tagger_classes, backoff=None):
   for cls in tagger_classes:
      backoff = cls(train_sentences, backoff=backoff)
   return backoff
Example
from nltk.tag import DefaultTagger, UnigramTagger, BigramTagger, TrigramTagger
from nltk.corpus import treebank
train_sentences = treebank.tagged_sents()[:2500]
back_tagger = DefaultTagger('NN')
Combine_tagger = backoff_tagger(train_sentences, [UnigramTagger, BigramTagger, TrigramTagger], backoff=back_tagger)
test_sentences = treebank.tagged_sents()[1500:]
Combine_tagger.evaluate(test_sentences)
Output
0.9234530029238365
From the above output, we can see it increases the accuracy by around 3%.
13. NLTK ― More NLTK Taggers
Affix Tagger
One another important class of ContextTagger subclass is AffixTagger. In AffixTagger
class, the context is either prefix or suffix of a word. That is the reason AffixTagger class
can learn tags based on fixed-length substrings of the beginning or ending of a word.
To make it clearer, in the example below, we will be using AffixTagger class on tagged
treebank sentences.
Example 1
In this example, AffixTagger will learn word’s prefix because we are not specifying any
value for affix_length argument. The argument will take default value 3:
from nltk.tag import AffixTagger
from nltk.corpus import treebank
train_sentences = treebank.tagged_sents()[:2500]
Prefix_tagger = AffixTagger(train_sentences)
test_sentences = treebank.tagged_sents()[1500:]
Prefix_tagger.evaluate(test_sentences)
Output
0.2800492099250667
Let us see in the example below what will be the accuracy when we provide value 4 to
affix_length argument:
train_sentences = treebank.tagged_sents()[:2500]
Prefix_tagger = AffixTagger(train_sentences, affix_length=4)
test_sentences = treebank.tagged_sents()[1500:]
Prefix_tagger.evaluate(test_sentences)
Output
0.18154947354966527
Example 2
In this example, AffixTagger will learn the word's suffix because we specify a negative value for the affix_length argument:
train_sentences = treebank.tagged_sents()[:2500]
Suffix_tagger = AffixTagger(train_sentences, affix_length=-3)
test_sentences = treebank.tagged_sents()[1500:]
Suffix_tagger.evaluate(test_sentences)
Output
0.2800492099250667
Brill Tagger
The Brill Tagger is a transformation-based tagger. NLTK provides the BrillTagger class, which is the first tagger covered here that is not a subclass of SequentialBackoffTagger. Instead, a series of rules to correct the results of an initial tagger is used by the BrillTagger.
from nltk.tag import brill
templates = [
brill.Template(brill.Pos([-1])),
brill.Template(brill.Pos([1])),
brill.Template(brill.Pos([-2])),
brill.Template(brill.Pos([2])),
brill.Template(brill.Pos([-2, -1])),
brill.Template(brill.Pos([1, 2])),
brill.Template(brill.Pos([-3, -2, -1])),
brill.Template(brill.Pos([1, 2, 3])),
brill.Template(brill.Pos([-1]), brill.Pos([1])),
brill.Template(brill.Word([-1])),
brill.Template(brill.Word([1])),
brill.Template(brill.Word([-2])),
brill.Template(brill.Word([2])),
brill.Template(brill.Word([-2, -1])),
brill.Template(brill.Word([1, 2])),
brill.Template(brill.Word([-3, -2, -1])),
brill.Template(brill.Word([1, 2, 3])),
brill.Template(brill.Word([-1]), brill.Word([1])),
]
Example
For this example, we will be using the Combine_tagger which we created while combining taggers (in the previous recipe) from a backoff chain of NgramTagger classes, as the initial_tagger. First, let us evaluate the result using Combine_tagger and then use that as the initial_tagger to train the Brill tagger.
train_sentences = treebank.tagged_sents()[:2500]
back_tagger = DefaultTagger('NN')
Combine_tagger = backoff_tagger(train_sentences, [UnigramTagger, BigramTagger, TrigramTagger], backoff=back_tagger)
test_sentences = treebank.tagged_sents()[1500:]
Combine_tagger.evaluate(test_sentences)
Output
0.9234530029238365
Now, let us see the evaluation result when Combine_tagger is used as initial_tagger
to train brill tagger:
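A minimal sketch of how this training step can be done (the train_brill_tagger() helper below and its use of the templates list defined above are an assumption, as the original code is not shown in this excerpt):
from nltk.tag import brill_trainer

def train_brill_tagger(initial_tagger, train_sentences, **kwargs):
   # Train a BrillTagger that corrects the output of the initial tagger
   # using the transformation templates defined above.
   trainer = brill_trainer.BrillTaggerTrainer(initial_tagger, templates, deterministic=True)
   return trainer.train(train_sentences, **kwargs)

brill_tagger = train_brill_tagger(Combine_tagger, train_sentences)
brill_tagger.evaluate(test_sentences)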
Output
0.9246832510505041
We can notice that the BrillTagger class has slightly increased accuracy over the Combine_tagger.
TnT Tagger
TnT Tagger, which stands for Trigrams'nTags, is a statistical tagger based on second order Markov models.
First, based on the training data, the TnT tagger maintains several internal FreqDist and ConditionalFreqDist instances.
After that, unigrams, bigrams and trigrams are counted by these frequency distributions.
That's why, instead of constructing a backoff chain of NgramTagger, it uses all the ngram models together to choose the best tag for each word. Let us evaluate the accuracy with the TnT tagger in the following example:
from nltk.tag import tnt
from nltk.corpus import treebank
train_sentences = treebank.tagged_sents()[:2500]
tnt_tagger = tnt.TnT()
tnt_tagger.train(train_sentences)
test_sentences = treebank.tagged_sents()[1500:]
tnt_tagger.evaluate(test_sentences)
Output
0.9165508316157791
Please note that we need to call train() before evaluate() otherwise we will get 0%
accuracy.
14. NLTK ― Parsing
In this sense, we can define parsing or syntactic analysis or syntax analysis as follows:
It may be defined as the process of analyzing the strings of symbols in natural language
conforming to the rules of formal grammar.
We can understand the relevance of parsing in NLP with the help of following points:
Deep Parsing: In deep parsing, the search strategy will give a complete syntactic structure to a sentence. It is suitable for complex NLP applications. Dialogue systems and summarization are examples of NLP applications where deep parsing is used.
Shallow Parsing: It is the task of parsing a limited part of the syntactic information from the given task. It can be used for less complex NLP applications. Information extraction and text mining are examples of NLP applications where shallow parsing is used.
Shift-reduce parser
Following are some important points about shift-reduce parser:
It tries to find a sequence of words and phrases that correspond to the right-hand
side of a grammar production and replaces them with the left-hand side of the
production.
The above attempt to find a sequence of word continues until the whole sentence
is reduced.
In other simple words, the shift-reduce parser starts with the input symbols and tries to construct the parse tree up to the start symbol, as sketched below.
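A minimal sketch of using NLTK's ShiftReduceParser (the toy grammar and sentence below are illustrative assumptions, not taken from this tutorial):
import nltk
from nltk import CFG

# A tiny toy grammar (illustrative assumption)
grammar = CFG.fromstring("""
S -> NP VP
NP -> DT NN
VP -> VB NP
DT -> 'the'
NN -> 'dog' | 'cat'
VB -> 'saw'
""")

sr_parser = nltk.ShiftReduceParser(grammar)
sentence = 'the dog saw the cat'.split()
for tree in sr_parser.parse(sentence):
   print(tree)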
Chart parser
Following are some important points about chart parser:
Regexp parser
Regexp parsing is one of the most commonly used parsing techniques. Following are some important points about the Regexp parser:
As the name implies, it uses a regular expression defined in the form of grammar
on top of a POS-tagged string.
It basically uses these regular expressions to parse the input sentences and
generate a parse tree out of this.
import nltk
sentence = [("a", "DT"),("clever",
"JJ"),("fox","NN"),("was","VBP"),("jumping","VBP"),("over","IN"),("the","DT"),(
"wall","NN")]
grammar = "NP:{<DT>?<JJ>*<NN>}"
Reg_parser=nltk.RegexpParser(grammar)
Reg_parser.parse(sentence)
Output=Reg_parser.parse(sentence)
Output.draw()
Output
Dependency Parsing
Dependency Parsing (DP) is a modern parsing mechanism whose main concept is that each linguistic unit, i.e. each word, relates to the others by a direct link. These direct links are actually called 'dependencies' in linguistics. For example, the following diagram shows the dependency grammar for the sentence "John can hit the ball".
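(The diagram is not reproduced here; roughly, 'hit' is the root, with 'John' as its subject, 'can' as its auxiliary and 'ball' as its object, while 'the' depends on 'ball'.)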
NLTK Package
We have the following two ways to do dependency parsing with NLTK:
Stanford parser
This is another way we can do dependency parsing with NLTK. The Stanford parser is a state-of-the-art dependency parser. NLTK has a wrapper around it. To use it, we need to download the following two things:
The Stanford parser itself.
A language model for the desired language, for example, the English language model.
Example
Once you downloaded the model, we can use it through NLTK as follows:
from nltk.parse.stanford import StanfordDependencyParser
path_jar = 'path_to/stanford-parser-full-2014-08-27/stanford-parser.jar'
path_models_jar = 'path_to/stanford-parser-full-2014-08-27/stanford-parser-3.4.1-models.jar'
dep_parser = StanfordDependencyParser(path_to_jar = path_jar, path_to_models_jar = path_models_jar)
result = dep_parser.raw_parse('I shot an elephant in my sleep')
dependency = next(result)
list(dependency.triples())
Output
15. NLTK ― Chunking & Information Extraction
What is Chunking?
Chunking, one of the important processes in natural language processing, is used to
identify parts of speech (POS) and short phrases. In other simple words, with chunking,
we can get the structure of the sentence. It is also called partial parsing.
Moreover, we can also define patterns for what kind of words should not be in a chunk and
these unchunked words are known as chinks.
Implementation example
In the example below, along with the result of parsing the sentence “the book has many
chapters”, there is a grammar for noun phrases that combines both a chunk and a chink
pattern:
import nltk
sentence = [("the", "DT"),("book",
"NN"),("has","VBZ"),("many","JJ"),("chapters","NNS")]
chunker=nltk.RegexpParser(r'''
NP:{<DT><NN.*><.*>*<NN.*>}
}<VB.*>{
''')
chunker.parse(sentence)
Output=chunker.parse(sentence)
Output.draw()
Output
As seen above, the pattern for specifying a chunk is to use curly braces pointing inwards, as follows:
{<DT><NN>}
And to specify a chink, we use curly braces pointing outwards, as follows:
}<VB>{
Now, for a particular phrase type, these rules can be combined into a grammar.
Information Extraction
We have gone through taggers as well as parsers that can be used to build information
extraction engine. Let us see a basic information extraction pipeline:
Information extraction has many applications, including:
Business intelligence
Resume harvesting
Media analysis
Sentiment detection
Patent search
Email scanning
Named-entity recognition (NER)
Named-entity recognition is a way to extract some of the most common entities, like names, organizations, locations, etc., from text.
Example
import nltk
file = open(...)   # provide here the absolute path for the file of text for which we want NER
data_text = file.read()
sentences = nltk.sent_tokenize(data_text)
tokenized_sentences = [nltk.word_tokenize(sentence) for sentence in sentences]
tagged_sentences = [nltk.pos_tag(sentence) for sentence in tokenized_sentences]
for sent in tagged_sentences:
   print(nltk.ne_chunk(sent))
Some modified Named-entity recognition (NER) systems can also be used to extract entities such as product names, bio-medical entities, brand names and much more.
Relation extraction
Relation extraction, another commonly used information extraction operation, is the process of extracting the different relationships between various entities. There can be different relationships like inheritance, synonymy, analogy, etc., whose definition depends on the information need. For example, suppose we want to look for the writer of a book; then authorship would be a relation between the author name and the book name.
Example
In the following example, we use the same IE pipeline, as shown in the above diagram, that we used up to Named-entity recognition (NER), and extend it with a relation pattern based on the NER tags.
import nltk
import re
IN = re.compile(r'.*\bin\b(?!\b.+ing)')
for doc in nltk.corpus.ieer.parsed_docs('NYT_19980315'):
   for rel in nltk.sem.extract_rels('ORG', 'LOC', doc, corpus='ieer', pattern=IN):
      print(nltk.sem.rtuple(rel))
Output
In the above code, we have used an inbuilt corpus named ieer. In this corpus, the sentences are tagged up to Named-entity recognition (NER). Here, we only need to specify the relation pattern that we want and the kind of NER we want the relation to define. In our example, we defined the relationship between an organization and a location. We extracted all the combinations of these patterns.
16. NLTK ― Transforming Chunks
Consider the phrase 'the movie was good'. Here the most significant words are 'movie' and 'good'. The other words, 'the' and 'was', are both useless or insignificant, because without them we can get the same meaning of the phrase: 'good movie'.
In the following Python recipe, we will learn how to remove useless/insignificant words and keep the significant words with the help of POS tags.
Example
First, by looking through treebank corpus for stopwords we need to decide which part-
of-speech tags are significant and which are not. Let us see the following table of
insignificant words and tags:
Word Tag
a DT
All PDT
An DT
And CC
Or CC
That WDT
The DT
From the above table, we can see other than CC, all the other tags end with DT which
means we can filter out insignificant words by looking at the tag’s suffix.
For this example, we are going to use a function named filter() which takes a single chunk
and returns a new chunk without any insignificant tagged words. This function filters out
any tags that end with DT or CC.
import nltk
def filter(chunk, tag_suffixes=['DT', 'CC']):
significant = []
for word, tag in chunk:
ok = True
for suffix in tag_suffixes:
if tag.endswith(suffix):
ok = False
break
if ok:
significant.append((word, tag))
return (significant)
Now, let us use this filter() function in our Python recipe to delete insignificant words:
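A minimal illustrative call (the tagged chunk below is an assumed example, not taken from the tutorial):
filter([('the', 'DT'), ('good', 'JJ'), ('movie', 'NN')])
Output
[('good', 'JJ'), ('movie', 'NN')]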
Verb Correction
Many times, in real-world language we see incorrect verb forms. For example, ‘is you fine?’
is not correct. The verb form is not correct in this sentence. The sentence should be ‘are
you fine?’ NLTK provides us the way to correct such mistakes by creating verb correction
mappings. These correction mappings are used depending on whether there is a plural or
singular noun in the chunk.
Example
To implement this Python recipe, we first need to define verb correction mappings. Let us create two mappings as follows:
plural = {
   ('is', 'VBZ'): ('are', 'VBP'),
   ('was', 'VBD'): ('were', 'VBD')
}
singular = {
   ('are', 'VBP'): ('is', 'VBZ'),
   ('were', 'VBD'): ('was', 'VBD')
}
As seen above, each mapping has a tagged verb which maps to another tagged verb. The initial mappings in our example cover the basic mappings is to are, was to were, and vice versa.
Next, we will define a function named verbs(), to which you can pass a chunk with an incorrect verb form and get a corrected chunk back. To get it done, the verbs() function uses a helper function named index_chunk() which will search the chunk for the position of the first tagged word.
def verbs(chunk):
vbidx = index_chunk(chunk, tag_startswith('VB'))
if vbidx is None:
return chunk
verb, vbtag = chunk[vbidx]
nnpred = tag_startswith('NN')
nnidx = index_chunk(chunk, nnpred, start=vbidx+1)
if nnidx is None:
Save these functions in a Python file in your local directory where Python or Anaconda is
installed and run it. I have saved it as verbcorrect.py.
Now, let us call verbs() function on a POS tagged is you fine chunk:
Output
Example
To achieve this we are defining a function named eliminate_passive() that will swap the
right-hand side of the chunk with the left-hand side by using the verb as the pivot point.
In order to find the verb to pivot around, it will also use the index_chunk() function
defined above.
def eliminate_passive(chunk):
    def vbpred(wt):
        word, tag = wt
        return tag != 'VBG' and tag.startswith('VB') and len(tag) > 2
    vbidx = index_chunk(chunk, vbpred)
    if vbidx is None:
        return chunk
    return chunk[vbidx+1:] + chunk[:vbidx]
Now, let us call the eliminate_passive() function on a POS tagged ‘the tutorial was great’ chunk:
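A minimal sketch of the call, assuming eliminate_passive() and its index_chunk() helper are available together, and using a standard POS tagging of the phrase (the result is shown under Output below):

eliminate_passive([('the', 'DT'), ('tutorial', 'NN'), ('was', 'VBD'), ('great', 'JJ')])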
Output
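[('great', 'JJ'), ('the', 'DT'), ('tutorial', 'NN')]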
Example
We define a function named swapping_cardinals() that will swap any cardinal that occurs immediately after a noun with that noun, so that the cardinal occurs immediately before the noun. In order to do an equality comparison with the given tag, it uses a helper function which we named tag_eql().
def tag_eql(tag):
    def f(wt):
        return wt[1] == tag
    return f
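The definition of swapping_cardinals() itself does not appear here. A minimal sketch consistent with the description above, reusing the index_chunk() helper (the body is our assumption), together with an illustrative call whose result is shown under Output below:

def swapping_cardinals(chunk):
    # find the first cardinal number (CD) in the chunk
    cdidx = index_chunk(chunk, tag_eql('CD'))
    # do nothing unless the cardinal directly follows a noun
    if not cdidx or not chunk[cdidx - 1][1].startswith('NN'):
        return chunk
    noun, nntag = chunk[cdidx - 1]
    chunk[cdidx - 1] = chunk[cdidx]
    chunk[cdidx] = noun, nntag
    return chunk

swapping_cardinals([('Dec.', 'NNP'), ('10', 'CD')])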
Output
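[('10', 'CD'), ('Dec.', 'NNP')]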
17. NLTK ― Transforming Trees
Example
In this example, we convert a Tree from the treebank_chunk corpus back into flat sentence text by joining the words from its leaves:
from nltk.corpus import treebank_chunk
tree = treebank_chunk.chunked_sents()[2]
' '.join([w for w, t in tree.leaves()])
Output
'Rudolph Agnew , 55 years old and former chairman of Consolidated Gold Fields
PLC , was named a nonexecutive director of this British industrial conglomerate
.'
Example
To flatten a deep tree, we define a function named deeptree_flat() that takes a single Tree and returns a new Tree that keeps only the lowest-level trees. Most of the work is done by a helper function, which we named flatten_childtrees().
from nltk.tree import Tree

def flatten_childtrees(trees):
    children = []
    for t in trees:
        if t.height() < 3:
            # a trivial subtree: keep only its tagged words
            children.extend(t.pos())
        elif t.height() == 3:
            # a lowest-level phrase: keep its label over its tagged words
            children.append(Tree(t.label(), t.pos()))
        else:
            # a deeper phrase: recurse into its children
            children.extend(flatten_childtrees([c for c in t]))
    return children

def deeptree_flat(tree):
    return Tree(tree.label(), flatten_childtrees([c for c in tree]))
Now, let us call the deeptree_flat() function on the 3rd parsed sentence, which is a deep tree of nested phrases, from the treebank corpus. We saved these functions in a file named deeptree.py.
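The exact call is not reproduced here; assuming the functions above are saved in deeptree.py and importable, it presumably looks something like:

from nltk.corpus import treebank
from deeptree import deeptree_flat

deeptree_flat(treebank.parsed_sents()[2])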
Output
Example
To build a shallow tree, we define a function named tree_shallow() that will eliminate all the nested subtrees, keeping only the top-level subtree labels.
from nltk.tree import Tree

def tree_shallow(tree):
    children = []
    for t in tree:
        if t.height() < 3:
            # a trivial subtree: keep only its tagged words
            children.extend(t.pos())
        else:
            # otherwise keep the top-level label over the subtree's tagged words
            children.append(Tree(t.label(), t.pos()))
    return Tree(tree.label(), children)
Now, let us call the tree_shallow() function on the 3rd parsed sentence, which is a deep tree of nested phrases, from the treebank corpus. We saved these functions in a file named shallowtree.py.
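Again, the exact call is not reproduced here; assuming the function above is saved in shallowtree.py, it presumably looks like:

from nltk.corpus import treebank
from shallowtree import tree_shallow

tree_shallow(treebank.parsed_sents()[2])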
Output
We can see the difference by checking the heights of the trees:
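The calls themselves are not shown; they presumably compare the two heights along these lines (a shallow tree built this way has height 3, while the original deep tree is taller):

treebank.parsed_sents()[2].height()
tree_shallow(treebank.parsed_sents()[2]).height()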
Output
Output
Example
To convert tree labels, we define a function named tree_convert() that takes the following two arguments:
Tree to convert
A label conversion mapping
This function will return a new Tree with all matching labels replaced based on the values in the mapping; a sketch of such a function is shown below.
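A minimal sketch of such a function, consistent with the description above (the parameter names are our assumptions):

from nltk.tree import Tree

def tree_convert(tree, mapping):
    children = []
    for t in tree:
        if isinstance(t, Tree):
            # recursively convert labels in subtrees
            children.append(tree_convert(t, mapping))
        else:
            children.append(t)
    # replace the label if it appears in the mapping, otherwise keep it
    label = mapping.get(tree.label(), tree.label())
    return Tree(label, children)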
Now, let us call the tree_convert() function on the 3rd parsed sentence, which is a deep tree of nested phrases, from the treebank corpus. We saved these functions in a file named converttree.py.
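The exact call and mapping are not shown; assuming the function is saved in converttree.py, a call might look like this (the label mapping here is purely illustrative):

from nltk.corpus import treebank
from converttree import tree_convert

mapping = {'NP-SBJ': 'NP', 'NP-TMP': 'NP'}
tree_convert(treebank.parsed_sents()[2], mapping)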
Output
18. NLTK ― Text Classification
Binary Classifier
As the name implies, a binary classifier decides between two labels, for example positive or negative. In this case a piece of text or document can be assigned one label or the other, but not both.
Multi-label Classifier
In contrast to a binary classifier, a multi-label classifier can assign one or more labels to a piece of text or document.
A labeled feature set is an instance with a known class label; without an associated label, we simply call it an instance (an unlabeled feature set).
Labeled feature sets are used for training a classification algorithm. Once trained, the classification algorithm can classify an unlabeled feature set.
The bag of words (BoW) method constructs a word-presence feature set from all the words of an instance. The concept behind this method is that it doesn’t care about how many times a word occurs or about the order of the words; it only cares whether a word is present in the list of words or not.
Example
For this example, we are going to define a function named bow():
def bow(words):
    return dict([(word, True) for word in words])
Now, let us call the bow() function on a list of words. We saved this function in a file named bagwords.py.
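A minimal usage sketch, with an illustrative word list of our own (the result appears under Output below):

from bagwords import bow

bow(['we', 'are', 'using', 'tutorialspoint'])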
Output
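{'we': True, 'are': True, 'using': True, 'tutorialspoint': True}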
Training classifiers
In the previous sections, we learned how to extract features from text, so now we can train a classifier. The first and easiest classifier is the NaiveBayesClassifier class, which is based on Bayes’ theorem:

P(A|B) = P(B|A) * P(A) / P(B)

Here,
P(A|B): This is the posterior probability, i.e. the probability of the first event A occurring given that the second event B has occurred.
P(B|A): This is the probability of the second event B occurring given that the first event A has occurred.
P(A), P(B): These are the prior probabilities, i.e. the probabilities of the first event A or the second event B occurring on their own.
To train a Naïve Bayes classifier, we will be using the movie_reviews corpus from NLTK. This corpus has two categories of text, namely pos and neg. These categories make a classifier trained on them a binary classifier. Every file in the corpus contains either a positive movie review or a negative movie review. In our example, we are going to use each file as a single instance for both training and testing the classifier.
Example
For training the classifier, we need a list of labeled feature sets, which will be in the form [(featureset, label)]. Here the featureset variable is a dict and label is the known class label for the featureset. We are going to create a function named label_feats_from_corpus() which will take a corpus, in our case movie_reviews, and also a function named feature_detector, which defaults to bow (bag of words). It will construct and return a mapping of the form {label: [featureset]}. After that we will use this mapping to create a list of labeled training instances and testing instances.
import collections
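The body of label_feats_from_corpus() is not reproduced on this page; a minimal sketch consistent with the description above, using the bow() feature detector defined earlier (the parameter names are our assumptions):

def label_feats_from_corpus(corp, feature_detector=bow):
    label_feats = collections.defaultdict(list)
    for label in corp.categories():
        for fileid in corp.fileids(categories=[label]):
            # build a bag-of-words feature set for every file in this category
            feats = feature_detector(corp.words(fileids=[fileid]))
            label_feats[label].append(feats)
    return label_feats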
With the help of the above function, we will get a mapping of the form {label: [featureset]}. Now we are going to define one more function named split() that will take the mapping returned from the label_feats_from_corpus() function and split each list of feature sets into labeled training as well as testing instances.
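The body of split() is likewise not shown; a minimal sketch, assuming a 75% train / 25% test split (which matches the 1500/500 counts reported later):

def split(lfeats, ratio=0.75):
    train_feats = []
    test_feats = []
    for label, feats in lfeats.items():
        cutoff = int(len(feats) * ratio)
        train_feats.extend([(feat, label) for feat in feats[:cutoff]])
        test_feats.extend([(feat, label) for feat in feats[cutoff:]])
    return train_feats, test_feats

The ['neg', 'pos'] value under Output below is consistent with a call such as:

from nltk.corpus import movie_reviews
movie_reviews.categories()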
Output
['neg', 'pos']
lfeats = label_feats_from_corpus(movie_reviews)
lfeats.keys()
Output
dict_keys(['neg', 'pos'])
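The calls producing the counts below are not shown; presumably something like:

train_feats, test_feats = split(lfeats)
len(train_feats)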
Output
1500
len(test_feats)
Output
500
We have seen that the movie_reviews corpus has 1000 pos files and 1000 neg files, and we end up with 1500 labeled training instances and 500 labeled testing instances.
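The training code itself is not shown; presumably the classifier is trained with the NaiveBayesClassifier.train() class method, along these lines (the variable name nb_classifier is our own):

from nltk.classify import NaiveBayesClassifier

nb_classifier = NaiveBayesClassifier.train(train_feats)
nb_classifier.labels()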
Output
['neg', 'pos']
To train a decision tree classifier, we will use the same training and testing features, i.e. the train_feats and test_feats variables we created from the movie_reviews corpus.
Example
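The training code is not shown here; a hedged sketch follows. The hyperparameters are illustrative choices, so the accuracy reported below is the tutorial's value rather than something this exact call is guaranteed to reproduce:

from nltk.classify import DecisionTreeClassifier
from nltk.classify.util import accuracy

dt_classifier = DecisionTreeClassifier.train(train_feats, binary=True,
                                             entropy_cutoff=0.8,
                                             depth_cutoff=5,
                                             support_cutoff=30)
accuracy(dt_classifier, test_feats)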
Output
0.725
To train a maximum entropy classifier, we will again use the same training and testing features, i.e. the train_feats and test_feats variables we created from the movie_reviews corpus.
Example
To train this classifier, we will call the MaxentClassifier.train() class method as follows:
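A hedged sketch of that call (the algorithm and cutoff values are illustrative; the accuracy below is the tutorial's reported value):

from nltk.classify import MaxentClassifier

me_classifier = MaxentClassifier.train(train_feats, algorithm='gis',
                                       trace=0, max_iter=10, min_lldelta=0.5)
accuracy(me_classifier, test_feats)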
Output
0.786
Scikit-learn Classifier
One of the best machine learning (ML) libraries is Scikit-learn. It contains all sorts of ML algorithms for various purposes, but they all follow the same fit/predict design pattern, as illustrated below:
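A minimal illustration of that fit/predict pattern with toy data (our own example, not taken from the tutorial):

from sklearn.naive_bayes import MultinomialNB

model = MultinomialNB()
model.fit([[1, 0], [0, 1]], ['pos', 'neg'])   # every estimator learns with fit()
model.predict([[1, 0]])                       # and predicts with predict()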
Rather than accessing the scikit-learn models directly, here we are going to use NLTK’s SklearnClassifier class. This class is a wrapper around a scikit-learn model that makes it conform to NLTK’s ClassifierI interface.
Step 1
Step 2
Step 3
Step 4
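The content of the four steps above is not reproduced here; a hedged sketch of the typical SklearnClassifier workflow, with each step marked (the choice of MultinomialNB is illustrative, so the 0.885 below is the tutorial's reported value):

# Step 1: import NLTK's wrapper and a scikit-learn algorithm
from nltk.classify.scikitlearn import SklearnClassifier
from sklearn.naive_bayes import MultinomialNB
from nltk.classify.util import accuracy

# Step 2: wrap the scikit-learn estimator
sk_classifier = SklearnClassifier(MultinomialNB())

# Step 3: train the wrapper on the labeled training feature sets
sk_classifier.train(train_feats)

# Step 4: evaluate it on the labeled test feature sets
accuracy(sk_classifier, test_feats)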
Output
0.885
Example
In this example, we are going to calculate the precision and recall of the NaiveBayesClassifier we trained earlier. To achieve this, we will create a function named metrics_PR() which takes two arguments: one is the trained classifier and the other is the labeled test features. Both arguments are the same as those we passed while calculating the accuracy of the classifiers:
import collections
from nltk import metrics

def metrics_PR(classifier, testfeats):
    refsets = collections.defaultdict(set)
    testsets = collections.defaultdict(set)
    for i, (feats, label) in enumerate(testfeats):
        refsets[label].add(i)
        observed = classifier.classify(feats)
        testsets[observed].add(i)
    precisions = {}
    recalls = {}
    for label in classifier.labels():
        precisions[label] = metrics.precision(refsets[label], testsets[label])
        recalls[label] = metrics.recall(refsets[label], testsets[label])
    return precisions, recalls
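The calls that produce the numbers below presumably look like this, with nb_classifier being the Naïve Bayes classifier trained earlier and the first value being nb_precisions['pos']:

nb_precisions, nb_recalls = metrics_PR(nb_classifier, test_feats)
nb_precisions['pos']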
Output
0.6713532466435213
nb_precisions['neg']
Output
0.9676271186440678
nb_recalls['pos']
Output
0.96
nb_recalls['neg']
Output
0.478
We can often get better results by combining the individual classifiers and letting them vote on the label. The following class implements such a voting classifier:

import itertools
from nltk.classify import ClassifierI
from nltk.probability import FreqDist

class Voting_classifiers(ClassifierI):
    def __init__(self, *classifiers):
        self._classifiers = classifiers
        self._labels = sorted(set(itertools.chain(*[c.labels() for c in classifiers])))

    def labels(self):
        return self._labels

    def classify(self, feats):
        counts = FreqDist()
        for classifier in self._classifiers:
            counts[classifier.classify(feats)] += 1
        return counts.max()
Let us use this class to combine three classifiers and find the accuracy:
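A hedged sketch of that call, combining for example the Naïve Bayes, decision tree and scikit-learn classifiers from the earlier sketches (the choice of these three classifiers is our assumption):

combined_classifier = Voting_classifiers(nb_classifier, dt_classifier, sk_classifier)
combined_classifier.labels()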
Output
['neg', 'pos']
accuracy(combined_classifier, test_feats)
Output
0.948
From the above output, we can see that the combined classifier achieved higher accuracy than the individual classifiers.