02 Text Processing
Classification with Naïve Bayes: Parameter Estimation and Smoothing
Sec. 13.3
Maximum likelihood estimate: the fraction of times word $w_i$ appears among all words in documents of topic $c_j$:

$$\hat{P}(w_i \mid c_j) = \frac{\mathrm{count}(w_i, c_j)}{\sum_{w \in V} \mathrm{count}(w, c_j)}$$

With Laplace (add-1) smoothing:

$$\hat{P}(w_i \mid c) = \frac{\mathrm{count}(w_i, c) + 1}{\sum_{w \in V} \bigl(\mathrm{count}(w, c) + 1\bigr)} = \frac{\mathrm{count}(w_i, c) + 1}{\Bigl(\sum_{w \in V} \mathrm{count}(w, c)\Bigr) + |V|}$$
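A minimal sketch of these two estimators in Python, on a toy corpus; the name train_nb, the vocabulary, and the documents are illustrative assumptions, not from the slides:

from collections import Counter

def train_nb(docs, vocab):
    """Add-1-smoothed estimates of P(w | c) for multinomial Naive Bayes.
    docs: list of (list_of_words, class_label) pairs (toy data below)."""
    counts = {}                                  # counts[c][w] = count(w, c)
    for words, c in docs:
        counts.setdefault(c, Counter()).update(words)
    probs = {}
    for c, wc in counts.items():
        denom = sum(wc.values()) + len(vocab)    # (sum_w count(w, c)) + |V|
        probs[c] = {w: (wc[w] + 1) / denom for w in vocab}
    return probs

vocab = {"wheat", "grain", "bread", "stock"}
docs = [(["wheat", "grain", "grain"], "grain"),
        (["stock", "bread"], "other")]
print(train_nb(docs, vocab)["grain"]["grain"])   # (2+1) / (3+4) = 3/7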
Multinomial Naïve Bayes: Learning
Text Classification: Evaluation
Cross-Validation
[figure: iterations of cross-validation over held-out folds]
Text Classification: Practical Issues
Sec. 15.3.1
No training data?
• Manually written rules, e.g.:
  If (wheat or grain) and not (whole or bread) then categorize as grain
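As code, such a rule is a single conditional; a sketch with tokenization reduced to whitespace splitting (the function name and test strings are illustrative):

def classify(doc):
    words = set(doc.lower().split())
    # If (wheat or grain) and not (whole or bread) then categorize as grain
    if ({"wheat", "grain"} & words) and not ({"whole", "bread"} & words):
        return "grain"
    return "other"

print(classify("Wheat prices rose"))         # grain
print(classify("Whole grain bread recipe"))  # other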
Brill and Banko on spelling correction
Basic Text Processing: Regular Expressions
Regular expressions
• A formal language for specifying text strings
• How can we search for any of these?
• woodchuck
• woodchucks
• Woodchuck
• Woodchucks
Regular Expressions: Disjunctions
• Letters inside square brackets []

  Pattern         Matches
  [wW]oodchuck    Woodchuck, woodchuck
  [1234567890]    Any digit

• Ranges [A-Z]

  Pattern   Matches                Example
  [A-Z]     An upper case letter   "Drenched Blossoms"
  [a-z]     A lower case letter    "my beans were impatient"
  [0-9]     A single digit         "Chapter 1: Down the Rabbit Hole"
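These classes can be checked in any regex engine; here a quick look with Python's re module (test strings taken from the table above):

import re

print(re.findall(r"[wW]oodchuck", "Woodchuck saw a woodchuck"))  # ['Woodchuck', 'woodchuck']
print(re.findall(r"[A-Z]", "Drenched Blossoms"))                 # ['D', 'B']
print(re.findall(r"[0-9]", "Chapter 1: Down the Rabbit Hole"))   # ['1']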
Regular Expressions: Negation in Disjunction
• Negations [^Ss]
• Caret means negation only when first in []

  Pattern   Matches                    Example
  [^A-Z]    Not an upper case letter   "Oyfn pripetchik"
  [^Ss]     Neither 'S' nor 's'        "I have no exquisite reason"
  [^e^]     Neither 'e' nor '^'        "Look here"
  a^b       The literal pattern a^b    "Look up a^b now"
Regular Expressions: More Disjunction
• Woodchucks is another name for groundhog!
• The pipe | for disjunction

  Pattern                     Matches
  groundhog|woodchuck         groundhog, woodchuck
  yours|mine                  yours, mine
  a|b|c                       = [abc]
  [gG]roundhog|[Ww]oodchuck   groundhog, Groundhog, woodchuck, Woodchuck
Regular Expressions: ? * + .

  Pattern   Matches                        Examples
  colou?r   Optional previous char         color, colour
  oo*h!     0 or more of previous char     oh! ooh! oooh! ooooh!
  o+h!      1 or more of previous char     oh! ooh! oooh! ooooh!
  baa+      1 or more of previous char     baa baaa baaaa baaaaa
  beg.n     . matches any single char      begin begun began beg3n

• * and + are called the Kleene star and Kleene plus, after Stephen C. Kleene
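The same operators, exercised in Python's re (strings adapted from the table above):

import re

print(re.findall(r"colou?r", "color colour"))          # ['color', 'colour']
print(re.findall(r"o+h!", "oh! ooh! oooh! ooooh!"))    # ['oh!', 'ooh!', 'oooh!', 'ooooh!']
print(re.findall(r"baa+", "baa baaa baaaa"))           # ['baa', 'baaa', 'baaaa']
print(re.findall(r"beg.n", "begin begun began beg3n")) # ['begin', 'begun', 'began', 'beg3n']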
Regular Expressions: Anchors ^ $

  Pattern      Matches
  ^[A-Z]       "Palo Alto"
  ^[^A-Za-z]   "1", ""Hello""
  \.$          "The end."
  .$           "The end?", "The end!"
Example
• Find me all instances of the word “the” in a text.
  the                        misses capitalized examples (The)
  [tT]he                     incorrectly returns other or theology
  [^a-zA-Z][tT]he[^a-zA-Z]   requires a non-letter on either side
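The same refinement, traced in Python; the final \b pattern is one step beyond the slide, using word boundaries to also catch a line-initial "The" (an assumed next fix, not slide content):

import re

text = "The other one, then the theology one."
print(re.findall(r"the", text))                       # 4 hits: inside other, then, theology
print(re.findall(r"[tT]he", text))                    # adds 'The', but still over-matches
print(re.findall(r"[^a-zA-Z][tT]he[^a-zA-Z]", text))  # [' the ']: now misses line-initial 'The'
print(re.findall(r"\b[tT]he\b", text))                # ['The', 'the']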
Errors
• The process we just went through was based on fixing
two kinds of errors
• Matching strings that we should not have matched (there,
then, other)
• False positives (Type I)
• Not matching things that we should have matched (The)
• False negatives (Type II)
Errors cont.
• In NLP we are always dealing with these kinds of
errors.
• Reducing the error rate for an application often
involves two antagonistic efforts:
• Increasing accuracy or precision (minimizing false positives)
• Increasing coverage or recall (minimizing false negatives).
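Concretely, with hypothetical counts of 90 true positives, 10 false positives, and 30 false negatives (illustrative numbers, not from the slides):

tp, fp, fn = 90, 10, 30          # true pos., false pos. (Type I), false neg. (Type II)
precision = tp / (tp + fp)       # 0.90: fraction of matches that were correct
recall    = tp / (tp + fn)       # 0.75: fraction of true instances that were found
print(precision, recall)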
Summary
• Regular expressions play a surprisingly large role
• Sophisticated sequences of regular expressions are often the first model
for any text processing task
• For many hard tasks, we use machine learning classifiers
• But regular expressions are used as features in the classifiers
• Can be very useful in capturing generalizations
Basic Text Processing: Word Tokenization
Text Normalization
• Every NLP task needs to do text
normalization:
1. Segmenting/tokenizing words in running text
2. Normalizing word formats
3. Segmenting sentences in running text
How many words?
• I do uh main- mainly business data processing
• Fragments, filled pauses
• Seuss’s cat in the hat is different from other cats!
• Lemma: same stem, part of speech, rough word sense
• cat and cats = same lemma
• Wordform: the full inflected surface form
• cat and cats = different wordforms
How many words?
they lay back on the San Francisco grass and looked at the stars and their
Sample word-type counts from a corpus:
1945 A
72 AARON
19 ABBESS
5 ABBOT
25 Aaron
6 Abate
1 Abates
5 Abbess
6 Abbey
3 Abbot
...
The first step: tokenizing
tr -sc 'A-Za-z' '\n' < shakes.txt | head
THE
SONNETS
by
William
Shakespeare
From
fairest
creatures
We
...
The second step: sorting
tr -sc 'A-Za-z' '\n' < shakes.txt | sort | head
A
A
A
A
A
A
A
A
A
...
More counting
• Merging upper and lower case
tr 'A-Z' 'a-z' < shakes.txt | tr -sc 'A-Za-z' '\n' | sort | uniq -c
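For comparison, a rough Python equivalent of this pipeline (it assumes the same shakes.txt and, like tr, treats maximal letter runs as tokens):

import re
from collections import Counter

with open("shakes.txt") as f:
    tokens = re.findall(r"[a-z]+", f.read().lower())  # lowercase, then letter runs
for word, n in Counter(tokens).most_common(10):       # counts per type, highest first
    print(n, word)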
• Tokenization is harder in languages written without spaces, e.g. Japanese:
  フォーチュン500社は情報不足のため時間あた$500K(約6,000万円)
  ("Fortune 500 companies, due to lack of information, ... $500K (about ¥60 million)")
Viewing morphology in a corpus
Why only strip -ing if there is a vowel?
(*v*)ing → ø    walking → walk
                sing → sing

tr -sc 'A-Za-z' '\n' < shakes.txt | grep 'ing$' | sort | uniq -c | sort -nr
1312 King
548 being
541 nothing
388 king
375 bring
358 thing
307 ring
152 something
145 coming
130 morning

tr -sc 'A-Za-z' '\n' < shakes.txt | grep '[aeiou].*ing$' | sort | uniq -c | sort -nr
548 being
541 nothing
152 something
145 coming
130 morning
122 having
120 living
117 loving
116 Being
102 going
Dealing with complex morphology is sometimes necessary
• Some languages require complex morpheme segmentation
• Turkish
  • Uygarlastiramadiklarimizdanmissinizcasina
  • '(behaving) as if you are among those whom we could not civilize'
  • Uygar 'civilized' + las 'become'
    + tir 'cause' + ama 'not able'
    + dik 'past' + lar 'plural'
    + imiz 'p1pl' + dan 'abl'
    + mis 'past' + siniz '2pl' + casina 'as if'
Basic Text Processing: Sentence Segmentation and Decision Trees
Sentence Segmentation
• !, ? are relatively unambiguous
• Period “.” is quite ambiguous
• Sentence boundary
• Abbreviations like Inc. or Dr.
• Numbers like .02% or 4.3
• Build a binary classifier
• Looks at a “.”
• Decides EndOfSentence/NotEndOfSentence
• Classifiers: hand-written rules, regular expressions, or machine-learning
Determining if a word is end-of-sentence: a Decision Tree
More sophisticated decision tree features
• Case of word with “.”: Upper, Lower, Cap, Number
• Case of word after “.”: Upper, Lower, Cap, Number
• Numeric features
• Length of word with “.”
• Probability(word with "." occurs at end-of-sentence)
• Probability(word after "." occurs at beginning-of-sentence)
Implementing Decision Trees
• A decision tree is just an if-then-else statement
• The interesting research is choosing the features
• Setting up the structure is often too hard to do by hand
• Hand-building only possible for very simple features, domains
• For numeric features, it’s too hard to pick each threshold
• Instead, structure usually learned by machine learning from a training
corpus
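A hand-written if-then-else version of the end-of-sentence classifier sketched above; the abbreviation list and the specific tests are illustrative assumptions, not a trained model:

def is_end_of_sentence(word, next_word):
    """Classify a '.' as EndOfSentence / NotEndOfSentence (toy rules only)."""
    ABBREV = {"dr", "mr", "inc", "etc"}               # assumed abbreviation list
    if not word.endswith("."):
        return False
    stem = word[:-1].lower()
    if stem in ABBREV:                                # 'Dr.', 'Inc.' rarely end a sentence
        return False
    if any(ch.isdigit() for ch in stem):              # numbers like '4.' are not boundaries
        return False
    return bool(next_word) and next_word[0].isupper() # capitalized next word: likely boundary

print(is_end_of_sentence("home.", "The"))   # True
print(is_end_of_sentence("Dr.", "Smith"))   # False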
Decision Trees and other classifiers
• We can think of the questions in a decision tree
• As features that could be exploited by any kind of
classifier
• Logistic regression
• SVM
• Neural Nets
• etc.