
Information Retrieval

and Organisation

Dell Zhang
Birkbeck, University of London
IR Chapter 03

Dictionaries and
Tolerant Retrieval
Dictionaries
I Dictionary: the data structure for storing the
term vocabulary
Brutus −→ 1 2 4 11 31 45 173 174

Caesar −→ 1 2 4 5 6 16 57 132 ...

Calpurnia −→ 2 31 54 101

...
(the terms on the left form the dictionary; the lists of document IDs on the right are the postings)
Storing Dictionaries
I For each term, we need to store a couple of
items:
I document frequency
I pointer to postings list
I ...
I Assume for the time being that
I we can store this information in a fixed-length entry
I we store these entries in an array
Storing Dictionaries
term     document frequency   pointer to postings list
a        656,265              −→
aachen   65                   −→
...      ...                  ...
zulu     221                  −→

space needed: 20 bytes (term), 4 bytes (frequency), 4 bytes (pointer)

I How do we look up an element in this array at
query time?
I Remember: these dictionaries can be huge,
scanning is not an option
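I A minimal Python sketch (illustrative names and postings offsets, not from
the slides): if the fixed-width entries are kept sorted by term, a term can be
located by binary search instead of a scan.

from bisect import bisect_left

# Each entry: (term, document frequency, offset of its postings list).
dictionary = [
    ("a",      656265, 0),
    ("aachen", 65,     4096),
    ("zulu",   221,    1234567),
]
terms = [entry[0] for entry in dictionary]  # parallel key array for bisect

def lookup(term):
    """Return (document frequency, postings pointer), or None if absent."""
    i = bisect_left(terms, term)
    if i < len(terms) and terms[i] == term:
        return dictionary[i][1], dictionary[i][2]
    return None

print(lookup("aachen"))  # (65, 4096)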
Data Structures
I Two main classes of data structures:
hash tables and trees
I Some IR systems use hash tables, some use trees.
I Criteria for when to use hash tables vs trees:
I Is there a fixed number of terms or will it keep
growing?
I What are the relative frequencies with which
various keys will be accessed?
I How many terms are we likely to have?
Hash Tables
I Each vocabulary term is hashed into an integer.
I Try to avoid collisions
I At query time, do the following:
I hash query term
I resolve collisions
I locate entry in fixed-width array
I Pros:
I Lookup in a hash table is faster than in a tree.
I Cons:
I no prefix search (all terms starting with automat)
I need to rehash everything periodically if vocabulary
keeps growing
Trees
I Trees solve the prefix problem (find all terms
starting with automat).
I Simplest tree: binary tree.
I However, binary trees are problematic:
I Only balanced trees allow efficient retrieval
I Rebalancing binary trees is expensive
I Use B-trees (the index structure that you know
from database lectures)
B-Tree
[B-tree diagram, taken from the documentation for Oracle 10g]
Wildcard Queries
I mon*: find all docs containing any term
beginning with mon
I Easy with B-tree dictionary
I retrieve all terms t in the range: mon ≤ t < moo
I *mon: find all docs containing any term ending
with mon
I Maintain an additional tree for terms backwards,
then
I retrieve all terms t in the range: nom ≤ t < non
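I A Python sketch of the same range lookups on a sorted term list, standing in
for the B-tree range scans (toy vocabulary and function names are illustrative):

from bisect import bisect_left

terms = sorted(["man", "money", "monitor", "month", "moon", "salmon", "sermon"])

def prefix_range(sorted_terms, prefix):
    """All terms t with prefix <= t < the next string after prefix (e.g. mon <= t < moo)."""
    lo = bisect_left(sorted_terms, prefix)
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)  # "mon" -> "moo" (simplistic bound)
    hi = bisect_left(sorted_terms, upper)
    return sorted_terms[lo:hi]

print(prefix_range(terms, "mon"))  # ['money', 'monitor', 'month']

# For *mon, keep a second sorted list of reversed terms and search for the prefix "nom".
reversed_terms = sorted(t[::-1] for t in terms)
print([t[::-1] for t in prefix_range(reversed_terms, "nom")])  # ['salmon', 'sermon']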
Query Processing
I At this point, we have an enumeration of all
terms in the dictionary that match the wildcard
query.
I We still have to look up the postings for each
enumerated term.
I e.g., consider the query: gen* AND universit*
I This may result in the execution of many
Boolean AND queries.
Wildcards in Middle of Term
I Example: m*nchen
I We could look up m* and *nchen in the B-tree
and intersect the two term sets.
I Expensive (there are probably thousands and
thousands of terms beginning with “m”)
I Alternative: permuterm index
I Basic idea: Rotate every wildcard query, so that
the * occurs at the end.
Permuterm Index
I For term hello: add
hello$, ello$h, llo$he, lo$hel, o$hell, and $hello
to the B-tree where $ is a special symbol
Permuterm Index
I Queries
I For X, look up X$
I For X*, look up $X*
I For *X, look up X$*
I For *X*, look up X*
I For X*Y, look up Y$X*
I Example:
I For hel*o, look up o$hel*
I It’s really a tree and should be called a
permuterm tree
I But permuterm index is the more common name.
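I A small Python sketch of the rotations and of rewriting a single-* wildcard
query (names are illustrative, not from the slides):

def permuterm_keys(term):
    """All rotations of term + '$'; each would point back to term in the B-tree."""
    augmented = term + "$"
    return [augmented[i:] + augmented[:i] for i in range(len(augmented))]

print(permuterm_keys("hello"))
# ['hello$', 'ello$h', 'llo$he', 'lo$hel', 'o$hell', '$hello']

def rewrite_wildcard(query):
    """Rotate a wildcard query so the '*' ends up at the end (X*Y -> Y$X*).
    Handles at most one '*'; for *X* the slide looks up X* directly."""
    if "*" not in query:
        return query + "$"
    x, y = query.split("*")
    return y + "$" + x + "*"

print(rewrite_wildcard("hel*o"))    # 'o$hel*'
print(rewrite_wildcard("m*nchen"))  # 'nchen$m*'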
Query Processing
I Once we have modified the query (as shown on the
last slide), we can do a regular lookup on a B-tree
I This is much faster than looking up X* and *Y
and combining results (for query X*Y)
I Permuterm index also handles leading
wildcards: *X
I It has a disadvantage, though: quadruples the
size of the dictionary compared to a regular
B-tree (as every term is stored multiple times)
k-gram Index
I More space-efficient than permuterm index
I Enumerate all character k-grams (sequence of
k characters) occurring in a term
I 2-grams are also called bigrams
I 3-grams are also called trigrams
I Example:
I from April is the cruelest month
we get the bigrams:
$a ap pr ri il l$ $i is s$ $t th he e$ $c cr ru ue el le
es st t$ $m mo on nt th h$
I $ is a special word boundary symbol.
I Maintain an inverted index from bigrams to the
terms that contain the bigram
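I A Python sketch of building such a k-gram index over a toy vocabulary
(names are illustrative):

from collections import defaultdict

def kgrams(term, k=2):
    """The set of character k-grams of a term; '$' marks the word boundary."""
    padded = "$" + term + "$"
    return {padded[i:i + k] for i in range(len(padded) - k + 1)}

def build_kgram_index(vocabulary, k=2):
    index = defaultdict(set)
    for term in vocabulary:
        for gram in kgrams(term, k):
            index[gram].add(term)
    return index

index = build_kgram_index(["april", "is", "the", "cruelest", "month", "moon"])
print(sorted(index["on"]))  # ['month', 'moon']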
Postings List in a 3-gram Index

etr - beetroot - metric - petrify - retrieval

I Note that we now have two different types of
inverted indexes:
I The term-document inverted index for finding
documents based on a query consisting of terms
I The k-gram index for finding terms based on a
query consisting of k-grams
Processing Wildcard Queries
I Query mon* can now be run as:
$m AND mo AND on
I Gets us all terms with the prefix mon . . .
I . . . but also many “false positives” like moon
I We must post-filter these terms against query
I Surviving terms are then looked up in the
term-document inverted index.
I k-gram indexes are fast and space efficient
(compared to permuterm indexes).
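I Putting the pieces together for mon*, as a self-contained Python sketch
(toy vocabulary, illustrative names):

vocabulary = ["month", "monitor", "moon", "morning", "salmon"]

def bigrams(term):
    padded = "$" + term + "$"
    return {padded[i:i + 2] for i in range(len(padded) - 1)}

index = {}
for term in vocabulary:
    for gram in bigrams(term):
        index.setdefault(gram, set()).add(term)

query_grams = ["$m", "mo", "on"]  # the bigrams of mon*
candidates = set.intersection(*(index[g] for g in query_grams))
print(sorted(candidates))                                   # ['monitor', 'month', 'moon']
survivors = [t for t in candidates if t.startswith("mon")]  # post-filter drops 'moon'
print(sorted(survivors))                                    # ['monitor', 'month']
# The surviving terms would then be looked up in the term-document index.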
Processing Wildcard Queries
I We must potentially execute a large number of
Boolean queries for each enumerated, filtered
term (on the term-document index)
I Recall the query: gen* AND universit*
I Most straightforward semantics: Conjunction of
disjunctions
I Very expensive
I Users hate to type
I If abbreviated queries like pyth* theo* for
Pythagoras’ theorem are legal, users will use
them . . .
I . . . a lot
Spelling Correction
I Two principal uses
I Correcting documents being indexed
I Correcting user queries
I Two different methods
I Isolated Word Spelling Correction
I Check each word on its own for misspelling
I Will not catch typos resulting in correctly spelled
words, e.g., an asteroid that fell form the sky
I Context-Sensitive Spelling Correction
I Look at surrounding words
I Can correct the form/from error above
Correcting Documents
I We’re not interested in interactive spelling
correction of documents (e.g., MS Word) in
this class.
I In IR, we use document correction primarily for
OCR’ed documents (i.e. documents digitized
via Optical Character Recognition)
I The general philosophy in IR is: don’t change
the documents.
Correcting Queries
I First: isolated word spelling correction
I Fundamental premise 1: There is a list of “correct
words” from which the correct spellings come.
I Fundamental premise 2: We have a way of
computing the distance between a misspelled word
and a correct word.
I Simple spelling correction algorithm:
return the “correct” word that has the smallest
distance to the misspelled word.
I Example: informaton → information
Correcting Queries
I Can we use the term vocabulary of the inverted
index as the list of correct words?
I It can be very biased
I It may be missing certain terms
I Alternatives:
I A standard dictionary
(Webster’s, Encyclopædia Britannica, etc.)
I An industry-specific dictionary
(for specialized IR systems)
I The term vocabulary of the collection,
appropriately weighted
Computing Distance
I How can we compute the distance between
words?
I We’ll look at some alternatives:
I edit distance (Levenshtein distance)
I weighted edit distance
I k-gram overlap
Edit Distance
I The (minimum) edit distance between two
strings s1 and s2 is the minimum number of
basic operations to convert s1 to s2.
I Levenshtein distance: the admissible basic
operations are: insert, delete, and replace
I Levenshtein distance dog→do: 1 (deletion)
I Levenshtein distance cat→cart: 1 (insertion)
I Levenshtein distance cat→cut: 1 (replacement)
I Levenshtein distance cat→act: 2
(2 replacements or 1 insertion and 1 deletion)
Computing Distance
I Getting from cats to fast
      “”           f            a             s             t
“”    “” → “”      “” → f       “” → fa       “” → fas      “” → fast
c     c → “”       c → f        c → fa        c → fas       c → fast
a     ca → “”      ca → f       ca → fa       ca → fas      ca → fast
t     cat → “”     cat → f      cat → fa      cat → fas     cat → fast
s     cats → “”    cats → f     cats → fa     cats → fas    cats → fast

I Each cell will contain the (cheapest) cost of getting
from the string on the left-hand side
to the string on the right-hand side
Computing Distance
I We know the costs for
the uppermost row and the leftmost column:
I we have to get from “” to fast by inserting
characters
I we have to get from cats to “” by deleting
characters
     “”   f   a   s   t
“”    0   1   2   3   4
c     1
a     2
t     3
s     4
Computing Distance

I For other cells, take the minimum of the costs of
the three ways of reaching them:
I Coming from (a), the cell to the left:
I add 1 to cost in (a) — insertion
I Coming from (b), the cell above:
I add 1 to cost in (b) — deletion
I Coming from (c), the cell diagonally above and to the left:
I if characters in row and column are equal,
copy cost from (c)
I otherwise,
add 1 to cost in (c) — replacement
Resulting Matrix
I Computing the costs for all cells results in the
following matrix:
     “”   f   a   s   t
“”    0   1   2   3   4
c     1   1   2   3   4
a     2   2   1   2   3
t     3   3   2   2   2
s     4   4   3   2   3

I So the Levenshtein distance is 3


Algorithm
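I The dynamic program from the previous slides, as a minimal Python sketch
(not the slide's original pseudocode):

def levenshtein(s1, s2):
    rows, cols = len(s1) + 1, len(s2) + 1
    d = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        d[i][0] = i                                      # delete all of s1[:i]
    for j in range(cols):
        d[0][j] = j                                      # insert all of s2[:j]
    for i in range(1, rows):
        for j in range(1, cols):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1    # copy or replace
            d[i][j] = min(d[i][j - 1] + 1,               # insertion (from the left)
                          d[i - 1][j] + 1,               # deletion (from above)
                          d[i - 1][j - 1] + cost)        # copy/replacement (diagonal)
    return d[rows - 1][cols - 1]

print(levenshtein("cats", "fast"))  # 3, matching the matrix above
print(levenshtein("cat", "act"))    # 2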
Weighted Edit Distance
I Like Levenshtein distance, but the weight of an
operation depends on the characters involved.
I Meant to capture keyboard errors
I e.g., m more likely to be mistyped as n than as q.
I therefore, replacing m by n is a smaller edit
distance than by q.
I We now require a weight matrix as input.
I Modify dynamic programming to handle
weights.
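I A sketch of the only change needed in the dynamic program above, assuming a
hypothetical substitution-weight table:

# Hypothetical weight table: keyboard-adjacent letters are cheap to confuse.
KEYBOARD_WEIGHTS = {("m", "n"): 0.5, ("n", "m"): 0.5}

def sub_cost(a, b):
    if a == b:
        return 0
    return KEYBOARD_WEIGHTS.get((a, b), 1)  # default weight 1

print(sub_cost("m", "n"))  # 0.5
# In levenshtein() above, replace the constant replacement cost with
#     cost = sub_cost(s1[i - 1], s2[j - 1])
# Insertion and deletion costs can be made character-dependent the same way.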
Using Edit Distances
I Comparing query term q to all terms in the
vocabulary is too expensive
I Solution: use heuristics to determine subset
I Only compare to terms beginning with the same
letter (doesn’t work for typos at beginning)
I Generate set of rotations for q and use a
permuterm index (doesn’t work well for
replacements)
I For each rotation, omit a suffix of l characters
before doing lookup in permuterm index
I Ensures that each term in query rotation shares a
substring with retrieved terms
I The value of l could be fixed to a constant length
(e.g. 2), or depend on the length of q
Using a k-gram Index
I Enumerate all k-grams in the query term
I Use the k-gram index to retrieve “correct”
words that match query term k-grams
I Threshold by number of matching k-grams
I e.g., only vocabulary terms that differ by at most 3
k-grams
Example with 2-grams
I Suppose the misspelled word is “bordroom”:
$b, bo, or, rd, dr, ro, oo, om, m$

bo - aboard - about - boardroom - border

or - border - lord - morbid - sordid

rd - aboard - ardent - boardroom - border


Example with 3-grams
I Suppose the correct word is “november”:
$$n, $no, nov, ove, vem, emb, mbe, ber, er$, r$$
I And the query term is “december”:
$$d, $de, dec, ece, cem, emb, mbe, ber, er$, r$$
I So 5 trigrams overlap (out of 10 in each term)
I Issue: Fixed number of k-grams that differ
does not work for words of differing length.
I How can we turn this into a normalized
measure of overlap?
Jaccard Coefficient
I A commonly used measure of two sets’ overlap
I Let A and B be two sets
I Jaccard coefficient:
|A ∩ B| / |A ∪ B|

I A and B don’t have to be the same size.
I Always assigns a number between 0 and 1.
I Application to spelling correction: declare a
match if the coefficient is, say, > 0.8.
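I A Python sketch that turns the trigram overlap from the november/december
example into a Jaccard score (names are illustrative):

def trigrams(term):
    padded = "$$" + term + "$$"
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def jaccard(a, b):
    return len(a & b) / len(a | b)

q, t = trigrams("december"), trigrams("november")
print(sorted(q & t))            # the 5 shared trigrams: ber, emb, er$, mbe, r$$
print(round(jaccard(q, t), 2))  # 5 shared of 15 distinct -> 0.33, below a 0.8 threshold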
Context-Sensitive Correction
I Our example was:
“an asteroid that fell form the sky”
I How can we correct form here?
I One idea: hit-based spelling correction
I We’ll return to this idea when we talk about
the probabilistic approach to spelling correction, in
the second half of the module.
Context-Sensitive Correction
I Given query “flew form munich”
I Retrieve the correct terms close to each query
term
I flea for flew
I from for form
I munch for munich
I Now try all possible resulting phrases as
queries, with one word fixed at a time
I Try query “flea form munich”
I Try query “flew from munich”
I Try query “flew form munch”
I The correct query “flew from munich”
should have the most hits.
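I A sketch of enumerating those variants in Python, substituting one word at a
time; the variant with the most hits would be chosen (all names illustrative):

query = ["flew", "form", "munich"]
alternatives = {"flew": ["flea"], "form": ["from"], "munich": ["munch"]}

def candidate_phrases(words, alts):
    for i, w in enumerate(words):
        for alt in alts.get(w, []):
            yield words[:i] + [alt] + words[i + 1:]

for phrase in candidate_phrases(query, alternatives):
    print(" ".join(phrase))
# flea form munich
# flew from munich
# flew form munch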
Context-Sensitive Correction
I The hit-based algorithm we just outlined is not
very efficient.
I Suppose we have 7 alternatives for flew, 19 for
form and 3 for munich
I Then we have to test 7 × 19 × 3 different variants
I More efficient alternative: look at the
collection of queries, not documents
I This assumes that we log queries
General Issues
I User interface
I Automatic or suggested correction
I “Did you mean” only works for one suggestion.
I What about multiple possible corrections?
I Tradeoff: simple vs powerful UI
I Cost
I Spelling correction is potentially expensive.
I Avoid running on every query?
I Maybe just on queries that match few documents.
Phonetic Matching
I Soundex is the basis for finding phonetic (as
opposed to orthographic) alternatives.
I e.g., Chebyshev / Tchebyscheff
I Algorithm:
I Turn every token to be indexed into a 4-character
reduced form
I Do the same with query terms
I Build and search an index on the reduced forms
Soundex Algorithm
1. Retain the first letter of the term.
2. Change all occurrences of the following letters to 0 (zero):
I A, E, I, O, U, H, W, Y

3. Change letters to digits as follows:
I B, F, P, V ⇒ 1
I C, G, J, K, Q, S, X, Z ⇒ 2
I D, T ⇒ 3
I L ⇒ 4
I M, N ⇒ 5
I R ⇒ 6
4. Repeatedly remove one out of each pair of consecutive identical
digits
5. Remove all 0s from the resulting string; pad the resulting string
with trailing 0s, and return the first four positions, which will
consist of a letter followed by three digits
Soundex Algorithm
I Example
               difficulty   difference
steps 1 and 2  d0ff0c0lt0   d0ff0r0nc0
step 3         d011020430   d011060520
step 4         d01020430    d01060520
step 5         d124         d165

I Vowels are viewed as being interchangeable


I Consonants with similar sounds (e.g. D and T)
are put in equivalence classes
I Works fairly well for European languages
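I A Python sketch of the reduction just described; it reproduces the
difficulty/difference example above (names are illustrative):

CODES = {**dict.fromkeys("AEIOUHWY", "0"),
         **dict.fromkeys("BFPV", "1"),
         **dict.fromkeys("CGJKQSXZ", "2"),
         **dict.fromkeys("DT", "3"),
         "L": "4",
         **dict.fromkeys("MN", "5"),
         "R": "6"}

def soundex(term):
    term = term.upper()
    digits = [CODES[c] for c in term[1:] if c in CODES]   # steps 2 and 3
    deduped = [d for i, d in enumerate(digits)
               if i == 0 or d != digits[i - 1]]           # step 4
    kept = [d for d in deduped if d != "0"]               # step 5: drop zeros,
    return (term[0].lower() + "".join(kept) + "000")[:4]  # pad, keep four positions

print(soundex("difficulty"))  # d124
print(soundex("difference"))  # d165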
Summary
I How to organize a dictionary of an inverted
index
I How to do imprecise searches on this dictionary
handling
I wildcards
I spelling mistakes
