
The Boolean Retrieval Model and Extended Boolean Models
Information Retrieval and Search Engines
Cam-Tu Nguyen, Ph.D.
Email: [email protected]
Sec. 1.3

Boolean Retrieval Model

• In the Boolean retrieval model, we can pose queries that are Boolean expressions:
  • Boolean queries use AND, OR and NOT to join query terms
  • Views each document as a set of words
  • Is precise: a document either matches the condition or it does not
• Perhaps the simplest model on which to build an IR system
• The primary commercial retrieval tool for three decades
• Many search systems you still use are Boolean:
  • Email, library catalogs, macOS Spotlight
Sec. 1.1

Searching in unstructured data

• Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia?
• One could grep (the Unix command) through all of Shakespeare’s plays for Brutus and Caesar, then strip out lines containing Calpurnia.
• … Why is that not enough?
Sec. 1.1

Term-document incidence matrices

Query: Brutus AND Caesar BUT NOT Calpurnia

Shakespeare’s collected works contain about 32,000 different terms. Entry (term, play) is 1 if the play contains the word, 0 otherwise:

            Antony and  Julius  The      Hamlet  Othello  Macbeth
            Cleopatra   Caesar  Tempest
Antony          1          1       0        0       0        1
Brutus          1          1       0        1       0        0
Caesar          1          1       0        1       1        1
Calpurnia       0          1       0        0       0        0
Cleopatra       1          0       0        0       0        0
mercy           1          0       1        1       1        1
worser          1          0       1        1       1        0
Sec. 1.1

Incidence vectors

• So we have a 0/1 vector for each term.
• To answer the query: take the vectors for Brutus, Caesar and Calpurnia (complemented) → bitwise AND.

      110100   (Brutus)
  AND 110111   (Caesar)
  AND 101111   (NOT Calpurnia)
  =   100100   → Antony and Cleopatra, Hamlet
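As a minimal sketch (not from the slides), the computation above can be written in Python, treating each row of the toy matrix as an integer bit vector; the play list and the three vectors are taken from the matrix on the previous slide:

# Toy incidence vectors, one bit per play, leftmost bit = first play.
PLAYS = ["Antony and Cleopatra", "Julius Caesar", "The Tempest",
         "Hamlet", "Othello", "Macbeth"]

incidence = {
    "Brutus":    0b110100,
    "Caesar":    0b110111,
    "Calpurnia": 0b010000,
}

# Brutus AND Caesar AND NOT Calpurnia, as bitwise operations.
mask = (1 << len(PLAYS)) - 1   # keep only 6 bits after complementing
result = (incidence["Brutus"]
          & incidence["Caesar"]
          & (~incidence["Calpurnia"] & mask))

# Decode the answer vector 100100 back into play titles.
matches = [play for i, play in enumerate(PLAYS)
           if result & (1 << (len(PLAYS) - 1 - i))]
print(matches)   # ['Antony and Cleopatra', 'Hamlet']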
Can’t build the matrix

• Consider N = 1 million documents, each with about 1,000 words
• At an average of 6 bytes/word (including spaces/punctuation), that is 6 GB of document data
• Say there are M = 500K distinct terms among these
• A 500K × 1M matrix has half a trillion 0’s and 1’s
• What’s a better representation? Record only the positions with a 1. Why? Because the matrix is extremely sparse! → the inverted index
Sec. 1.2

Inverted index construction (offline)

Documents to be indexed:  “Friends, Romans, countrymen.”
        ↓ Tokenizer
Token stream:             Friends  Romans  Countrymen
        ↓ Linguistic modules
Modified tokens:          friend  roman  countryman
        ↓ Indexer
Inverted index:           friend     → 2, 4
                          roman      → 1, 2
                          countryman → 13, 16
Questions

• Text processing includes tokenization, normalization, stemming and lemmatization, and removal of stop words
• Can you explain each stage of text processing?
• In what cases can not indexing stop words cause harm?
Initial stages of text processing

• Tokenization
  • Cut the character sequence into word tokens
  • Special tokens: C++, C#, M*A*S*H (the TV show name)
• Normalization
  • Map text and query terms to the same form
  • You want U.S.A. and USA to match
• Stemming and lemmatization
  • We may wish different forms of a root to match
  • authorize, authorization
• Stop words
  • We may omit very common words (or not)
  • the, a, to, of
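A minimal sketch of these stages, with assumptions flagged: the tokenizer pattern, the suffix-stripping "stemmer", and the toy stop list are illustrative stand-ins for real components (e.g., a Porter stemmer or a lemmatizer):

import re

STOP_WORDS = {"the", "a", "to", "of"}   # toy stop list from the slide

def tokenize(text):
    # Cut the character sequence into word tokens; the extra characters
    # in the class keep tokens like C++, C#, M*A*S*H, U.S.A. intact.
    return re.findall(r"[\w.+#*]+", text)

def normalize(token):
    # Map text and query terms to the same form, e.g. U.S.A. -> usa.
    return token.lower().replace(".", "")

def stem(token):
    # Crude stand-in for a real stemmer: strip a common suffix so that
    # e.g. authorize / authorization share the root "author".
    for suffix in ("ization", "ation", "ize", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[:-len(suffix)]
    return token

def preprocess(text):
    tokens = [normalize(t) for t in tokenize(text)]
    return [stem(t) for t in tokens if t not in STOP_WORDS]

print(preprocess("Friends, Romans, countrymen."))
# ['friend', 'roman', 'countrymen'] -- a real lemmatizer would
# map 'countrymen' to 'countryman', as on the pipeline slide.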
Sec. 1.2

Inverted index

• For each term t, we must store a list of all documents that contain t
• We need variable-size postings lists
  • On disk, a contiguous run of postings is normal and best
  • In memory, we can use linked lists or variable-length arrays
  • Some tradeoffs in size / ease of insertion

  Brutus    → 1, 2, 4, 11, 31, 45, 173, 174
  Caesar    → 1, 2, 4, 5, 6, 16, 57, 132
  Calpurnia → 2, 31, 54, 101

  (The terms on the left form the dictionary; each docID is a posting, and each row is a postings list.)

• Sorted by docID (more later on why)
Questions

• How do we build the inverted index for a collection of documents?
• How do we store the dictionary? How do we store the postings?
• How much space do we need for an inverted index?
• How do we build the inverted index if the collection is large?
Sec. 1.2

Indexer steps: Token sequence

• Sequence of (modified token, document ID) pairs

  Doc 1: I did enact Julius Caesar I was killed i’ the Capitol; Brutus killed me.
  Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.
Sec. 1.2

Indexer steps: Sort

• Sort by terms
  • At least conceptually
• And then by docID

This is the core indexing step.
Sec. 1.2

Indexer steps: Dictionary & Postings

• Multiple term entries in a single document are merged
• Split into Dictionary and Postings
• Document frequency information is added

Why frequency? We will discuss later.
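A sketch of these three indexer steps under simplifying assumptions: plain Python dicts and lists stand in for real on-disk structures, and docs holds already-preprocessed tokens:

from collections import defaultdict

def build_index(docs):
    # docs: docID -> list of already-preprocessed tokens.
    # Step 1: sequence of (modified token, docID) pairs.
    pairs = [(term, doc_id) for doc_id, terms in docs.items() for term in terms]
    # Step 2: sort by term, then by docID (the core indexing step).
    pairs.sort()
    # Step 3: merge duplicate (term, docID) entries; split into a
    # dictionary (term -> document frequency) and postings lists.
    postings = defaultdict(list)
    for term, doc_id in pairs:
        if not postings[term] or postings[term][-1] != doc_id:
            postings[term].append(doc_id)
    dictionary = {term: len(plist) for term, plist in postings.items()}
    return dictionary, postings

docs = {
    1: "i did enact julius caesar i was killed i the capitol brutus killed me".split(),
    2: "so let it be with caesar the noble brutus hath told you caesar was ambitious".split(),
}
dictionary, postings = build_index(docs)
print(postings["brutus"], dictionary["caesar"])   # [1, 2] 2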
Sec. 1.2

Where do we pay in storage?

• The dictionary: terms and counts
• The postings: lists of docIDs
• Pointers from dictionary entries to their postings lists

IR system implementation questions:
• How do we index efficiently?
• How much storage do we need?
Sec. 1.3

Questions

• How do we process a query based on the inverted index?
• What kinds of queries can we process?
  • Boolean queries
  • Phrase queries
  • Proximity queries
Sec. 1.3

Query processing: AND

• Consider processing the query: Brutus AND Caesar
  • Locate Brutus in the dictionary; retrieve its postings
  • Locate Caesar in the dictionary; retrieve its postings
  • “Merge” the two postings lists (intersect the document sets):

  Brutus → 2, 4, 8, 16, 32, 64, 128
  Caesar → 1, 2, 3, 5, 8, 13, 21, 34
Intersecting two postings lists (a “merge” algorithm)

• At each step, move the pointer that points to the smaller docID.
Sec. 1.3

The merge

• Walk through the two postings lists simultaneously, in time linear in the total number of postings entries:

  Brutus → 2, 4, 8, 16, 32, 64, 128
  Caesar → 1, 2, 3, 5, 8, 13, 21, 34
  Result → 2, 8

• If the list lengths are x and y, the merge takes O(x + y) operations
• Crucial: postings are sorted by docID
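A sketch of this merge in Python (assuming uncompressed in-memory lists; the two-pointer walk is the algorithm the slide describes):

def intersect(p1, p2):
    # Intersect two postings lists sorted by docID in O(x + y) time.
    answer = []
    i = j = 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1      # advance the pointer sitting at the smaller docID
        else:
            j += 1
    return answer

brutus = [2, 4, 8, 16, 32, 64, 128]
caesar = [1, 2, 3, 5, 8, 13, 21, 34]
print(intersect(brutus, caesar))   # [2, 8]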
Sec. 1.3

Boolean queries: More general merges

• Exercise: Adapt the merge for the queries:
  (a) Brutus AND NOT Caesar
  (b) Brutus OR NOT Caesar
• Can we still run through the merge in time O(x + y)? What can we achieve?
Sec. 1.3

Merging
What about an arbitrary Boolean formula?
(Brutus OR Caesar) AND NOT (Antony OR Cleopatra)
• Can we always merge in “linear” time?
• Linear in what?
• Can we do better?

Sec. 1.3

Query optimization

• What is the best order for query processing?
• Consider a query that is an AND of n terms
• For each of the n terms, get its postings, then AND them together

  Brutus    → 2, 4, 8, 16, 32, 64, 128
  Caesar    → 1, 2, 3, 5, 8, 16, 21, 34
  Calpurnia → 13, 16

Query: Brutus AND Calpurnia AND Caesar
Sec. 1.3

Query optimization example

• Process terms in order of increasing document frequency:
  • Start with the smallest set, then keep cutting further
• This is why we kept document frequency in the dictionary

  Brutus    → 2, 4, 8, 16, 32, 64, 128
  Caesar    → 1, 2, 3, 5, 8, 16, 21, 34
  Calpurnia → 13, 16

• Execute the query as (Calpurnia AND Brutus) AND Caesar
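A sketch of frequency-ordered AND processing, reusing the intersect() sketch above; the names dictionary and postings follow the earlier build_index sketch:

def intersect_many(terms, dictionary, postings):
    # Process in order of increasing document frequency: start with the
    # smallest postings list and keep cutting the intermediate result.
    if not terms:
        return []
    terms = sorted(terms, key=lambda t: dictionary.get(t, 0))
    result = postings.get(terms[0], [])
    for term in terms[1:]:
        result = intersect(result, postings.get(term, []))
        if not result:    # the AND result can only shrink; stop early
            break
    return result

# intersect_many(["brutus", "calpurnia", "caesar"], dictionary, postings)
# executes as (calpurnia AND brutus) AND caesar.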
Exercise

• Recommend a query processing order for:

  (tangerine OR trees) AND (marmalade OR skies) AND (kaleidoscope OR eyes)

  Term           Freq
  eyes           213312
  kaleidoscope    87009
  marmalade      107913
  skies          271658
  tangerine       46653
  trees          316812

• Which two terms should we process first?
Sec. 1.3

More general optimization

• e.g., (madding OR crowd) AND (ignoble OR strife)
• Get document frequencies for all terms
• Estimate the size of each OR by the sum of its terms’ doc frequencies (a conservative upper bound)
• Process the groups in increasing order of these OR-size estimates
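One way to sketch this heuristic (the list-of-term-lists grouping format is an assumption):

def order_or_groups(groups, dictionary):
    # groups: list of OR-groups, each a list of terms.
    # Estimate |t1 OR t2| conservatively as df(t1) + df(t2), an upper
    # bound, and sort the groups so the AND processes small ones first.
    return sorted(groups, key=lambda g: sum(dictionary.get(t, 0) for t in g))

# order_or_groups([["madding", "crowd"], ["ignoble", "strife"]], dictionary)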
Sec. 2.4

Phrase queries

• We want to be able to answer queries such as “stanford university” – as a phrase
  • Thus the sentence “I went to university at Stanford” is not a match
• The concept of phrase queries has proven easily understood by users; it is one of the few “advanced search” ideas that works
• Many more queries are implicit phrase queries
• For this, it no longer suffices to store only <term : docs> entries
Sec. 2.4.1

A first attempt: Biword indexes

• Index every consecutive pair of terms in the text as a phrase
• For example, the text “Friends, Romans, Countrymen” would generate the biwords:
  • friends romans
  • romans countrymen
• Each of these biwords is now a dictionary term
• Two-word phrase query processing is now immediate
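A minimal sketch of biword generation; each emitted pair would then be indexed as an ordinary dictionary term:

def biwords(tokens):
    # Index every consecutive pair of terms as a single phrase term.
    return [f"{a} {b}" for a, b in zip(tokens, tokens[1:])]

print(biwords(["friends", "romans", "countrymen"]))
# ['friends romans', 'romans countrymen']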
Sec. 2.4.1

Longer phrase queries

• Longer phrases can be processed by breaking them down
• stanford university palo alto can be broken into the Boolean query on biwords:
  stanford university AND university palo AND palo alto
• Without examining the documents themselves, we cannot verify that the docs matching the above Boolean query actually contain the phrase
  • Can have false positives!
Sec. 2.4.1

Extended biwords

• Parse the indexed text and perform part-of-speech tagging (POST)
• Bucket the terms into (say) nouns (N) and articles/prepositions (X)
• Call any string of terms of the form N X* N an extended biword
• Each such extended biword is now made a term in the dictionary
• Example:  catcher in the rye
            N       X  X   N
• Query processing: parse the query into N’s and X’s
  • Segment the query into enhanced biwords
  • Look up in the index: catcher rye
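A sketch of extended-biword extraction; the (term, tag) input format and the two-way N/X bucketing are assumptions (a real system would take tags from a POS tagger, and would reset on tags other than N and X):

def extended_biwords(tagged):
    # tagged: list of (term, tag) pairs, tag "N" for nouns and "X" for
    # articles/prepositions. Emits each N X* N sequence as one term.
    out = []
    last_noun = None
    for term, tag in tagged:
        if tag == "N":
            if last_noun is not None:
                out.append(f"{last_noun} {term}")
            last_noun = term
        # "X" terms are skipped: they may sit between the two nouns.
    return out

print(extended_biwords([("catcher", "N"), ("in", "X"), ("the", "X"), ("rye", "N")]))
# ['catcher rye']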
Sec. 2.4.1

Issues for biword indexes

• False positives, as noted before
• Index blowup due to a bigger dictionary
  • Infeasible for more than biwords, big even for them
• Biword indexes are not the standard solution (for all biwords) but can be part of a compound strategy
Sec. 2.4.2

Solution 2: Positional indexes

• In the postings, store, for each term, the position(s) at which its tokens appear:

  <term, document frequency;
   docID: position, position, … ;
   docID: position, … >

• The position list records where the term appears in each document; its length for a given document is the term frequency in that doc
Sec. 2.4.2

Positional index example

  <be: 993427;
   1: 7, 18, 33, 72, 86, 231;
   2: 3, 149;
   4: 17, 191, 291, 430, 434;
   5: 363, 367, …>

• Which of docs 1, 2, 4, 5 could contain “to be or not to be”?
• For phrase queries, we use a merge algorithm recursively at the document level
• But we now need to deal with more than just equality
Sec. 2.4.2

Processing a phrase query

• Extract inverted index entries for each distinct term: to, be, or, not
• Merge their doc:position lists to enumerate all positions with “to be or not to be”
  • to:
    2: 1, 17, 74, 222, 551;  4: 8, 16, 190, 429, 433;  7: 13, 23, 191;  ...
  • be:
    1: 17, 19;  4: 17, 191, 291, 430, 434;  5: 14, 19, 101;  ...
• The same general method works for proximity searches
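A sketch of the two-level merge for a two-term phrase, assuming positional postings are held as docID -> sorted position list maps; the set intersection at the document level stands in for the linear merge shown earlier:

def phrase_intersect(pp1, pp2):
    # pp1, pp2: docID -> sorted position list for term1 and term2.
    # A doc matches the phrase "term1 term2" if term2 occurs at the
    # position immediately after some occurrence of term1.
    answer = []
    for doc in sorted(set(pp1) & set(pp2)):            # document level
        positions2 = set(pp2[doc])
        if any(p + 1 in positions2 for p in pp1[doc]):  # position level
            answer.append(doc)
    return answer

to = {2: [1, 17, 74, 222, 551], 4: [8, 16, 190, 429, 433], 7: [13, 23, 191]}
be = {1: [17, 19], 4: [17, 191, 291, 430, 434], 5: [14, 19, 101]}
print(phrase_intersect(to, be))   # [4]: "to be" at 16-17, 190-191, 429-430, 433-434

Longer phrases chain this pairwise, carrying the matching positions along from step to step.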
Sec. 2.4.2

Proximity queries

• employment /3 place
  • Here, /k means “within k words of (on either side)”
• Clearly, positional indexes can be used for such queries; biword indexes cannot
• Exercise: Adapt the linear merge of postings to handle proximity queries. Can you make it work for any value of k?
  • This is a little tricky to do correctly and efficiently
  • See Figure 2.12 of IIR
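A hedged sketch of the proximity variant (in the spirit of Figure 2.12 of IIR, not a transcription of it): the adjacency test from the phrase sketch becomes |p1 - p2| <= k.

def proximity_intersect(pp1, pp2, k):
    # Docs where some occurrence of term1 lies within k words of term2.
    answer = []
    for doc in sorted(set(pp1) & set(pp2)):
        if any(abs(p1 - p2) <= k for p1 in pp1[doc] for p2 in pp2[doc]):
            answer.append(doc)
    return answer

# employment /3 place  ->  proximity_intersect(pp["employment"], pp["place"], 3)

Note this checks all position pairs per document, which is quadratic in the position-list lengths; IIR's algorithm keeps a sliding window over positions to stay linear.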
Sec. 2.4.2

Positional index size

• A positional index expands postings storage substantially
  • Even though indices can be compressed
• Nevertheless, a positional index is now standardly used, because of the power and usefulness of phrase and proximity queries … whether used explicitly or implicitly in a ranking retrieval system
Sec. 2.4.2

Positional index size

• We need an entry for each occurrence, not just one per document
• Index size depends on average document size
  • The average web page has <1000 terms
  • SEC filings, books, even some epic poems … easily 100,000 terms
• Consider a term with frequency 0.1% (one occurrence per 1,000 terms):

  Document size   Postings   Positional postings
  1,000           1          1
  100,000         1          100

  (In a 100,000-term document the term occurs about 100 times: still a single docID posting, but 100 positional postings.)
Sec. 2.4.2

Rules of thumb

• A positional index is 2–4 times as large as a non-positional index
• Positional index size is 35–50% of the volume of the original text
• Caveat: all of this holds for “English-like” languages
Sec. 2.4.3

Combination schemes

• These two approaches can be profitably combined
  • For particular phrases (“Michael Jackson”, “Britney Spears”) it is inefficient to keep merging positional postings lists
  • Even more so for phrases like “The Who”
• Williams et al. (2004) evaluate a more sophisticated mixed indexing scheme
  • A typical web query mixture was executed in ¼ of the time of using just a positional index
  • It required 26% more space than a positional index alone
Read More

• Chapters 1 and 2, IIR
• Chapter 1, SE

Acknowledgements

Many slides in this section are adapted from the slides of Prof. Christopher Manning (Stanford).
