2-Boolean IR and Indexing

The document covers the basics of Boolean retrieval and indexing in modern information retrieval systems, highlighting the use of Boolean expressions to form queries and the structure of term-incidence matrices. It discusses the advantages and disadvantages of the Boolean model, including its efficiency and challenges with user query formulation. Additionally, it introduces advanced indexing techniques such as inverted indexes and positional indexes for improved query processing and handling of phrase queries.


Boolean retrieval & basics of indexing

CE-324: Modern Information Retrieval


Sharif University of Technology

M. Soleymani
Spring 2024

Most slides have been adapted from Profs. Manning, Nayak, and
Raghavan's lectures (CS-276, Stanford)
Boolean retrieval model
} Query: Boolean expressions
} Boolean queries use AND, OR and NOT to join query terms

} Views each doc as a set of words


} Term-incidence matrix is sufficient
} Shows presence or absence of terms in each doc

} Perhaps the simplest model to build an IR system on

Sec. 1.3

Boolean queries: Exact match


} In the pure Boolean model, retrieved docs are not ranked
} The result is a set of docs
} It is exact match: a doc either matches the condition or it does not

} Primary commercial retrieval tool for 3 decades (until the 1990s)

} Many search systems you still use are Boolean:
} Email, library catalogs, Mac OS X Spotlight

The classic search model

Task:       get rid of mice in a politically correct way
   ↓  (misconception?)
Info need:  info about removing mice without killing them
   ↓  (misformulation?)
Query:      mouse trap
   ↓
SEARCH ENGINE over the corpus → query results
   (with a query-refinement loop back from results to query)
Sec. 1.1

Example: Plays of Shakespeare


} Which plays of Shakespeare contain the words Brutus
AND Caesar but NOT Calpurnia?
} Scan all of Shakespeare’s plays for Brutus and Caesar, then
strip out those containing Calpurnia?

} That solution cannot work for large corpora
(it is computationally expensive)

} Efficiency is an important issue, along with effectiveness
} Index: a data structure built on the text to speed up searches
Sec. 1.1
Example: Plays of Shakespeare
Term-document incidence matrix

            Antony and  Julius  The      Hamlet  Othello  Macbeth
            Cleopatra   Caesar  Tempest
Antony          1          1       0        0       0        1
Brutus          1          1       0        1       0        0
Caesar          1          1       0        1       1        1
Calpurnia       0          1       0        0       0        0
Cleopatra       1          0       0        0       0        0
mercy           1          0       1        1       1        1
worser          1          0       1        1       1        0

(1 if the play contains the word, 0 otherwise)
Sec. 1.1

Incidence vectors
} So we have a 0/1 vector for each term.
} Brutus AND Caesar but NOT Calpurnia

} To answer the query: take the vectors for Brutus, Caesar,
and Calpurnia (complemented) → bitwise AND
} 110100 AND 110111 AND 101111 = 100100

            Antony and  Julius  The      Hamlet  Othello  Macbeth
            Cleopatra   Caesar  Tempest
Antony          1          1       0        0       0        1
Brutus          1          1       0        1       0        0
Caesar          1          1       0        1       1        1
Calpurnia       0          1       0        0       0        0
Cleopatra       1          0       0        0       0        0
mercy           1          0       1        1       1        1
worser          1          0       1        1       1        0
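A minimal sketch of this bitwise evaluation in Python, treating each term's row in the matrix as an integer bit vector (the variable names and the 6-bit layout are illustrative, not from the slides):

# Incidence vectors from the matrix above, one bit per play
# (leftmost bit = Antony and Cleopatra, rightmost = Macbeth).
brutus    = 0b110100
caesar    = 0b110111
calpurnia = 0b010000

NUM_PLAYS = 6
mask = (1 << NUM_PLAYS) - 1          # 0b111111

# Brutus AND Caesar AND NOT Calpurnia, as bitwise operations.
answer = brutus & caesar & (~calpurnia & mask)
print(format(answer, "06b"))         # -> 100100: Antony and Cleopatra, Hamlet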
Sec. 1.1

Answers to query Brutus AND Caesar but NOT Calpurnia

} Antony and Cleopatra, Act III, Scene ii


Agrippa [Aside to DOMITIUS ENOBARBUS]: Why, Enobarbus,
When Antony found Julius Caesar dead,
He cried almost to roaring; and he wept
When at Philippi he found Brutus slain.

} Hamlet, Act III, Scene ii


Lord Polonius: I did enact Julius Caesar: I was killed i' the
Capitol; Brutus killed me.

Sec. 1.1

Bigger collections
} Number of docs: N = 10^6
} Average length of a doc ≈ 1000 words
} No. of distinct terms: M = 500,000
} Average length of a word ≈ 6 bytes
} including spaces/punctuation

} 6 GB of data (10^6 docs × 1000 words/doc × 6 bytes/word)

Sec. 1.1

Sparsity of Term-document incidence matrix


} A 500K × 1M matrix has half a trillion 0’s and 1’s

} But it has no more than one billion 1’s. Why?
} With ≈ 1000 words per doc, there are at most 10^6 × 1000 = 10^9 term
occurrences, so at most 10^9 cells are 1
} The matrix is extremely sparse: at least 99.8% of the cells are zero

} What’s a better representation?
} We only record the positions of the 1’s
Sec. 1.2

Inverted index
} For each term t, store a list of all docs that contain t.
} Identify each by a docID, a document serial number

} Can we use fixed-size arrays for this?

Brutus    → 1  2  4  11  31  45  173  174
Caesar    → 1  2  4  5   6   16  57   132
Calpurnia → 2  31  54  101

} What happens if the word Caesar is added to doc 14?
Sec. 1.2

Inverted index
} We need variable-size postings lists
} On disk, a contiguous run of postings is normal and best
} In memory, can use linked lists or variable-length arrays
} Some tradeoffs in size / ease of insertion

Brutus    → 1  2  4  11  31  45  173  174
Caesar    → 1  2  4  5   6   16  57   132
Calpurnia → 2  31  54  101

(dictionary on the left, postings lists on the right; each entry in a
postings list is called a posting, and postings are sorted by docID)
Sec. 1.2

Inverted index construction


Docs to be indexed:   Friends, Romans, countrymen.
        ↓  Tokenizer
Token stream:         Friends  Romans  Countrymen
        ↓  Linguistic modules (we will see more on these later)
Modified tokens:      friend  roman  countryman
        ↓  Indexer
Inverted index:       friend     → 2  4
                      roman      → 1  2
                      countryman → 13  16
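A minimal sketch of this pipeline in Python; the tokenizer and “linguistic modules” below are crude stand-ins (punctuation stripping and lowercasing instead of real tokenization and stemming):

from collections import defaultdict

def build_inverted_index(docs):
    """docs: dict mapping docID -> text. Returns term -> sorted list of docIDs."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        # Tokenizer: split on whitespace, strip surrounding punctuation.
        tokens = [t.strip(".,;:!?'") for t in text.split()]
        # "Linguistic modules": just lowercasing here (a stand-in for
        # the normalization/stemming covered later).
        terms = [t.lower() for t in tokens if t]
        for term in terms:
            index[term].add(doc_id)   # duplicate occurrences in a doc collapse
    # Indexer: postings sorted by docID, as the merge algorithms require.
    return {term: sorted(ids) for term, ids in index.items()}

docs = {1: "I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.",
        2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious"}
index = build_inverted_index(docs)
print(index["brutus"])   # -> [1, 2]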
Sec. 1.2

Indexer steps: Token sequence

} Sequence of (Modified token, Document ID) pairs.

Doc 1:  I did enact Julius Caesar: I was killed i' the Capitol;
        Brutus killed me.

Doc 2:  So let it be with Caesar. The noble Brutus hath told you
        Caesar was ambitious
Sec. 1.2

Indexer steps: Sort

} Sort by terms
} and then by docID within each term

This is the core indexing step.

Sec. 1.2

Indexer steps: Dictionary & Postings

} Multiple entries of a term in a single doc are merged
} Split into Dictionary and Postings
} Document frequency information is added

Why frequency? Will discuss later.
Sec. 1.2

Where do we pay in storage?

} Postings: the lists of docIDs
} Dictionary: the terms and their counts
} Pointers from dictionary entries to postings lists
Sec. 3.1

A naïve dictionary
} An array of struct:

char[20]     int            Postings*
(term)       (doc freq.)    (pointer to postings)
Sec. 1.3

Query processing: AND


} Consider processing the query:
Brutus AND Caesar
} Locate Brutus in the dictionary;
} Retrieve its postings.
} Locate Caesar in the dictionary;
} Retrieve its postings.
} “Merge” (intersect) the two postings:

Brutus → 2  4  8  16  32  64  128
Caesar → 1  2  3  5   8   13  21  34
Sec. 1.3

The merge
} Walk through the two postings simultaneously, in time
linear in the total number of postings entries

Brutus → 2  4  8  41  48  64  128
Caesar → 1  2  3  8   11  17  21  31
Result → 2  8

} If the list lengths are x and y, the merge takes O(x+y) operations
} Crucial: postings sorted by docID
Intersecting two postings lists
(a “merge” algorithm)

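A Python rendering of this two-pointer merge might look like the following (a sketch in the spirit of the textbook's INTERSECT pseudocode, IIR Figure 1.6, not the book's exact code):

def intersect(p1, p2):
    """Intersect two postings lists, each sorted by docID, in O(x+y) time."""
    answer = []
    i, j = 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])   # docID present in both lists
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1                 # advance the pointer at the smaller docID
        else:
            j += 1
    return answer

print(intersect([2, 4, 8, 41, 48, 64, 128], [1, 2, 3, 8, 11, 17, 21, 31]))  # -> [2, 8]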
Sec. 1.3

Boolean queries: More general merges


} Exercise: Adapt the merge for the queries:
Brutus AND NOT Caesar
Brutus OR NOT Caesar

} Can we still run through the merge in time O(x + y)?
Sec. 1.3

Merging
What about an arbitrary Boolean formula?
(Brutus OR Caesar) AND NOT (Antony OR Cleopatra)

} Can we merge in “linear” time for general Boolean queries?
} Linear in what?
} Can we do better?

Sec. 1.3

Query optimization
} What is the best order for query processing?
} Consider a query that is an AND of n terms.
} For each of the n terms, get its postings, then AND them together.

Brutus    → 2  4  8  16  32  64  128
Caesar    → 1  2  3  5   8   16  21  34
Calpurnia → 13  16

Query: Brutus AND Calpurnia AND Caesar
Sec. 1.3

Query optimization example


} Process terms in order of increasing doc frequency:
} start with the smallest set, then keep cutting further

(This is why we keep document freq. in the dictionary.)

Brutus    → 2  4  8  16  32  64  128
Caesar    → 1  2  3  5   8   16  21  34
Calpurnia → 13  16

Execute the query as (Calpurnia AND Brutus) AND Caesar.
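A sketch of this heuristic, reusing the intersect function above; it assumes the index maps each term to a sorted postings list, so len() serves as the document frequency:

from functools import reduce

def intersect_query(index, terms):
    """AND together the postings of all query terms, cheapest first."""
    # Process postings lists in order of increasing document frequency.
    postings = sorted((index[t] for t in terms), key=len)
    return reduce(intersect, postings)

index = {"brutus":    [2, 4, 8, 16, 32, 64, 128],
         "caesar":    [1, 2, 3, 5, 8, 16, 21, 34],
         "calpurnia": [13, 16]}
print(intersect_query(index, ["brutus", "calpurnia", "caesar"]))  # -> [16]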
Sec. 1.3

More general optimization


} Example:
(madding OR crowd) AND (ignoble OR strife)

} Get doc frequencies for all terms.
} Estimate the size of each OR by the sum of its doc freq.’s
(a conservative upper bound).
} Process in increasing order of OR sizes.
Summary of Boolean IR:
Advantages of exact match
} It can be implemented very efficiently

} Predictable, easy to explain
} precise semantics

} Structured queries for pinpointing precise docs
} neat formalism

} Works well when you know exactly (or roughly) what the
collection contains and what you’re looking for
Summary of Boolean IR:
Disadvantages of the Boolean Model
} Query formulation (Boolean expression) is difficult for
most users
} Most users write overly simplistic Boolean queries
} AND and OR sit at opposite extremes of a precision/recall tradeoff
} Usually either too few or too many docs in response to a user query

} Retrieval is based on a binary decision criterion
} No ranking of the docs is provided

} Difficulty increases with collection size
Ranking results in advanced IR models
} Boolean queries give inclusion or exclusion of docs
} The result of a query in the Boolean model is a set

} Modern information retrieval systems are no longer
based on the Boolean model

} Often we want to rank/group results
} Need to measure the proximity of each doc to the query
} Index term weighting can provide a substantial improvement
Phrase and proximity queries:
positional indexes

Sec. 2.4

Phrase queries
} Example: “stanford university”
} “I went to university at Stanford” is not a match.

} Easily understood by users
} One of the few “advanced search” ideas that works
} At least 10% of web queries are phrase queries
} Many more queries are implicit phrase queries
} such as person names entered without double quotes

} It is not sufficient to store only docIDs in the postings lists
Approaches for phrase queries

} Indexing biwords (two-word phrases)

} Positional indexes
} Full inverted index

Sec. 2.4.1

Biword indexes
} Index every consecutive pair of terms in the text as a
phrase
} E.g., the doc “Friends, Romans, Countrymen”
} would generate these biwords:
} “friends romans”, “romans countrymen”

} Each of these biwords is now a dictionary term

} Two-word phrase query processing is now immediate.
Sec. 2.4.1

Biword indexes: Longer phrase queries


} Longer phrases are processed as a conjunction of biwords

Query: “stanford university palo alto”

} can be broken into the Boolean query on biwords:
“stanford university” AND “university palo” AND “palo alto”

} Can have false positives!
} Without examining the docs, we cannot verify that the docs matching
the above Boolean query actually contain the phrase.
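A sketch of both ideas, biword generation and phrase-as-conjunction, reusing the earlier intersect helper (the function names are illustrative):

def biwords(terms):
    """All consecutive pairs of a term sequence, as dictionary entries."""
    return [f"{a} {b}" for a, b in zip(terms, terms[1:])]

def phrase_query_biword(biword_index, phrase):
    """Process a phrase query as an AND of its biwords.
    May return false positives for phrases of more than two words."""
    terms = phrase.lower().split()
    result = None
    for bw in biwords(terms):
        postings = biword_index.get(bw, [])
        result = postings if result is None else intersect(result, postings)
    return result or []

# "stanford university palo alto" is evaluated as
# "stanford university" AND "university palo" AND "palo alto"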
Sec. 2.4.1

Issues for biword indexes


} False positives (for phrases with more than two words)

} Index blowup due to a bigger dictionary
} Infeasible beyond biwords, and big even for biwords

} Biword indexes are not the standard solution (for all
biwords) but can be part of a compound strategy
Sec. 2.4.2

Positional index
} In the postings, store for each term the position(s) in
which tokens of it appear:

<term, doc freq.;
 doc1: position1, position2 … ;
 doc2: position1, position2 … ; …>

<be: 993427;
 1: 7, 18, 33, 72, 86, 231;
 2: 3, 149;
 4: 17, 191, 291, 430, 434;
 5: 363, 367, …>

Which of docs 1, 2, 4, 5 could contain “to be or not to be”?
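One possible in-memory rendering of this postings format in Python (a sketch; production systems use compressed, on-disk layouts):

# term -> (document frequency, {docID: sorted list of positions})
positional_index = {
    "be": (993427, {1: [7, 18, 33, 72, 86, 231],
                    2: [3, 149],
                    4: [17, 191, 291, 430, 434],
                    5: [363, 367]}),
}

df, postings = positional_index["be"]
print(postings[4])   # positions of "be" in doc 4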
Sec. 2.4.2

Positional index
} For phrase queries, we use a merge algorithm recursively
at the doc level

} We need to deal with more than just equality of docIDs:
} Phrase query: find places where all the words appear in
sequence
} Proximity query: find places where all the words appear close
enough together
Sec. 2.4.2

Processing a phrase query: Example


} Query: “to be or not to be”
} Extract inverted index entries for: to, be, or, not
} Merge: find positions i where “to” occurs at i and i+4, “be” at
i+1 and i+5, “or” at i+2, and “not” at i+3
} to:
} <2: 1,17,74,222,551>; <4: 8,16,190,429,433,512>; <7: 13,23,191>; ...
} be:
} <1: 17,19>; <4: 17,191,291,430,434>; <5: 14,19,101>; ...
} or:
} <3: 5,15,19>; <4: 5,100,251,431,438>; <7: 17,52,121>; ...
} not:
} <4: 71,432>; <6: 20,85>; ...
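A sketch of this positional merge for an arbitrary phrase, using the dict layout shown earlier (unoptimized; a real implementation walks the sorted position lists with pointers):

def phrase_matches(positional_index, phrase):
    """Return {docID: [start positions]} where the phrase occurs as a sequence."""
    terms = phrase.lower().split()
    # Per-term map docID -> set of positions (sets make the position checks O(1)).
    entries = [{d: set(pos) for d, pos in positional_index[t][1].items()}
               for t in terms]
    common_docs = set(entries[0]).intersection(*entries[1:])
    matches = {}
    for doc in sorted(common_docs):
        # The phrase starts at i iff term k occurs at position i + k, for all k.
        starts = sorted(i for i in entries[0][doc]
                        if all(i + k in entries[k][doc] for k in range(1, len(terms))))
        if starts:
            matches[doc] = starts
    return matches

# On the postings above, "to be or not to be" matches doc 4 at position 429:
# to@429, be@430, or@431, not@432, to@433, be@434.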
Sec. 2.4.2

Positional index: Proximity queries


} k-word proximity searches
} Find places where the words occur within k words of each other

} Positional indexes can be used for such queries
} as opposed to biword indexes

} Exercise: Adapt the linear merge of postings to handle
proximity queries. Can you make it work for any value of k?
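One possible answer for the two-term case (a sketch; the textbook's positional intersection algorithm, IIR Figure 2.12, handles this more carefully, including reporting every matching pair):

def within_k(positions1, positions2, k):
    """True iff some occurrence in positions1 is within k words of one in
    positions2. Both lists are sorted; the walk is linear in their lengths."""
    i, j = 0, 0
    while i < len(positions1) and j < len(positions2):
        if abs(positions1[i] - positions2[j]) <= k:
            return True
        if positions1[i] < positions2[j]:
            i += 1                 # advance the pointer at the smaller position
        else:
            j += 1
    return False

print(within_k([7, 18, 33], [30, 160], 5))   # -> True: |33 - 30| <= 5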
Sec. 2.4.2

Positional index: size


} You can compress position values/offsets
} Nevertheless, a positional index expands postings storage
substantially

} Positional indexes are now standardly used
} because of the power and usefulness of phrase and proximity
queries
} used explicitly or implicitly in ranking retrieval systems
Sec. 2.4.2

Positional index: size


} Need an entry for each occurrence, not just one per doc
} Index size therefore depends on average doc size

} Average web page has < 1000 terms
} SEC filings, books, even some epic poems … easily 100,000 terms

} Consider a term with frequency 0.1%: it is expected to occur once
per 1000 terms, so the non-positional index still gets one posting
per doc, while the positional index gets one entry per occurrence

Doc size (# of terms)   Expected postings      Expected entries in
                        (non-positional)       positional postings
1000                    1                      1
100,000                 1                      100
Sec. 2.4.2

Positional index: size (rules of thumb)


} A positional index is usually 2–4 times as large as a
non-positional index

} Positional index size is 35–50% of the volume of the original text

} Caveat: all of this holds for “English-like” languages
Sec. 2.4.3

Phrase queries: Combination schemes


} Combining the two approaches
} For queries whose individual words are common (so their positional
postings lists are long), it is inefficient to merge positional
postings lists
} Good queries to include in the phrase index:
} common queries, based on recent querying behavior
} phrases whose individual words are common but the phrase itself
is not that common
} Example: “The Who”
Phrase queries: Combination schemes
} Williams et al. (2004) evaluate a more sophisticated
mixed indexing scheme
} needs, on average, ¼ of the time of using just a positional index
} needs 26% more space than a positional index alone
Resources
} IIR, Chapter 1
} IIR, Chapter 2
