Lecture 27

J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets, http://www.mmds.org

(3) Computing Moments

Generalization: Moments
- Suppose a stream has elements chosen from a set A of N values
- Let $m_i$ be the number of times value i occurs in the stream
- The kth moment is $\sum_{i \in A} (m_i)^k$
- Example: for values i = 1, 2, 3, 4, 5 with counts $m_1, \dots, m_5$, the kth moment is $(m_1)^k + (m_2)^k + \dots + (m_5)^k$
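For reference, a brute-force Python sketch of this definition (the function name `moment` is ours; it assumes the whole stream fits in memory, which the streaming algorithms below avoid):

```python
from collections import Counter

def moment(stream, k):
    """kth moment: sum of (m_i)^k over all distinct values i."""
    counts = Counter(stream)              # m_i for each distinct value i
    return sum(m ** k for m in counts.values())

stream = [1, 2, 2, 3, 3, 3]
print(moment(stream, 0))   # 3  (number of distinct elements)
print(moment(stream, 1))   # 6  (length of the stream)
print(moment(stream, 2))   # 14 (the "surprise number", 1 + 4 + 9)
```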


Special Cases

$\sum_{i \in A} (m_i)^k$

- 0th moment = number of distinct elements
  - The problem just considered
- 1st moment = sum of the counts $m_i$ = length of the stream
  - Easy to compute
- 2nd moment = surprise number S = a measure of how uneven the distribution is


Example: Surprise Number
- Stream of length 100
- 11 distinct values

- Str1: Item counts: 10, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9
  Surprise S = 910 (0th moment = 11, 1st moment = 100, 2nd moment = 910)
- Str2: Item counts: 90, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
  Surprise S = 8,110 (0th moment = 11, 1st moment = 100, 2nd moment = 8,110)

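These numbers are easy to verify directly from the item counts (a quick check in Python):

```python
str1_counts = [10] + [9] * 10   # 11 distinct items, stream length 100
str2_counts = [90] + [1] * 10

print(sum(m ** 2 for m in str1_counts))  # 910
print(sum(m ** 2 for m in str2_counts))  # 8110
```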


[Alon, Matias, and Szegedy]

AMS Method
- The AMS method works for all moments
- It gives an unbiased estimate
- We will just concentrate on the 2nd moment S
- We pick and keep track of many variables X:
  - For each variable X we store X.el and X.val
    - X.el corresponds to the item i
    - X.val corresponds to the count of item i
  - Note this requires keeping a count in main memory, so the number of Xs is limited
- Our goal is to compute $S = \sum_i (m_i)^2$


One Random Variable (X)
- How to set X.val and X.el?
  - Assume the stream has length n (we relax this later)
  - Pick some random time t (t < n) to start, so that any time is equally likely
  - Let the stream have item i at time t. We set X.el = i
  - Then we maintain the count c (X.val = c) of the number of i's in the stream starting from the chosen time t
- Then the estimate of the 2nd moment ($\sum_i (m_i)^2$) is: $f(X) = n (2c - 1)$
- Note, we will keep track of multiple Xs, (X1, X2, ... Xk), and our final estimate will be $S = \frac{1}{k} \sum_{j=1}^{k} f(X_j)$
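A minimal simulation of this estimator, assuming (for now) that the stream can be replayed so the suffix count is easy to take; function and variable names are ours:

```python
import random
from collections import Counter

def ams_estimate(stream, num_vars):
    """Average of num_vars AMS estimates n(2c - 1) of the 2nd moment."""
    n = len(stream)
    total = 0
    for _ in range(num_vars):
        t = random.randrange(n)            # uniform random start time t
        i = stream[t]                      # X.el: the item at time t
        c = stream[t:].count(i)            # X.val: count of i from t onwards
        total += n * (2 * c - 1)           # per-variable estimate f(X)
    return total / num_vars

stream = list("aabbbaba")                  # the example stream analyzed below
exact = sum(m ** 2 for m in Counter(stream).values())
print(exact)                               # 32 = 4^2 + 4^2
print(ams_estimate(stream, 10000))         # close to 32 on average
```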
Expectation Analysis
- Stream: a a b b b a b a   (occurrences of a numbered 1, 2, 3, ..., ma)
- 2nd moment is $S = \sum_i (m_i)^2$
- $c_t$ ... number of times the item at time t appears from time t onwards (c1 = ma, c2 = ma - 1, c3 = mb)
- $m_i$ ... total count of item i in the stream (we are assuming the stream has length n)
- $E[f(X)] = \frac{1}{n} \sum_{t=1}^{n} n (2 c_t - 1)$
- Group the times t by the value seen: for each item i there is a time when the last i is seen (ct = 1), a time when the penultimate i is seen (ct = 2), ..., and a time when the first i is seen (ct = mi)
Expectation Analysis
- Grouping the times by item: $E[f(X)] = \frac{1}{n} \sum_i n \left(1 + 3 + 5 + \dots + (2 m_i - 1)\right)$
- Little side calculation: $1 + 3 + 5 + \dots + (2 m_i - 1) = \sum_{c=1}^{m_i} (2c - 1) = (m_i)^2$
- Then $E[f(X)] = \frac{1}{n} \sum_i n \, (m_i)^2$
- So, $E[f(X)] = \sum_i (m_i)^2 = S$
- We have the second moment (in expectation)!


Higher-Order Moments
- For estimating the kth moment we essentially use the same algorithm but change the estimate:
  - For k = 2 we used $n (2c - 1)$
  - For k = 3 we use: $n (3c^2 - 3c + 1)$ (where c = X.val)
- Why?
  - For k = 2: Remember we had $\sum_{c=1}^{m} (2c - 1) = m^2$, i.e. the terms 2c - 1 (for c = 1, ..., m) sum to $m^2$
    - So: $2c - 1 = c^2 - (c - 1)^2$
  - For k = 3: $c^3 - (c - 1)^3 = 3c^2 - 3c + 1$
- Generally: Estimate $f(X) = n \left(c^k - (c - 1)^k\right)$
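In the earlier simulation, only one line changes (a sketch under the same assumptions; c is X.val as before):

```python
import random

def ams_estimate_kth(stream, k, num_vars):
    """AMS estimate of the kth moment: n * (c^k - (c-1)^k) per variable."""
    n = len(stream)
    total = 0
    for _ in range(num_vars):
        t = random.randrange(n)
        c = stream[t:].count(stream[t])
        total += n * (c ** k - (c - 1) ** k)   # k=2 gives n(2c - 1)
    return total / num_vars
```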
Combining Samples
- In practice:
  - Compute $f(X) = n (2c - 1)$ for as many variables X as you can fit in memory
  - Average them in groups
  - Take the median of the averages, as sketched below
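A sketch of the combination step (median-of-means; the default group size is an arbitrary choice of ours). Averaging within a group reduces variance, and taking the median across groups suppresses occasional outlier groups:

```python
import statistics

def combine(estimates, group_size=10):
    """Median of group averages over a list of per-variable estimates."""
    groups = [estimates[i:i + group_size]
              for i in range(0, len(estimates), group_size)]
    return statistics.median(sum(g) / len(g) for g in groups)
```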

- Problem: Streams never end
  - We assumed there was a number n, the number of positions in the stream
  - But real streams go on forever, so n is a variable: the number of inputs seen so far
Streams Never End: Fixups
- (1) The variables X have n as a factor: keep n separately; just hold the count in X
- (2) Suppose we can only store k counts. We must throw some Xs out as time goes on:
  - Objective: Each starting time t is selected with probability k/n
  - Solution: (fixed-size sampling!)
    - Choose the first k times for the k variables
    - When the nth element arrives (n > k), choose it as a new start time with probability k/n
    - If you choose it, throw out one of the previously stored variables X, with equal probability
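A minimal sketch of this maintenance scheme (reservoir-style replacement; the list-of-pairs representation is ours). Note n is kept outside the variables, so each estimate is formed as n(2·X.val - 1) only when queried:

```python
import random

def maintain_variables(stream, k):
    """Keep k AMS variables [X.el, X.val] with start times chosen u.a.r."""
    variables = []                      # each entry: [item, count since start]
    n = 0
    for item in stream:
        n += 1
        for var in variables:           # the new arrival counts toward every
            if var[0] == item:          # variable already tracking this item
                var[1] += 1
        if n <= k:
            variables.append([item, 1])        # first k times: always start
        elif random.random() < k / n:
            # Start a new variable here; evict a stored one chosen uniformly
            variables[random.randrange(k)] = [item, 1]
    return n, variables

n, vars_ = maintain_variables(iter("aabbbaba"), k=4)
estimate = sum(n * (2 * c - 1) for _, c in vars_) / len(vars_)
```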
Counting Itemsets
- New Problem: Given a stream, which items appear more than s times in the window?
- Possible solution: Think of the stream of baskets as one binary stream per item
  - 1 = item present; 0 = not present
  - Use DGIM to estimate counts of 1s for all items
[Figure: the binary stream for one item over a window of length N, with the exponentially growing DGIM buckets used to estimate the count of 1s.]
Extensions
- In principle, you could count frequent pairs or even larger sets the same way
  - One stream per itemset
- Drawbacks:
  - Only approximate
  - Number of itemsets is way too big


Exponentially Decaying Windows
- Exponentially decaying windows: a heuristic for selecting likely frequent item(sets)
  - What are "currently" the most popular movies?
    - Instead of computing the raw count in the last N elements
    - Compute a smooth aggregation over the whole stream
- If the stream is a1, a2, ... and we are taking the sum of the stream, take the answer at time t to be: $\sum_{i=1}^{t} a_i (1 - c)^{t-i}$
  - c is a constant, presumably tiny, like $10^{-6}$ or $10^{-9}$
- When a new element a(t+1) arrives: Multiply the current sum by (1 - c) and add a(t+1)
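The update rule in code (a minimal sketch; c = 10^-6 is just the slide's example value):

```python
def decayed_sum(stream, c=1e-6):
    """Running exponentially decayed sum of a numeric stream."""
    total = 0.0
    for a in stream:
        total = total * (1 - c) + a    # decay the old sum, add the new element
    return total                       # equals sum of a_i * (1-c)^(t-i)
```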
Example: Counting Items
- If each $a_i$ is an "item" we can compute the characteristic function of each possible item x as an exponentially decaying window
  - That is: $\sum_{i=1}^{t} \delta_i (1 - c)^{t-i}$, where $\delta_i = 1$ if $a_i = x$, and 0 otherwise
  - Imagine that for each item x we have a binary stream (1 if x appears, 0 if x does not appear)
- When a new item x arrives:
  - Multiply all counts by (1 - c)
  - Add +1 to the count for element x
- Call this sum the "weight" of item x
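A direct sketch of these two steps (names are ours; the inner loop touches every stored weight per arrival, which a real implementation would avoid by decaying lazily with timestamps):

```python
def decayed_item_weights(stream, c=1e-6, cutoff=0.5):
    """Exponentially decayed weight per item, dropping negligible weights."""
    weights = {}
    for x in stream:
        for key in list(weights):       # multiply all counts by (1 - c)
            weights[key] *= 1 - c
            if weights[key] < cutoff:   # the 1/2 cutoff used on later slides
                del weights[key]
        weights[x] = weights.get(x, 0.0) + 1.0   # add 1 for the arriving item
    return weights
```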
Sliding Versus Decaying Windows

[Figure: a decaying window gives each element a smoothly decreasing weight; its total weight matches a sliding window of length 1/c.]

- Important property: The sum over all weights is the geometric series $\sum_{j \ge 0} (1 - c)^j = 1/[1 - (1 - c)] = 1/c$


Example: Counting Items
- What are "currently" the most popular movies?
  - Suppose we want to find movies of weight > 1/2
- Important property: The sum over all weights is 1/[1 - (1 - c)] = 1/c
- Thus:
  - There cannot be more than 2/c movies with weight 1/2 or more, since otherwise the weights would sum to more than 1/c
  - So, 2/c is a limit on the number of movies being counted at any time


Extension to Itemsets
- Count (some) itemsets in an E.D.W.
  - What are currently "hot" itemsets?
  - Problem: Too many itemsets to keep counts of all of them in memory
- When a basket B comes in:
  - Multiply all counts by (1 - c)
  - For uncounted items in B, create a new count
  - Add 1 to the count of any item in B and of any itemset contained in B that is already being counted
  - Drop counts < 1/2
  - Initiate new counts (next slide)
Initiation of New Counts
- Start a count for an itemset S ⊆ B if every proper subset of S had a count prior to the arrival of basket B
  - Intuitively: if all subsets of S are already being counted, they are "frequent/hot", so S has the potential to be "hot" as well
- Example:
  - Start counting S = {i, j} iff both i and j were counted prior to seeing B
  - Start counting S = {i, j, k} iff {i, j}, {i, k}, and {j, k} were all counted prior to seeing B
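Putting the last two slides together, a minimal sketch (the representation and names are ours; the subset enumeration is naive and only meant to mirror the rules, not to be efficient):

```python
from itertools import combinations

def process_basket(weights, basket, c=1e-6):
    """One E.D.W. update of itemset weights when basket B arrives."""
    basket = frozenset(basket)
    prior = set(weights)                  # itemsets counted prior to B
    for s in weights:                     # multiply all counts by (1 - c)
        weights[s] *= 1 - c
    for s in prior:                       # +1 for counted itemsets inside B
        if s <= basket:
            weights[s] += 1.0
    for item in basket:                   # new counts for uncounted items
        single = frozenset([item])
        if single not in prior:
            weights[single] = 1.0
    for s in list(weights):               # drop counts below 1/2
        if weights[s] < 0.5:
            del weights[s]
    # Initiate S (|S| >= 2) if every proper subset was counted prior to B
    for size in range(2, len(basket) + 1):
        for combo in combinations(sorted(basket), size):
            s = frozenset(combo)
            if s not in weights and all(
                frozenset(sub) in prior
                for r in range(1, size)
                for sub in combinations(combo, r)
            ):
                weights[s] = 1.0
    return weights
```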
How many counts do we need?
- Counts for single items < (2/c) · (average number of items in a basket)
- Counts for larger itemsets = ??
- But we are conservative about starting counts of large sets
  - If we counted every set we saw, one basket of 20 items would initiate about 1M counts ($2^{20} \approx 10^6$ subsets)
