Data Mining: Concepts and Techniques
— Chapter 6 —
Basic Concepts
Frequent Itemset Mining Methods
Evaluation Methods
Summary
What Is Frequent Pattern Analysis?
Frequent pattern: a pattern (a set of items, subsequences, substructures,
etc.) that occurs frequently in a data set
First proposed by Agrawal, Imielinski, and Swami [AIS93] in the context of
frequent itemsets and association rule mining
Motivation: Finding inherent regularities in data
What products were often purchased together? Beer and diapers?!
What are the subsequent purchases after buying a PC?
What kinds of DNA are sensitive to this new drug?
Can we automatically classify web documents?
Applications
Basket data analysis, cross-marketing, catalog design, sale campaign
analysis, Web log (click stream) analysis, and DNA sequence analysis.
Why Is Freq. Pattern Mining Important?
Freq. pattern: An intrinsic and important property of
datasets
Foundation for many essential data mining tasks
Association, correlation, and causality analysis
Broad applications
Basic Concepts: Frequent Patterns
itemset: a set of one or more items; a k-itemset is X = {x1, …, xk}
(absolute) support, or support count, of X: the number of transactions containing X
(relative) support, s: the fraction of transactions that contain X (i.e., the probability that a transaction contains X)
An itemset X is frequent if X's support is no less than a minsup threshold
Basic Concepts: Association Rules
Tid    Items bought
10     Beer, Nuts, Diaper
20     Beer, Coffee, Diaper
30     Beer, Diaper, Eggs
40     Nuts, Eggs, Milk
50     Nuts, Coffee, Diaper, Eggs, Milk

Find all the rules X ⇒ Y with minimum support and confidence
support, s: probability that a transaction contains X ∪ Y
confidence, c: conditional probability that a transaction having X also contains Y

[Figure: Venn diagram of "Customer buys beer" and "Customer buys diaper"; the overlap is "Customer buys both"]

Let minsup = 50%, minconf = 50%
Freq. Pat.: Beer:3, Nuts:3, Diaper:4, Eggs:3, {Beer, Diaper}:3
Association rules (many more!):
Beer ⇒ Diaper (60%, 100%)
Diaper ⇒ Beer (60%, 75%)
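These two definitions are easy to check directly. Below is a minimal Python sketch over the example table above; the function names support and confidence are illustrative, not from the book.

transactions = [
    {"Beer", "Nuts", "Diaper"},
    {"Beer", "Coffee", "Diaper"},
    {"Beer", "Diaper", "Eggs"},
    {"Nuts", "Eggs", "Milk"},
    {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"},
]

def support(itemset):
    # Fraction of transactions containing every item in `itemset`
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(x, y):
    # Conditional probability that a transaction with X also contains Y
    return support(set(x) | set(y)) / support(x)

print(support({"Beer", "Diaper"}))       # 0.6  -> rule support 60%
print(confidence({"Beer"}, {"Diaper"}))  # 1.0  -> Beer => Diaper, 100%
print(confidence({"Diaper"}, {"Beer"}))  # 0.75 -> Diaper => Beer, 75%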
Closed Patterns and Max-Patterns
A long pattern contains a combinatorial number of sub-patterns, e.g., {a1, …, a100} contains (100 choose 1) + (100 choose 2) + … + (100 choose 100) = 2^100 − 1 ≈ 1.27×10^30 sub-patterns!
Solution: Mine closed patterns and max-patterns instead
An itemset X is closed if X is frequent and there exists no super-pattern Y ⊃ X with the same support as X (proposed by Pasquier, et al. @ ICDT’99)
An itemset X is a max-pattern if X is frequent and there exists no frequent super-pattern Y ⊃ X (proposed by Bayardo @ SIGMOD’98)
Closed pattern is a lossless compression of freq. patterns
Reducing the # of patterns and rules
Closed Patterns and Max-Patterns
Exercise: Suppose a DB contains only two transactions
<a1, …, a100>, <a1, …, a50>
Let min_sup = 1
What is the set of closed itemsets?
{a1, …, a100}: 1
{a1, …, a50}: 2
What is the set of max-patterns?
{a1, …, a100}: 1
What is the set of all patterns?
{a1}: 2, …, {a1, a2}: 2, …, {a1, a51}: 1, …, {a1, a2, …, a100}: 1
A big number: 2^100 − 1? Why?
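To make the exercise concrete, here is a hedged brute-force sketch on a scaled-down analogue: 5 items instead of 100, so the 2^100 − 1 blow-up becomes 2^5 − 1 = 31. The names sup, closed, and maximal are illustrative.

from itertools import combinations

# Two transactions: <a1, ..., a5>, <a1, ..., a3>; min_sup = 1
transactions = [frozenset(f"a{i}" for i in range(1, 6)),
                frozenset(f"a{i}" for i in range(1, 4))]
min_sup = 1

def sup(itemset):
    return sum(itemset <= t for t in transactions)

items = sorted(set().union(*transactions))
frequent = {frozenset(c): sup(frozenset(c))
            for k in range(1, len(items) + 1)
            for c in combinations(items, k)
            if sup(frozenset(c)) >= min_sup}

# Closed: no proper superset with the same support
closed = {x for x in frequent
          if not any(x < y and frequent[y] == frequent[x] for y in frequent)}
# Max: no proper superset that is frequent at all
maximal = {x for x in frequent if not any(x < y for y in frequent)}

print(len(frequent))                                    # 2^5 - 1 = 31 patterns
print({tuple(sorted(x)): frequent[x] for x in closed})  # {a1..a5}: 1, {a1..a3}: 2
print([tuple(sorted(x)) for x in maximal])              # only {a1..a5}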
Chapter 6: Mining Frequent Patterns, Association and Correlations: Basic Concepts and Methods
Basic Concepts
Frequent Itemset Mining Methods
Evaluation Methods
Summary
Scalable Frequent Itemset Mining Methods
Apriori: A Candidate Generation-and-Test Approach
Improving the Efficiency of Apriori
FPGrowth: A Frequent Pattern-Growth Approach
ECLAT: Frequent Pattern Mining with Vertical Data Format
The Downward Closure Property and Scalable Mining Methods
The downward closure property of frequent patterns
Any subset of a frequent itemset must be frequent
If {beer, diaper, nuts} is frequent, so is {beer, diaper}
i.e., every transaction having {beer, diaper, nuts} also contains {beer, diaper}
Scalable mining methods: three major approaches
Apriori (Agrawal & Srikant @VLDB’94)
Frequent pattern growth (FPgrowth, Han, Pei & Yin @SIGMOD’00)
Vertical data format approach (Charm—Zaki & Hsiao @SDM’02)
Apriori: A Candidate Generation & Test Approach
Apriori pruning principle: if there is any itemset that is infrequent, its superset should not be generated/tested!
Method: scan DB once to get frequent 1-itemsets; generate length-(k+1) candidates from length-k frequent itemsets; test the candidates against the DB; terminate when no frequent or candidate set can be generated
Example, 3rd scan: C3 = {{B, C, E}}; after support counting, L3 = {{B, C, E}: 2}
The Apriori Algorithm (Pseudo-Code)
Ck: candidate itemsets of size k
Lk: frequent itemsets of size k

L1 = {frequent items};
for (k = 1; Lk ≠ ∅; k++) do begin
    Ck+1 = candidates generated from Lk;
    for each transaction t in database do
        increment the count of all candidates in Ck+1 that are contained in t;
    Lk+1 = candidates in Ck+1 with min_support;
end
return ∪k Lk;
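As a concrete companion to the pseudo-code, here is a hedged Python sketch. The names apriori and min_count are illustrative; itemsets are frozensets, and candidates are generated by unioning pairs of frequent k-itemsets before pruning by downward closure.

from collections import defaultdict
from itertools import combinations

def apriori(transactions, min_count):
    transactions = [frozenset(t) for t in transactions]
    # L1: frequent 1-itemsets
    counts = defaultdict(int)
    for t in transactions:
        for item in t:
            counts[frozenset([item])] += 1
    Lk = {s for s, c in counts.items() if c >= min_count}
    all_frequent = set(Lk)
    k = 1
    while Lk:
        # Join: union pairs of frequent k-itemsets differing in one item
        Ck = {a | b for a in Lk for b in Lk if len(a | b) == k + 1}
        # Prune: every k-subset of a candidate must be frequent
        Ck = {c for c in Ck
              if all(frozenset(s) in Lk for s in combinations(c, k))}
        # Count supports in one pass over the database
        counts = defaultdict(int)
        for t in transactions:
            for c in Ck:
                if c <= t:
                    counts[c] += 1
        Lk = {c for c, n in counts.items() if n >= min_count}
        all_frequent |= Lk
        k += 1
    return all_frequent

db = [{"Beer", "Nuts", "Diaper"}, {"Beer", "Coffee", "Diaper"},
      {"Beer", "Diaper", "Eggs"}, {"Nuts", "Eggs", "Milk"},
      {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"}]
print(sorted(map(sorted, apriori(db, min_count=3))))
# [['Beer'], ['Beer', 'Diaper'], ['Diaper'], ['Eggs'], ['Nuts']]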
Implementation of Apriori
How to generate candidates?
Step 1: self-joining Lk
Step 2: pruning
Example of Candidate-generation
L3={abc, abd, acd, ace, bcd}
Self-joining: L3*L3
abcd from abc and abd
acde from acd and ace
Pruning:
acde is removed because ade is not in L3
C4 = {abcd}
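A quick sketch of the two steps on this example; itemsets are kept as sorted tuples, and the variable names are illustrative.

from itertools import combinations

L3 = {("a","b","c"), ("a","b","d"), ("a","c","d"), ("a","c","e"), ("b","c","d")}

# Step 1: self-join -- merge two 3-itemsets sharing their first two items
joined = {p + (q[-1],)
          for p in L3 for q in L3
          if p[:-1] == q[:-1] and p[-1] < q[-1]}
# joined == {('a','b','c','d'), ('a','c','d','e')}

# Step 2: prune -- drop candidates with an infrequent 3-subset
C4 = {c for c in joined if all(s in L3 for s in combinations(c, 3))}
print(C4)  # {('a','b','c','d')}; acde is removed because ade is not in L3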
Candidate Generation: An SQL Implementation
SQL implementation of candidate generation
Suppose the items in Lk-1 are listed in an order
Step 1: self-joining Lk-1

insert into Ck
select p.item1, p.item2, …, p.itemk-1, q.itemk-1
from Lk-1 p, Lk-1 q
where p.item1 = q.item1 and … and p.itemk-2 = q.itemk-2 and p.itemk-1 < q.itemk-1

Step 2: pruning

forall itemsets c in Ck do
    forall (k-1)-subsets s of c do
        if (s is not in Lk-1) then delete c from Ck

Use object-relational extensions like UDFs, BLOBs, and table functions for efficient implementation [S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining with relational database systems: Alternatives and implications. SIGMOD'98]
Further Improvement of the Apriori Method
Partition: Scan Database Only Twice
Any itemset that is potentially frequent in DB must be
frequent in at least one of the partitions of DB
Scan 1: partition database and find local frequent
patterns
Scan 2: consolidate global frequent patterns
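Correctness rests on the property above: a globally frequent itemset must be locally frequent in some partition, so scan 1 cannot miss a true pattern. A hedged sketch follows; partition_mine and local_frequent are illustrative names, and local mining is done by brute force, which is only sensible for tiny partitions.

from collections import defaultdict
from itertools import combinations

def local_frequent(part, min_frac):
    # Brute-force local mining: every itemset meeting the *relative*
    # threshold within this partition
    items = sorted(set().union(*part))
    out = set()
    for k in range(1, len(items) + 1):
        for c in combinations(items, k):
            if sum(set(c) <= t for t in part) >= min_frac * len(part):
                out.add(frozenset(c))
    return out

def partition_mine(db, min_frac, n_parts=2):
    size = (len(db) + n_parts - 1) // n_parts
    parts = [db[i:i + size] for i in range(0, len(db), size)]
    # Scan 1: locally frequent itemsets form a complete global candidate set
    candidates = set().union(*(local_frequent(p, min_frac) for p in parts))
    # Scan 2: count every candidate once over the full database
    counts = defaultdict(int)
    for t in db:
        for c in candidates:
            if c <= t:
                counts[c] += 1
    return {c for c in candidates if counts[c] >= min_frac * len(db)}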
DIC: Reduce Number of Scans
Once both A and D are determined frequent, the counting of AD begins
Once all length-2 subsets of BCD are determined frequent, the counting of BCD begins

[Figure: itemset lattice from {} up to ABCD alongside the transaction stream; Apriori starts counting 2-itemsets, 3-itemsets, … only at full-pass boundaries, while DIC starts counting an itemset as soon as all its subsets are known to be frequent]

S. Brin, R. Motwani, J. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket data. SIGMOD’97
Pattern-Growth Approach: Mining Frequent Patterns
Without Candidate Generation
Bottlenecks of the Apriori approach
Breadth-first (i.e., level-wise) search
Candidate generation and test
Often generates a huge number of candidates
The FPGrowth Approach (J. Han, J. Pei, and Y. Yin, SIGMOD’00)
Depth-first search
Avoid explicit candidate generation
Major philosophy: Grow long patterns from short ones using local
frequent items only
“abc” is a frequent pattern
Get all transactions having “abc”, i.e., project DB on abc: DB|abc
“d” is a local frequent item in DB|abc ⇒ abcd is a frequent pattern
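A hedged sketch of this philosophy without the FP-tree machinery: recursively project the database and grow patterns from local frequent items. It assumes each transaction is already sorted in a fixed global item order with no duplicates; pattern_growth is an illustrative name, not the book's exact algorithm.

from collections import Counter

def pattern_growth(db, min_count, prefix=()):
    # db: list of item tuples, each sorted in a fixed global order
    counts = Counter(item for t in db for item in t)
    for item, n in sorted(counts.items()):
        if n < min_count:
            continue
        pattern = prefix + (item,)
        yield pattern, n
        # DB|pattern: the suffix of each transaction after `item`
        projected = [t[t.index(item) + 1:] for t in db if item in t]
        yield from pattern_growth(projected, min_count, pattern)

db = [tuple("fcamp"), tuple("fcabm"), tuple("fb"),
      tuple("cbp"), tuple("fcamp")]
for pat, n in pattern_growth(db, min_count=3):
    print("".join(pat), n)   # e.g. "fcam 3" appears among the output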
Construct FP-tree from a Transaction Database
Steps: (1) scan DB once, find frequent 1-itemsets; (2) sort frequent items in descending support order into the f-list; (3) scan DB again, inserting each sorted, frequent-item-filtered transaction into the FP-tree
Frequent patterns can then be partitioned into subsets according to the f-list: patterns containing p; patterns having m but no p; …; pattern f (completeness and non-redundancy)
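A minimal Python sketch of these three construction steps, assuming the classic five-transaction example that produces the header table on the next slide; the Node class and build_fptree name are illustrative.

from collections import Counter

class Node:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}  # item -> child Node

def build_fptree(db, min_count):
    # Step 1: scan once to count items
    freq = Counter(i for t in db for i in t)
    # Step 2: f-list = frequent items in support-descending order
    flist = [i for i, c in freq.most_common() if c >= min_count]
    rank = {i: r for r, i in enumerate(flist)}
    root, header = Node(None, None), {i: [] for i in flist}
    # Step 3: scan again; insert each filtered, f-list-sorted transaction
    for t in db:
        node = root
        for item in sorted((i for i in t if i in rank), key=rank.get):
            if item not in node.children:
                node.children[item] = Node(item, node)
                header[item].append(node.children[item])  # node-link
            node = node.children[item]
            node.count += 1
    return root, header, flist

db = [set("facdgimp"), set("abcflmo"), set("bfhjo"),
      set("bcksp"), set("afcelpmn")]
root, header, flist = build_fptree(db, min_count=3)
print(flist)  # f and c (support 4) first, then a, b, m, p (support 3)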
Find Patterns Having P From P-conditional Database
Header table (item: frequency): f: 4, c: 4, a: 3, b: 3, m: 3, p: 3

FP-tree (indentation shows parent-child links):
{}
  f:4
    c:3
      a:3
        m:2
          p:2
        b:1
          m:1
    b:1
  c:1
    b:1
      p:1

Conditional pattern bases (item: cond. pattern base):
c: f:3
a: fc:3
b: fca:1, f:1, c:1
m: fca:2, fcab:1
p: fcam:2, cb:1
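The conditional pattern bases above can also be read directly off the sorted, filtered transactions: each is the multiset of f-list prefixes preceding the item. A hedged sketch reproducing the table, with db being the same example DB reduced to its f-list items:

from collections import Counter

# Transactions already reduced to f-list items and sorted in f-list order
db = ["fcamp", "fcabm", "fb", "cbp", "fcamp"]

def conditional_pattern_base(item):
    base = Counter()
    for t in db:
        if item in t:
            prefix = t[:t.index(item)]  # items preceding `item` in f-list order
            if prefix:
                base[prefix] += 1
    return base

print(conditional_pattern_base("m"))  # Counter({'fca': 2, 'fcab': 1})
print(conditional_pattern_base("p"))  # Counter({'fcam': 2, 'cb': 1})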
From Conditional Pattern-bases to Conditional FP-trees
m-conditional pattern base: fca:2, fcab:1
m-conditional FP-tree: {} → f:3 → c:3 → a:3

Cond. pattern base of “am”: (fc:3); am-conditional FP-tree: {} → f:3 → c:3
Cond. pattern base of “cm”: (f:3); cm-conditional FP-tree: {} → f:3
Cond. pattern base of “cam”: (f:3); cam-conditional FP-tree: {} → f:3
A Special Case: Single Prefix Path in FP-tree
Suppose a (conditional) FP-tree T has a shared single prefix path P
Mining can be decomposed into two parts: mine the single prefix path (reduced to one node), mine the remaining multipath part, and concatenate the results

[Figure: tree r1 with single prefix path a1:n1 → a2:n2 → a3:n3 followed by a multipath part with branches b1:m1, C2:k2, C3:k3]
Benefits of the FP-tree Structure
Completeness
Preserve complete information for frequent pattern
mining
Never break a long pattern of any transaction
Compactness
Reduce irrelevant info—infrequent items are gone
Items in frequency descending order: the more
frequently occurring, the more likely to be shared
Never be larger than the original database (not counting node-links and count fields)
The Frequent Pattern Growth Mining Method
Idea: Frequent pattern growth
Recursively grow frequent patterns by pattern and
database partition
Method
For each frequent item, construct its conditional pattern base, and then its conditional FP-tree
Repeat the process on each newly created conditional FP-tree
Until the resulting FP-tree is empty, or it contains only one path (a single path generates all the combinations of its sub-paths, each of which is a frequent pattern)
Scaling FP-growth by Database Projection
What if the FP-tree cannot fit in memory?
DB projection
First partition a database into a set of projected DBs
Then construct and mine FP-tree for each projected DB
Parallel projection vs. partition projection techniques
Parallel projection
Project the DB in parallel for each frequent item
Parallel projection is space costly
All the partitions can be processed in parallel
Partition projection
Partition the DB based on the ordered frequent items
Passing the unprocessed parts to the subsequent partitions
Partition-Based Projection
[Figure: the transaction DB is projected into per-item projected DBs (p-proj DB, m-proj DB, …); m-proj DB is projected further into am-proj DB, cm-proj DB, …]
FP-Growth vs. Apriori: Scalability With the Support
Threshold
[Figure: run time (sec.) vs. support threshold (%, 0–3), comparing FP-growth and Apriori]
FP-Growth vs. Tree-Projection: Scalability with the
Support Threshold
[Figure: runtime (sec.) vs. support threshold (%, 0–2), comparing FP-growth and TreeProjection]
Advantages of the Pattern Growth Approach
Divide-and-conquer:
Decompose both the mining task and DB according to the
frequent patterns obtained so far
Lead to focused search of smaller databases
Other factors
No candidate generation, no candidate test
Compressed database: FP-tree structure
No repeated scan of entire database
Basic ops: counting local frequent items and building sub-FP-trees; no pattern search and matching
A good open-source implementation and refinement of FPGrowth: FPGrowth+ (Grahne and J. Zhu, FIMI'03)
Further Improvements of Mining Methods
Extension of Pattern Growth Mining Methodology
Pattern-growth-based clustering: MaPle (Pei, et al., ICDM'03)
Pattern-growth-based classification: mining frequent and discriminative patterns (Cheng, et al., ICDE'07)
ECLAT: Mining by Exploring Vertical Data Format
Vertical format: t(AB) = {T11, T25, …}
tid-list: list of trans.-ids containing an itemset
Deriving frequent patterns based on vertical intersections
t(X) = t(Y): X and Y always happen together
t(X) ⊂ t(Y): any transaction having X also has Y
Using diffset to accelerate mining
Only keep track of differences of tids
t(X) = {T1, T2, T3}, t(XY) = {T1, T3}
Diffset (XY, X) = {T2}
Eclat (Zaki et al. @KDD’97)
Mining Closed patterns using vertical format: CHARM (Zaki &
Hsiao@SDM’02)
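A short sketch of the vertical representation, tid-list intersection, and the diffset idea; the toy database and tids below are illustrative, not from the source papers.

# Toy horizontal DB (tids are illustrative)
transactions = {1: {"A", "B", "E"}, 2: {"B", "D"}, 3: {"B", "C"},
                4: {"A", "B", "D"}, 5: {"A", "C"}}

# Vertical format: item -> tid-list
t = {}
for tid, items in transactions.items():
    for i in items:
        t.setdefault(i, set()).add(tid)

t_AB = t["A"] & t["B"]             # support({A,B}) = |t(A) ∩ t(B)|
print(t_AB, len(t_AB))             # {1, 4} 2

# Diffset: record only the tids that drop out when extending A to AB
diffset = t["A"] - t["B"]          # Diffset(AB, A) = {5}
print(len(t["A"]) - len(diffset))  # same support, smaller structure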
Mining Frequent Closed Patterns: CLOSET
Flist: list of all frequent items in support-ascending order
Flist: d-a-f-e-c, Min_sup = 2

TID    Items
10     a, c, d, e, f
20     a, b, e
30     c, e, f
40     a, c, d, f
50     c, e, f

Divide search space
Patterns having d
Patterns having d but no a, etc.
Find frequent closed patterns recursively
Every transaction having d also has cfa ⇒ cfad is a frequent closed pattern
J. Pei, J. Han & R. Mao. CLOSET: An Efficient Algorithm for Mining Frequent Closed Itemsets. DMKD'00.
CLOSET+: Mining Closed Itemsets by Pattern-Growth
Visualization of Association Rules: Rule Graph
Visualization of Association Rules
(SGI/MineSet 3.0)
Computational Complexity of Frequent Itemset Mining
Chapter 6: Mining Frequent Patterns, Association and Correlations: Basic Concepts and Methods
Basic Concepts
Frequent Itemset Mining Methods
Evaluation Methods
Summary
Interestingness Measure: Correlations (Lift)
play basketball ⇒ eat cereal [40%, 66.7%] is misleading
The overall % of students eating cereal is 75%, which is higher than 66.7%
play basketball ⇒ not eat cereal [20%, 33.3%] is more accurate, although with lower support and confidence
Measure of dependent/correlated events: lift
lift(A, B) = P(A ∪ B) / (P(A) P(B)), where P(A ∪ B) is the probability that a transaction contains both A and B; lift > 1: positively correlated, lift < 1: negatively correlated, lift = 1: independent
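A small sketch computing lift from the percentages quoted above (the 75% cereal rate gives P(not cereal) = 25%); the function name lift is illustrative.

def lift(p_xy, p_x, p_y):
    # p_xy: probability a transaction contains both X and Y
    return p_xy / (p_x * p_y)

# basketball => cereal: s = 40%, P(basketball) = 60%, P(cereal) = 75%
print(lift(0.40, 0.60, 0.75))  # ~0.89, < 1: negatively correlated
# basketball => not cereal: s = 20%, P(not cereal) = 25%
print(lift(0.20, 0.60, 0.25))  # ~1.33, > 1: positively correlated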
Are lift and χ² Good Measures of Correlation?
Null-Invariant Measures
Comparison of Interestingness Measures
Null-(transaction) invariance is crucial for correlation analysis
Lift and χ² are not null-invariant
5 null-invariant measures
Basic Concepts
Frequent Itemset Mining Methods
Evaluation Methods
Summary
Summary
Ref: Basic Concepts of Frequent Pattern Mining
R. Agrawal, T. Imielinski, and A. Swami. Mining association rules between sets of items in large databases. SIGMOD'93. [AIS93]
N. Pasquier, Y. Bastide, R. Taouil, and L. Lakhal. Discovering frequent closed itemsets for association rules. ICDT'99.
R. J. Bayardo. Efficiently mining long patterns from databases. SIGMOD'98.
Ref: Apriori and Its Improvements
R. Agrawal and R. Srikant. Fast algorithms for mining association rules.
VLDB'94.
H. Mannila, H. Toivonen, and A. I. Verkamo. Efficient algorithms for
discovering association rules. KDD'94.
A. Savasere, E. Omiecinski, and S. Navathe. An efficient algorithm for
mining association rules in large databases. VLDB'95.
J. S. Park, M. S. Chen, and P. S. Yu. An effective hash-based algorithm for
mining association rules. SIGMOD'95.
H. Toivonen. Sampling large databases for association rules. VLDB'96.
S. Brin, R. Motwani, J. D. Ullman, and S. Tsur. Dynamic itemset counting
and implication rules for market basket analysis. SIGMOD'97.
S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule
mining with relational database systems: Alternatives and implications.
SIGMOD'98.
Ref: Depth-First, Projection-Based FP Mining
R. Agarwal, C. Aggarwal, and V. V. V. Prasad. A tree projection algorithm for
generation of frequent itemsets. J. Parallel and Distributed Computing:02.
J. Han, J. Pei, and Y. Yin. Mining frequent patterns without candidate generation. SIGMOD'00.
J. Liu, Y. Pan, K. Wang, and J. Han. Mining Frequent Item Sets by Opportunistic
Projection. KDD'02.
J. Han, J. Wang, Y. Lu, and P. Tzvetkov. Mining Top-K Frequent Closed Patterns
without Minimum Support. ICDM'02.
J. Wang, J. Han, and J. Pei. CLOSET+: Searching for the Best Strategies for
Mining Frequent Closed Itemsets. KDD'03.
G. Liu, H. Lu, W. Lou, J. X. Yu. On Computing, Storing and Querying Frequent
Patterns. KDD'03.
G. Grahne and J. Zhu. Efficiently using prefix-trees in mining frequent itemsets. Proc. ICDM'03 Int. Workshop on Frequent Itemset Mining Implementations (FIMI'03), Melbourne, FL, Nov. 2003.
Ref: Vertical Format and Row Enumeration Methods
M. J. Zaki, S. Parthasarathy, M. Ogihara, and W. Li. New algorithms for fast discovery of association rules. KDD'97.
M. J. Zaki and C. J. Hsiao. CHARM: An efficient algorithm for closed itemset mining. SDM'02.