UNIT 4 – Heaps
SUYASH BHARDWAJ
FACULTY OF ENGINEERING AND TECHNOLOGY
GURUKUL KANGRI VISHWAVIDYALAYA, HARIDWAR
Content
• Mergeable Heaps: mergeable heap operations
• Binomial trees; implementing binomial heaps
and their operations
• 2-3-4 trees
• Structure and potential function of the
Fibonacci heap; implementing Fibonacci heaps
Definition
• In computer science, a mergeable heap is an
abstract data type, which is a heap
supporting a merge operation.
Summary of Heap ADT Analysis
• Consider a heap of N nodes
• Space needed: O(N)
– Actually, O(MaxSize) where MaxSize is the size of the array
– Pointer-based implementation: pointers for children and
parent
• Total space = 3N + 1 (3 pointers per node + 1 for size)
• FindMin: O(1) time; DeleteMin and Insert: O(log N) time
• BuildHeap from N inputs: What is the run time?
– N Insert operations = O(N log N)
– O(N): treat the input array as a heap and fix it
bottom-up using percolate down (Floyd’s method);
each percolate down costs O(log N), but the total is O(N)
Operation
A mergeable heap supports the following operations:
• Make-Heap(), creating an empty heap.
• Insert(H,x), inserting an element x into the heap H.
• Min(H), returning the minimum element, or Nil if no
such element exists.
• Extract-Min(H), extracting and returning the minimum
element, or Nil if no such element exists.
• Merge(H1,H2), combining the elements of H1 and H2.
General Implementation
• It is straightforward to implement a mergeable heap
given a simple heap:
Merge(H1,H2):
1. x ← Extract-Min(H2)
2. while x ≠ Nil
1. Insert(H1, x)
2. x ← Extract-Min(H2)
• This can however be wasteful as each Extract-Min(H)
and Insert(H,x) typically have to maintain the heap
property.
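As a concrete sketch of the generic Merge above, here is a hedged Python version that uses the standard library's heapq module as the "simple heap"; the helper names are illustrative:

```python
import heapq

def extract_min(h):
    """Remove and return the minimum element, or None if h is empty."""
    return heapq.heappop(h) if h else None

def merge(h1, h2):
    """Drain h2 into h1 one element at a time (the wasteful approach)."""
    x = extract_min(h2)
    while x is not None:
        heapq.heappush(h1, x)   # each push/pop maintains the heap property
        x = extract_min(h2)
    return h1

h1, h2 = [2, 5, 9], [1, 4, 7]
heapq.heapify(h1)
heapq.heapify(h2)
merged = merge(h1, h2)
print(sorted(merged))  # [1, 2, 4, 5, 7, 9]
```

Each of the n2 elements pays O(log n) for its pop and its push, which is exactly the wasted work the slide warns about.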
More efficient implementations
• Binary heaps
• Binomial heaps
• Fibonacci heaps
• Pairing heaps
Binary Heap Properties
1. Structure Property
2. Ordering Property
Complete Binary Tree
A Perfect binary tree – A binary tree with all
leaf nodes at the same depth. All internal
nodes have 2 children.
A perfect tree of height h has 2^(h+1) – 1 nodes:
2^h – 1 non-leaves and 2^h leaves.
(Figure: a perfect tree of height 3 – root 11; then 5, 21; then 2, 9, 16, 25; then 1, 3, 7, 10, 13, 19, 22, 30.)
Heap Structure Property
• A binary heap is a complete binary tree.
Complete binary tree – binary tree that is
completely filled, with the possible exception of
the bottom level, which is filled left to right.
Representing Complete
Binary Trees in an Array
(Figure: nodes A–L stored level by level at array indices 1–12.)
From node i (array stored 1-based):
left child: 2i
right child: 2i + 1
parent: ⌊i / 2⌋
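The three index formulas translate directly into code – a minimal sketch assuming 1-based storage as in the figure:

```python
# Index arithmetic for a complete binary tree stored in a 1-based array.
def left(i):
    return 2 * i

def right(i):
    return 2 * i + 1

def parent(i):
    return i // 2       # integer division gives the floor

# In the figure, node E sits at index 5, so its parent B is at index 2.
print(left(2), right(2), parent(5))  # 4 5 2
```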
Heap Order Property
Heap order property: For every non-root
node X, the value in the parent of X is less
than (or equal to) the value in X.
(Figure: two example trees – 10 / 20, 80 / 40, 60, 85, 99 / 50, 700 satisfies heap order; 10 / 20, 80 / 30, 15 does not, since 15 is smaller than its parent 80, so it is not a heap.)
Heap Operations
• findMin: return the root value – O(1).
• insert(val): percolate up.
• deleteMin: percolate down.
(Figure: example heap – 10 / 20, 80 / 40, 60, 85, 99 / 50, 700, 65.)
Heap – Insert(val)
Basic Idea:
1. Put val at “next” leaf position
2. Percolate up by repeatedly exchanging node
until no longer needed
Insert: percolate up
(Figure: inserting 15 into 10 / 20, 80 / 40, 60, 85, 99 / 50, 700, 65 – 15 goes into the next leaf slot, then percolates up past 60 and 20, giving 10 / 15, 80 / 40, 20, 85, 99 / 50, 700, 65, 60.)
Heap – DeleteMin
Basic Idea:
1. Remove root (that is always the min!)
2. Put “last” leaf node at root
3. Find smallest child of node
4. Swap node with its smallest child if needed.
5. Repeat steps 3 & 4 until no swaps needed.
DeleteMin: percolate down
(Figure: after the root 10 is removed, the last leaf 65 is placed at the root: 65 / 20, 15 / 40, 60, 85, 99 / 50, 700.)
DeleteMin: percolate down
(Figure: 65 swaps with its smaller child 15; its new children 85 and 99 are larger, so percolation stops: 15 / 20, 65 / 40, 60, 85, 99 / 50, 700.)
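The percolate-up insert and percolate-down deleteMin sketched above can be written together as a small Python class (an illustrative implementation on a 1-based array, not the lecture's exact code):

```python
# Binary min-heap on a 1-based array, as in the slides (index 0 unused).
class BinaryHeap:
    def __init__(self):
        self.a = [None]          # slot 0 unused; the root lives at index 1

    def insert(self, val):
        a = self.a
        a.append(val)            # put val at the "next" leaf position
        i = len(a) - 1
        while i > 1 and a[i // 2] > a[i]:   # percolate up
            a[i // 2], a[i] = a[i], a[i // 2]
            i //= 2

    def delete_min(self):
        a = self.a
        m = a[1]                 # root is always the min
        a[1] = a[-1]             # put the "last" leaf at the root
        a.pop()
        i, n = 1, len(a) - 1
        while 2 * i <= n:        # percolate down
            c = 2 * i
            if c + 1 <= n and a[c + 1] < a[c]:
                c += 1           # c is the smaller child
            if a[i] <= a[c]:
                break
            a[i], a[c] = a[c], a[i]
            i = c
        return m

h = BinaryHeap()
for v in [20, 10, 80, 40, 60]:
    h.insert(v)
mins = [h.delete_min() for _ in range(5)]
print(mins)  # [10, 20, 40, 60, 80]
```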
Building a Heap
• Adding the items one at a time is O(n log n) in
the worst case
Working on Heaps
• What are the two properties of a heap?
– Structure Property
– Order Property
BuildHeap: Floyd’s Method
Input array: 12 5 11 3 10 6 9 4 8 1 7 2
(Figure: the array viewed as a complete binary tree – 12 / 5, 11 / 3, 10, 6, 9 / 4, 8, 1, 7, 2.)
BuildHeap pseudocode
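The pseudocode on this slide did not survive extraction; the following Python sketch of Floyd's method is one plausible reconstruction – percolate down every non-leaf, from the last internal node back to the root (0-based array, illustrative names):

```python
# Floyd's BuildHeap: fix the array bottom-up with percolate down.
def percolate_down(a, i, n):
    while 2 * i + 1 < n:
        c = 2 * i + 1                      # left child (0-based)
        if c + 1 < n and a[c + 1] < a[c]:  # pick the smaller child
            c += 1
        if a[c] >= a[i]:
            break                          # heap order restored
        a[i], a[c] = a[c], a[i]
        i = c

def build_heap(a):
    # non-leaves are indices 0 .. len(a)//2 - 1; fix them right to left
    for i in range(len(a) // 2 - 1, -1, -1):
        percolate_down(a, i, len(a))

a = [12, 5, 11, 3, 10, 6, 9, 4, 8, 1, 7, 2]
build_heap(a)
print(a[0])  # 1 – the minimum ends up at the root
```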
BuildHeap: Floyd’s Method
(Figures: percolate down is applied to each non-leaf in turn, right to left – 6 swaps with 2; 10 percolates below 1; 11 percolates below 2; 5 percolates below 1; finally the root 12 percolates down to a leaf.)
Finally…
(Figure: the finished heap – 1 / 3, 2 / 4, 5, 6, 9 / 12, 8, 10, 7, 11.)
Facts about Heaps
Observations:
• Finding a child/parent index is a multiply/divide by two
• Operations jump widely through the heap
• Each percolate step looks at only two new nodes
• Inserts are at least as common as deleteMins
Realities:
• Division/multiplication by powers of two are equally fast
• Looking at only two new pieces of data: bad for cache!
• With huge data sets, disk accesses dominate
Operation: Merge
Given two heaps, merge them into one heap
– first attempt: insert each element of the smaller
heap into the larger.
Merge two heaps (basic idea)
• Put the smaller root as the new root,
• Hang its left subtree on the left.
• Recursively merge its right subtree and the
other tree.
Leftist Heaps
Idea:
Focus all heap maintenance work in one
small part of the heap
Leftist heaps:
1. Most nodes are on the left
2. All the merging work is done on the right
Leftist Heap Properties
• Heap-order property
– parent’s priority value is ≤ children’s priority values
– result: minimum element is at the root
• Leftist property
– For every node x, npl(left(x)) ≥ npl(right(x))
(npl(x) = null path length: the length of the shortest
path from x to a null child; npl(NIL) = –1)
– result: tree is at least as “heavy” on the left as on the right
Are These Leftist?
(Figure: three trees annotated with npl values; check npl(left(x)) ≥ npl(right(x)) at every node to decide each case.)
Every subtree of a leftist tree is leftist!
Merging Two Leftist Heaps
• merge(T1,T2) returns one leftist heap containing all
elements of the two (distinct) leftist heaps T1 and T2
(Figure: merge(T1, T2) where T1 has root a with subtrees L1 and R1, T2 has root b, and a < b – a stays the root, L1 stays its left subtree, and R1 is recursively merged with T2.)
Merge Continued
Let R’ = Merge(R1, T2) become a’s right subtree.
If npl(R’) > npl(L1), swap the children so R’ moves to the left.
runtime: O(log n) – only the rightmost paths of the two heaps are traversed.
Operations on Leftist Heaps
• merge with two trees of total size n: O(log n)
• insert with heap size n: O(log n)
– pretend node is a size 1 leftist heap
– insert by merging original heap with one node heap
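Leftist merge and the merge-based insert/deleteMin can be sketched in Python as follows; the Node type and npl bookkeeping are illustrative:

```python
# Leftist heap: all merging work happens on the right path.
class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.npl = key, None, None, 0

def npl(n):
    return n.npl if n else -1      # null path length of an empty tree

def merge(a, b):
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:
        a, b = b, a                # the smaller root wins
    a.right = merge(a.right, b)    # recurse down the right path
    if npl(a.left) < npl(a.right):
        a.left, a.right = a.right, a.left   # restore leftist property
    a.npl = npl(a.right) + 1
    return a

def insert(h, key):
    return merge(h, Node(key))     # a node is a size-1 leftist heap

def delete_min(h):
    return h.key, merge(h.left, h.right)

h = None
for k in [5, 1, 9, 3]:
    h = insert(h, k)
out = []
while h:
    k, h = delete_min(h)
    out.append(k)
print(out)  # [1, 3, 5, 9]
```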
Leftist Merge Example
(Figures: a worked merge of two leftist heaps – the recursive merges walk down the two right paths; unwinding, children are swapped wherever npl(left) < npl(right), “sewing up” the result into a leftist heap with root 1.)
Skew Heaps
Problems with leftist heaps
– extra storage for npl
– extra complexity/logic to maintain and check npl
– right side is “often” heavy and requires a switch
Solution: skew heaps
– “blindly” adjusting version of leftist heaps
– merge always switches children when fixing right path
– amortized time for: merge, insert, deleteMin = O(log n)
– however, worst case time for all three = O(n)
Merging Two Skew Heaps
(Figure: merge(T1, T2) with root keys a < b – a stays the root, R1 is merged with T2, and a’s children are swapped, so the merge result lands on the left.)
Runtime Analysis:
Worst-case and Amortized
• No worst case guarantee on right path length!
• All operations rely on merge
Comparing Heaps
• Binary Heaps • Leftist Heaps
Yet Another Data Structure:
Binomial Queues
• Structural property
– Forest of binomial trees with at most
one tree of any height
What’s a forest?
• Order property
– Each binomial tree has the heap-order property
The Binomial Tree, Bh
• Bh has height h and exactly 2^h nodes
• Bh is formed by making Bh-1 a child of another Bh-1
• Root has exactly h children
• Number of nodes at depth d is the binomial coefficient C(h, d)
– Hence the name; we will not use this last property
(Figure: B0, B1, B2, B3.)
Binomial Queue with n elements
Binomial Q with n elements has a unique structural
representation in terms of binomial trees!
(Example: n = 13 = 1101 in binary gives 1 B3, 1 B2, no B1, 1 B0.)
Properties of Binomial Queue
• At most one binomial tree of any height
Operations on Binomial Queue
• Will again define merge as the base operation
– insert, deleteMin, buildBinomialQ will use merge
Merging Two Binomial Queues
Essentially like adding two binary numbers!
(Figure: two binomial queues drawn side by side – merging them adds trees of equal height like binary digits, carrying a linked tree to the next height.)
Example: Binomial Queue Merge
(Figures: H1 and H2 are merged height by height, starting at height 0 – equal-height trees are linked, with the smaller root on top, and the linked tree is carried to the next height, exactly like binary addition with carries.)
Complexity of Merge
Constant time for each height
Max number of heights is: log n
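The binary-addition view of the merge can be sketched in Python; here a tree is a (key, children) pair and a queue maps height to tree – the representation is illustrative, not the lecture's:

```python
# Binomial queue sketch: merge works like binary addition with carries.
def link(t1, t2):
    """Link two trees of equal height; the smaller root becomes the parent."""
    if t2[0] < t1[0]:
        t1, t2 = t2, t1
    t1[1].append(t2)
    return t1

def merge(q1, q2):
    """Merge two queues digit by digit, carrying linked trees upward."""
    result, carry = {}, None
    for h in range(max(list(q1) + list(q2) + [-1]) + 2):
        trees = [t for t in (q1.get(h), q2.get(h), carry) if t]
        carry = None
        if len(trees) == 1:
            result[h] = trees[0]
        elif len(trees) >= 2:
            carry = link(trees[0], trees[1])   # carry to height h + 1
            if len(trees) == 3:
                result[h] = trees[2]
    return result

def insert(q, key):
    return merge(q, {0: (key, [])})            # one-node queue, height 0

q = {}
for k in [5, 3, 8, 1]:
    q = insert(q, k)
# 4 elements = binary 100, so the queue holds exactly one B2
print(sorted(q), min(t[0] for t in q.values()))  # [2] 1
```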
Insert in a Binomial Queue
Insert(x): Similar to leftist or skew heap:
– treat x as a one-node binomial queue
– merge it with the existing queue
runtime: worst case same as merge – O(log n)
deleteMin in Binomial Queue
Similar to leftist and skew heaps: remove the minimum
root, and merge the queue formed by its children back
into the remaining queue.
(Figure: the minimum root 3 is removed; its children form a new binomial queue, which is merged with the remaining tree (7), giving the result shown.)
runtime: O(log n)
Binomial Heaps
DATA STRUCTURES: MERGEABLE HEAPS
• MAKE-HEAP ( )
– Creates & returns a new heap with no elements.
• INSERT (H,x)
– Inserts a node x whose key field has already been
filled into heap H.
• MINIMUM (H)
– Returns a pointer to the node in heap H whose key
is minimum.
CS 473 Lecture X 64
Mergeable Heaps
• EXTRACT-MIN (H)
– Deletes the node from heap H whose key is
minimum. Returns a pointer to the node.
• DECREASE-KEY (H, x, k)
– Assigns to node x within heap H the new value k
where k is smaller than its current key value.
Mergeable Heaps
• DELETE (H, x)
– Deletes node x from heap H.
Binomial Trees
• A binomial heap is a collection of binomial
trees.
• The binomial tree Bk is an ordered tree defined
recursively
B0 consists of a single node.
Bk consists of two binomial trees Bk-1 linked together:
the root of one is the leftmost child of the root of the
other.
Binomial Trees
(Figures: Bk built from two copies of Bk-1; the examples B0–B4; and the alternative decomposition of Bk as a root whose children are Bk-1, Bk-2, ..., B1, B0.)
Properties of Binomial Trees
LEMMA: For the binomial tree Bk:
1. There are 2^k nodes,
2. The height of the tree is k,
3. There are exactly C(k, i) nodes at depth i for
i = 0, 1, ..., k,
4. The root has degree k, greater than the degree of any
other node; if the children of the root are numbered
from left to right as k-1, k-2, ..., 0, child i is the
root of a subtree Bi.
Properties of Binomial Trees
PROOF: By induction on k.
Each property holds for the basis B0.
INDUCTIVE STEP: assume that the Lemma holds for Bk-1.
1. Bk consists of two copies of Bk-1:
|Bk| = |Bk-1| + |Bk-1| = 2^(k-1) + 2^(k-1) = 2^k
2. h(k-1) = Height(Bk-1) = k-1 by induction, so
h(k) = h(k-1) + 1 = (k-1) + 1 = k
Properties of Binomial Trees
3. Let D(k, i) denote the number of nodes at depth i of Bk.
Since Bk is two copies of Bk-1 (one at the root, one hanging
one level lower), a node at depth i of Bk lies either at
depth i of the top copy or at depth i-1 of the bottom copy.
By induction and Pascal’s rule:
D(k, i) = D(k-1, i-1) + D(k-1, i) = C(k-1, i-1) + C(k-1, i) = C(k, i)
Properties of Binomial Trees (Cont.)
(Figure: property 4 – removing the root of Bk leaves its children, the subtrees Bk-1, Bk-2, ..., B1, B0.)
Properties of Binomial Trees (Cont.)
• COROLLARY: The maximum degree of any
node in an n-node binomial tree is lg(n)
Binomial Heaps
A BINOMIAL HEAP H is a set of BINOMIAL
TREES that satisfies the following “Binomial
Heap Properties”
1. Each binomial tree in H is HEAP-ORDERED
• the key of a node is ≥ the key of the parent
• Root of each binomial tree in H contains the
smallest key in that tree.
Binomial Heaps
2. There is at most one binomial tree in H whose
root has a given degree.
– An n-node binomial heap H consists of at most
⌊lg n⌋ + 1 binomial trees.
– The binary representation of n has ⌊lg n⌋ + 1 bits;
e.g. 13 = <1, 1, 0, 1> in binary, so a 13-node heap
consists of B3, B2, B0.
(Figure: head[H] → B0 (root 10) → B2 (root 1) → B3 (root 6), a heap of 1 + 4 + 8 = 13 nodes.)
Representation of Binomial Heaps
• Each binomial tree within a binomial heap is stored in
the left-child, right-sibling representation
• Each node X contains POINTERS
– p[x] to its parent
– child[x] to its leftmost child
– sibling[x] to its immediately right sibling
• Each node X also contains the field degree[x] which
denotes the number of children of X.
Representation of Binomial Heaps
(Figure: HEAD[H] reaches the root list, a linked list of tree roots; each node stores parent, key, degree, child, and sibling fields.)
Operations on Binomial Heaps
CREATING A NEW BINOMIAL HEAP
MAKE-BINOMIAL-HEAP ( )
 allocate H
 head[H] ← NIL
 return H
end
RUNNING-TIME = Θ(1)
Operations on Binomial Heaps
BINOMIAL-HEAP-MINIMUM (H)
 y ← x ← head[H]
 min ← key[x]
 x ← sibling[x]
 while x ≠ NIL do
  if key[x] < min then
   min ← key[x]
   y ← x
  endif
  x ← sibling[x]
 endwhile
 return y
end
RUNNING-TIME = O(lg n)
Uniting Two Binomial Heaps
BINOMIAL-HEAP-UNION
Procedure repeatedly link binomial trees whose roots
have the same degree
BINOMIAL-LINK
Procedure links the Bk-1 tree rooted at node y to
the Bk-1 tree rooted at node z it makes z the parent of y
Uniting Two Binomial Heaps
BINOMIAL-LINK (y, z)
 p[y] ← z
 sibling[y] ← child[z]
 child[z] ← y
 degree[z] ← degree[z] + 1
end
Uniting Two Binomial Heaps
(Figure: BINOMIAL-LINK makes y the new leftmost child of z – y’s sibling pointer takes over z’s old child list, and degree[z] is incremented.)
Uniting Two Binomial Heaps: Cases
We maintain 3 pointers into the root list:
prev-x, x, and next-x = sibling[x]
Uniting Two Binomial Heaps
• Initially, there are at most two roots of the same
degree
• BINOMIAL-HEAP-MERGE guarantees that if two roots in H
have the same degree, they are adjacent in the root list
• During the execution of union, there may be three
roots of the same degree appearing on the root list at
some time
Uniting Two Binomial Heaps
CASE 1: Occurs when degree [x] ≠ degree [next-x]
(Figure: x roots a Bk and next-x roots a Bl with l > k;
the pointers simply march one position down the root list.)
Uniting Two Binomial Heaps: Cases
CASE 2: Occurs when x is the first of 3 roots of equal degree:
degree [x] = degree [next-x] = degree [sibling[next-x]]
(Figure: three consecutive Bk roots; the pointers march
forward, so the last two roots are handled next as Case 3 or 4.)
Uniting Two Binomial Heaps: Cases
CASE 3 & 4: Occur when x is the first of 2 roots of equal degree
degree [x] = degree [next-x] ≠ degree [sibling [next-x]]
• The root with the smaller key becomes the root of the linked tree
Uniting Two Binomial Heaps: Cases
CASE 3 & 4 CONTINUED
(Figure: roots a, b, c, d, where x (key b) and next-x (key c)
both root Bk trees and sibling[next-x] roots a Bl with l > k.
CASE 3: key[b] ≤ key[c] – next-x is linked under x.
CASE 4: key[c] ≤ key[b] – x is linked under next-x.)
Uniting Two Binomial Heaps: Cases
The running time of the BINOMIAL-HEAP-UNION operation is
O(lg n)
Uniting Two Binomial Heaps: Cases
• So H contains at most
(⌊lg n1⌋ + 1) + (⌊lg n2⌋ + 1) ≤ 2 ⌊lg n⌋ + 2 = O(lg n) roots
immediately after BINOMIAL-HEAP-MERGE
Binomial-Heap-Union Procedure
The BINOMIAL-HEAP-MERGE procedure merges the root lists of
H1 and H2 into a single root list sorted by degree
(like merging two sorted linked lists).
Binomial-Heap-Union Procedure
BINOMIAL-HEAP-UNION (H1, H2)
 H ← MAKE-BINOMIAL-HEAP ( )
 head[H] ← BINOMIAL-HEAP-MERGE (H1, H2)
 free the objects H1 & H2 but not the lists they point to
 prev-x ← NIL
 x ← head[H]
 next-x ← sibling[x]
 while next-x ≠ NIL do
  if degree[x] ≠ degree[next-x] OR
    (sibling[next-x] ≠ NIL and degree[sibling[next-x]] = degree[x]) then
   prev-x ← x ▷ CASE 1 and 2
   x ← next-x ▷ CASE 1 and 2
  elseif key[x] ≤ key[next-x] then
   sibling[x] ← sibling[next-x] ▷ CASE 3
   BINOMIAL-LINK (next-x, x) ▷ CASE 3
  else
   if prev-x = NIL then
    head[H] ← next-x ▷ CASE 4
   else
    sibling[prev-x] ← next-x ▷ CASE 4
   endif
   BINOMIAL-LINK (x, next-x) ▷ CASE 4
   x ← next-x ▷ CASE 4
  endif
  next-x ← sibling[x]
 endwhile
 return H
end
Uniting Two Binomial Heaps vs
Adding Two Binary Numbers
(Figure: a worked union of H1 and H2 – each degree position behaves like a binary digit with a carry tree; Case 2 marches past the first of three equal-degree roots, Cases 3/4 link two equal-degree roots and produce a carry, and Case 1 marches when the degrees differ.)
Inserting a Node
BINOMIAL-HEAP-INSERT (H, x)
 make a one-node binomial heap H’ containing x
 H ← BINOMIAL-HEAP-UNION (H, H’)
end
(Figure: inserting a B0 into a heap with trees B0, B1, B4, B5 – like adding 1 to the binary number 110011, the links cascade, giving B2, B4, B5 = 110100.)
A Direct Implementation that does not Call
Binomial-Heap-Union
- More efficient
- Case 2 never occurs
- The while loop can terminate as soon as
Case 1 is encountered
BINOMIAL-HEAP-EXTRACT-MIN (H)
 (1) find the root x with the minimum key in the
root list of H, and remove x from the root list of H
 (2) H’ ← MAKE-BINOMIAL-HEAP ( )
 (3) reverse the order of the linked list of x’s children,
and set head[H’] ← head of the resulting list
 (4) H ← BINOMIAL-HEAP-UNION (H, H’)
 return x
end
(Figure: the minimum root x is removed from the root list of H; its child list is reversed to become head[H’], and H’ is then united with H.)
Structure of Fibonacci Heaps
(Figure: a Fibonacci heap with root list 23, 7, 3, 17, 24 and min[H] pointing at 3; some non-root nodes are marked.)
Concatenation of Two Circular, Doubly-Linked Lists
(Figure: the root lists of H1 (a, b, c, d, e) and H2 (p, q, r, s) before and after concatenation – the two circles are spliced into one.)
Concatenation of Two Circular, Doubly-Linked Lists
CONCATENATE (H1, H2)
 x ← left[min[H1]]
 y ← left[min[H2]]
 right[x] ← min[H2]
 left[min[H2]] ← x
 right[y] ← min[H1]
 left[min[H1]] ← y
end
Running time is O(1)
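The O(1) splice can be sketched with a tiny circular doubly linked node type (names are illustrative):

```python
# Circular doubly linked list splice, as in CONCATENATE above.
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = self   # a one-element circle

def concatenate(a, b):
    """Splice circle b into circle a; a and b are any nodes of each circle."""
    x, y = a.left, b.left       # the nodes just before a and b
    x.right, b.left = b, x      # a's circle now flows into b
    y.right, a.left = a, y      # b's circle flows back into a
    return a

def to_list(n):
    out, cur = [n.key], n.right
    while cur is not n:
        out.append(cur.key)
        cur = cur.right
    return out

a = concatenate(Node(1), Node(2))
a = concatenate(a, Node(3))
print(to_list(a))  # [1, 2, 3]
```

Only four pointer assignments are touched, which is why the union of two Fibonacci heaps costs O(1).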
Potential Function
• A given fibonacci heap H
– t(H): the number of trees in root list of H
– m(H): the number of marked nodes in H
• The potential of fibonacci heap H is:
Φ(H) = t(H) + 2 m(H)
• A fibonacci heap application begins with an empty
heap:
the initial potential = 0
• The potential is non-negative at all subsequent times.
Maximum Degree
• We will assume that there is a known upper
bound D(n) on the maximum degree of any
node in an n-node heap
• If only mergeable-heap operations are
supported:
D(n) ≤ ⌊lg n⌋
• If DECREASE-KEY & DELETE operations are
supported:
D(n) = O(lg n)
Mergeable Heap Operations
MAKE-HEAP, INSERT, MINIMUM, EXTRACT-MIN,
UNION
If only these operations are to be supported,
each fibonacci-heap is a collection of
unordered binomial trees.
Mergeable Heap Operations
• An unordered binomial tree Uk
– is like a binomial tree
– defined recursively:
• U0 consists of a single node
• Uk consists of two Uk-1’s for which the root of one is
made into any child of the root of the other
Mergeable Heap Operations
• Lemma which gives properties of binomial trees
holds for unordered binomial trees as well but with
the following variation on property 4
• Property 4’: For the unordered binomial tree Uk:
– The root has degree k > the degree of any other node
– The children of the root are the roots of subtrees
U0, U1, ..........,Uk-1 in some order
Mergeable Heap Operations
• The key idea in the mergeable heap operations on
fibonacci heaps is to delay work as long as possible.
• Performance trade-off among implementations of
the various operations:
– If the number of trees is small we can quickly determine
the new min node during EXTRACT-MIN
– However we pay a price for ensuring that the number of
trees is small
Mergeable Heap Operations
• However it can take up to Ω(lg n) time
– to insert a node into a binomial heap
– or to unite two binomial heaps
• We do not consolidate trees in a fibonacci heap
when we insert a new node or unite two heaps
• We delay the consolidation for the EXTRACT-MIN
operation when we really need to find the new
minimum node.
Mergeable Heap Operations
Creating a new fibonacci heap:
MAKE-FIB-HEAP procedure
– allocates and returns the Fibonacci heap object H,
where n[H] = 0 and min[H] = NIL
– There are no trees in the heap
Since t(H) = 0 and m(H) = 0, Φ(H) = 0, and
the amortized cost = O(1) = the actual cost
Mergeable Heap Operations
Inserting a node
FIB-HEAP-INSERT(H, x)
degree[x] ← 0
p[x] ← NIL
child[x] ← NIL
left[x] ← x
right[x] ← x
mark[x] ← FALSE
concatenate the root list containing x with the root list of H
if min[H] = NIL or key[x] < key[min[H]] then
min[H] ← x
endif
n[H] ← n[H] + 1
end
Mergeable Heap Operations
(Figures: inserting node 21 into the example heap – x = 21 is simply spliced into the root list next to min[H] = 3; no consolidation happens.)
Mergeable Heap Operations
t(H’) = t(H) + 1
• Increase in potential:
Φ(H’) - Φ(H) = [t(H) + 1 + 2m(H)] – [t(H)
+ 2m(H)]
=1
The actual cost = O(1)
The amortized cost = O(1) + 1 = O(1)
Mergeable Heap Operations
Finding the minimum node:
Given by pointer min[H]
actual cost = O(1)
amortized cost = actual cost = O(1)
since the potential of H does not change
Uniting Two Fibonacci Heaps
FIB-HEAP-UNION(H1, H2)
H ← MAKE-FIB-HEAP()
if key[min[H1]] ≤ key[min[H2]] then
min[H] ← min[H1]
else
min[H] ← min[H2]
endif
concatenate the root lists of H1 and H2
n[H] ← n[H1] + n[H2]
Free the objects H1 and H2
return H
end
Uniting Two Fibonacci Heaps
• No consolidation of trees
• Actual cost = O(1)
• Change in potential
Φ(H) – (Φ(H1) + Φ(H2))
= (t(H) + 2m(H)) – ((t(H1) + 2m(H1)) + (t(H2) + 2m(H2)))
= 0, since t(H) = t(H1) + t(H2) and m(H) = m(H1) + m(H2)
Therefore amortized cost = actual cost = O(1)
Extracting the Minimum Node
The most complicated operation; this is where the delayed work
of consolidating the trees in the root list occurs
FIB-HEAP-EXTRACT-MIN(H)
 z ← min[H]
 for each child x of z do
  add x to the root list of H
  p[x] ← NIL
 endfor
 remove z from the root list of H
 min[H] ← right[z]
 CONSOLIDATE(H)
 return z
end
Extracting the Minimum Node
• Repeatedly execute the following steps until every root in the
root list has a distinct degree value
(1) Find two roots x and y in the root list with the same degree
where key[x] ≤ key[y]
(2) Link y to x : Remove y from the root list and make y a
child of x
This operation is performed by procedure FIB-HEAP-LINK
Procedure CONSOLIDATE uses an auxiliary pointer array
A[0......D(n)]
A[i] = y : y is currently a root with degree[y] = i
Extracting the Minimum Node
CONSOLIDATE(H)
 for i ← 0 to D(n[H]) do
  A[i] ← NIL
 endfor
 for each node w in the root list of H do
  x ← w
  d ← degree[x]
  while A[d] ≠ NIL do
   y ← A[d]
   if key[x] > key[y] then
    exchange x ↔ y
   endif
   FIB-HEAP-LINK(H, y, x)
   A[d] ← NIL
   d ← d + 1
  endwhile
  A[d] ← x
 endfor
 min[H] ← NIL
 for i ← 0 to D(n[H]) do
  if A[i] ≠ NIL then
   add A[i] to the root list of H
   if min[H] = NIL or key[A[i]] < key[min[H]] then
    min[H] ← A[i]
   endif
  endif
 endfor
end
Extracting the Minimum Node
FIB-HEAP-LINK(H,y,x)
remove y from the root list of H
make y a child of x, incrementing degree[x]
mark[y] ← FALSE
end
Extracting the Minimum Node
(Figures: a worked example – the minimum root 3 is removed and its children 18, 52, 38 join the root list; CONSOLIDATE then walks the root list with the degree array A[0..4], repeatedly linking pairs of equal-degree roots with the smaller key on top, until every root has a distinct degree; the final heap has root list 7, 18, 38 with min[H] = 7.)
Analysis of the
FIB-HEAP-EXTRACT-MIN Procedure
• If all trees in the fib-heap are unordered binomial trees before
the execution of the EXTRACT-MIN operation
then they are all unordered binomial trees afterward.
• There are two ways in which trees are changed:
(1) each child of the extracted root becomes a root;
each new tree is itself an unordered binomial tree
(2) trees are linked by the FIB-HEAP-LINK procedure only if
they have the same degree, hence a Uk is linked to a Uk
to form a Uk+1
Complexity Analysis of the
FIB-HEAP-EXTRACT-MIN Procedure
Actual Cost
1-st for loop: Contributes O(D(n))
3-rd for loop: Contributes O(D(n))
2-nd for loop:
Size of the root-list upon calling CONSOLIDATE is at most:
D(n) + t(H) - 1
D(n): upper bound on the number of children of the extracted
node
t(H) – 1: original t(H) root list nodes – the extracted node
Complexity Analysis of the
FIB-HEAP-EXTRACT-MIN Procedure
Each iteration of the inner while-loop links one
root to another thus reducing the size of the
root list by 1
Therefore the total amount of work performed in
the 2-nd for loop is at most proportional to
D(n) + t(H)
Thus, the total actual cost is O(D(n) + t(H))
Complexity Analysis of the
FIB-HEAP-EXTRACT-MIN Procedure
Amortized Cost
Potential before: t(H) + 2m(H)
Potential after: at most (D(n) + 1)+2m(H) since at most
D(n)+1 roots remain & no nodes marked
Amortized cost = O(D(n) + t(H)) +
[(D(n) + 1) + 2m(H)] – [t(H) + 2m(H)]
= O(D(n) + t(H)) + D(n) + 1 – t(H)
= O(D(n))
Bounding the Maximum Degree
For each node x within a fibonacci heap, define
size(x): the number of nodes, including
itself, in the subtree rooted at x
NOTE: x need not be in the root list; it can be
any node at all.
We shall show that size(x) is exponential in
degree[x]
Bounding the Maximum Degree
Lemma 1: Let x be a node with degree[x]=k
Let y1,y2,....,yk denote the children of x in the
order in which they are linked to x, from
earliest to the latest, then
degree[y1] ≥ 0 and degree[yi] ≥ i-2
for i = 2,3,...,k
Bounding the Maximum Degree
Proof: degree[y1] ≥ 0 is obvious.
For i ≥ 2:
(Figure: the children y1, y2, ..., yi were linked to x in that order; the nodes labelled z are children of x, linked before yi, that were later cut away.)
Bounding the Maximum Degree
• When yi is linked to x: at least y1,y2,....,yi-1 were all
children of x so we must have had degree[x] ≥ i – 1
• NOTE: z node(s) denotes the node(s)
that were children of x just before the
link of yi that are lost after the link of yi
• When yi is linked to x:
degree[yi] = degree[x] ≥ i – 1
since then, node yi has lost at most one child, we
conclude that degree[yi] ≥ i-2
Bounding the Maximum Degree
Fibonacci Numbers
F0 = 0, F1 = 1, and Fk = Fk-1 + Fk-2 for k ≥ 2
Bounding the Maximum Degree
Lemma 2: For all integers k ≥ 0,
F(k+2) = 1 + Σ_{i=0..k} Fi
Proof: By induction on k.
When k = 0:
1 + Σ_{i=0..0} Fi = 1 + F0 = 1 + 0 = 1 = F2
Bounding the Maximum Degree
Inductive Hypothesis: F(k+1) = 1 + Σ_{i=0..k-1} Fi
Then:
F(k+2) = Fk + F(k+1)
= Fk + (1 + Σ_{i=0..k-1} Fi)
= 1 + Σ_{i=0..k} Fi
Recall that F(k+2) ≥ Φ^k, where
Φ = (1 + √5)/2 ≈ 1.61803 is the golden ratio
Bounding the Maximum Degree
Lemma 3: Let x be any node in a fibonacci heap with
degree[x] = k then
size(x) ≥ Fk+2 ≥ Φk, where Φ = (1+√5)/2
Proof: Let Sk denote the lower bound on size(z) over all
nodes z such that
degree[z] = k
Trivially, S0 = 1, S1 = 2, S2 = 3
Note that,Sk ≤ size(z) for any node z with degree[z] = k
Bounding the Maximum Degree
As in Lemma-1, let y1,y2,....,yk denote the children of
node x in the order in which they were linked to x
size(x) ≥ Sk ≥ 1 + 1 + Σ_{i=2..k} S_{i-2} = 2 + Σ_{i=2..k} S_{i-2}
(1 for x itself, 1 for y1, and at least S_{i-2} for each
child yi with i ≥ 2, since degree[yi] ≥ i – 2)
Decreasing a Key
CUT(H, x, y)
 remove x from the child list of y, decrementing
degree[y]
add x to the root list of H
p[x] ← NIL
mark[x] ← FALSE
end
Decreasing a Key
CASCADING-CUT(H, y)
z ← p[y]
if z ≠ NIL then
if mark[y] = FALSE then
mark[y] ← TRUE
else
CUT(H, y, z)
CASCADING-CUT(H, z)
endif
endif
end
Decreasing a Key
(Figures, first example: key 46 is decreased to 15 – CUT(H, 46, 24) moves the node to the root list; its parent 24 was unmarked, so no cascading cuts occur.
Second example: key 35 is decreased to 5 – CUT(H, 35, 26) moves it to the root list, and since 26 was marked, CASCADING-CUT fires twice, cutting the marked ancestors 26 and 24 to the root list as well; min[H] becomes 5.)
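The mark/cut bookkeeping driving these examples can be simulated in a toy Python sketch – nodes track only parent and mark, and cut roots are collected in a list (this is not a full Fibonacci heap):

```python
# Toy simulation of CUT / CASCADING-CUT mark bookkeeping.
class Node:
    def __init__(self, key, parent=None):
        self.key, self.parent, self.mark = key, parent, False

root_list = []

def cut(x, y):
    # move x from under y to the root list; roots are never marked
    x.parent, x.mark = None, False
    root_list.append(x.key)

def cascading_cut(y):
    z = y.parent
    if z is not None:
        if not y.mark:
            y.mark = True       # first lost child: just mark y
        else:
            cut(y, z)           # second lost child: cut y too...
            cascading_cut(z)    # ...and recurse toward the root

# the chain 7 -> 24 -> 26 -> 35 from the example, with 24 and 26 marked
n7 = Node(7)
n24 = Node(24, n7)
n26 = Node(26, n24)
n35 = Node(35, n26)
n24.mark = n26.mark = True
cut(n35, n26)          # DECREASE-KEY cuts 35's node to the root list
cascading_cut(n26)     # 26 and 24 are marked, so both get cut as well
print(root_list)       # [35, 26, 24]
```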
Decreasing a Key
(Figures, third example: a key deep in a chain of marked ancestors is decreased to 10 – the cut is followed by a cascade that moves every marked ancestor on the path to the root list, unmarking each one.)
Potential Difference
= [(t(H) + c) + 2(m(H) – c + 2)] – [t(H) + 2m(H)]
= 4 – c
Deleting a Node
FIB-HEAP-DELETE(H, x)
FIB-HEAP-DECREASE-KEY(H, x, -∞) => O(1)
FIB-HEAP-EXTRACT-MIN(H) => O(D(n))
end
LINK
(Figure: T1 and T2 are linked – one root becomes a child of the other.)
• The root node with the smaller key pays for the potential
• The root node of the resulting tree still carries a unit
potential to pay for a further link during the
consolidate operation
Analysis of the Potential Function
• Why the potential function includes the term
2m(H)?:
– When a marked node y is cut by a cascading cut its
mark bit is cleared so the potential is reduced by 2
– One unit pays for the cut and clearing the mark
field
– The other unit compensates for the unit increase
in potential due to node y becoming a root
Analysis of the Potential Function
• That is, when a marked node is cleared by a
CASCADING-CUT:
t ← t + 1 and m ← m – 1, so
(t + 1 + 2(m – 1)) – (t + 2m) = –1
This unit decrease in potential pays for the cascading cut.
Note that the original cut (of node x) is paid for by its actual cost.
Bounding the Maximum Degree
• Why do we apply CASCADING-CUT during
DECREASE-KEY operation?
• To maintain the size of any tree/subtree
exponential in the degree of its root node.
e.g.: to prevent cases where
size[x] = degree[x] + 1
Bounding the Maximum Degree
(Figures: without cascading cuts a tree could degenerate to
size[x] = degree[x] + 1, e.g. size[x] = 7 with degree 6;
with cascading cuts a node x of degree 4 still roots a subtree
with size(x) = 8 ≥ Φ^4 ≈ 6.85 – exponential in its degree.)
2-3-4 Trees
• Multiway trees can have more than two children; in a
2-3-4 tree, each node has up to four children and
three data items.
• 2-3-4 Trees: features
– Are always balanced.
– Reasonably easy to program .
– Serve as an introduction to the understanding of B-Trees!!
• B-Trees: another kind of multi-way tree particularly
useful in organizing external storage, like files.
– B-Trees can have dozens or hundreds of children with
hundreds of thousands of records!
Introduction to 2-3-4 Trees
(Figure: a 2-3-4 tree – root 50; second level 30 and 60 70 80;
leaves 10 20, 40, 55, 62 64 66, 75, 83 86.)
2-3-4 Trees
• The 2, 3, and 4 in the name refer to how many links to child
nodes can potentially be contained in a given node.
• For non-leaf nodes with at least one data item ( a node will not
exist with zero data items), the number of links may be 2, 3, or
4.
• Non-leaf nodes must/will always have one
more child (link) than it has data items (see
below);
– Equivalently, if the number of child links is L and the
number of data items is D, then L = D+1.
(Figure: the example tree above – the root 50 has 1 data item
and 2 links; the node 60 70 80 has 3 data items and 4 links.)
More Introductory stuff
(Figure: in the example tree, 30 is a 2-node and 60 70 80 is a 4-node.)
2-3-4 Tree Organization
• Very different organization than for a binary tree.
• First, we number the data items in a node 0,1,2
and number child links: 0,1,2,3. Very Important.
More on 2-3-4 Tree Organization
(Figure: a node with data items A, B, C and four child links –
link 0 points to keys < A; link 1 to keys between A and B;
link 2 to keys between B and C; link 3 to keys > C.)
Equal keys are not permitted; leaves are all on the same
level; upper-level nodes are often not full; the tree stays
balanced. Its construction always maintains this balance,
even as additional data items are added. (ahead)
Searching a 2-3-4 Tree
• A very nice feature of these trees.
• You have a search key; go to the root.
• Retrieve the node; search its data items.
• If hit:
– done.
• Else
– Select the link that leads to the subtree with the
appropriate range of values.
– If you don’t find your target here, go to the next child.
(Notice the data items are sequential – VIP later.)
– etc. The data will ultimately be ‘found’ or ‘not found.’
212
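The search procedure above can be sketched as a compact, self-contained program. This is a simplified stand-in for the full tree234.java code shown later; the class and field names here are illustrative:

```java
// Minimal sketch of 2-3-4 search: at each node, scan the sorted
// keys left to right; on a hit we are done, otherwise follow the
// link whose range brackets the search key.
public class Search234 {
    static class Node {
        int[] keys;        // 1..3 keys, ascending
        Node[] children;   // keys.length + 1 children, or null for a leaf
        Node(int[] keys, Node... children) {
            this.keys = keys;
            this.children = (children.length == 0) ? null : children;
        }
    }

    // Returns true if key is present in the subtree rooted at node.
    static boolean find(Node node, int key) {
        while (node != null) {
            int j = 0;
            while (j < node.keys.length && key > node.keys[j])
                j++;                              // scan keys sequentially
            if (j < node.keys.length && node.keys[j] == key)
                return true;                      // hit: done
            if (node.children == null)
                return false;                     // leaf: not found
            node = node.children[j];              // follow the j-th link
        }
        return false;
    }

    public static void main(String[] args) {
        // The tree from the slides: root [50], then [30] and [60 70 80].
        Node tree = new Node(new int[]{50},
            new Node(new int[]{30},
                new Node(new int[]{10, 20}), new Node(new int[]{40})),
            new Node(new int[]{60, 70, 80},
                new Node(new int[]{55}), new Node(new int[]{62, 64, 66}),
                new Node(new int[]{75}), new Node(new int[]{83, 86})));
        System.out.println(find(tree, 64));   // true
        System.out.println(find(tree, 65));   // false
    }
}
```

The main method runs the "search for 64, 40, 65" exercise from the next slide: 64 is found after three node visits, 65 is reported missing at the leaf level.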
Try it: search for 64, 40, 65

                   [50]  2-node
            /                  \
   2-node [30]             [60 70 80]  4-node
         /    \          /    |    |    \
    [10 20]  [40]    [55] [62 64 66] [75] [83 86]
213
So, how do we Insert into this Structure?
214
Node Split – a bit more difficult (1 of 2)
215
Node Split – Insertion: more difficult – 2 of 2
Upon encountering a full node (while searching for a place to insert):
1. Split that node at that time.
2. Move the highest data item from the current (full) node into a new
node to the right.
3. Move the middle value of the node undergoing the split up to the
parent node.
(We know we can do all this because the parent node was not full.)
4. Retain the lowest item in the node.
5. The new node (to the right) has only one data item (the highest value).
6. The original (formerly full) node contains only the lowest of the three
values.
7. The two rightmost children of the original full node are disconnected
and reconnected to the new node as appropriate.
(They must be disconnected, since their parent's data has changed.)
The new connections conform to the linkage conventions, as expected.
8. Insert the new data item into the original leaf node.
Note: there can be multiple splits encountered en route to finding the
insertion point.
216
Insert: Here the split is NOT the root node.
Let’s say we want to add a 99 (from the book)…

               [62]
                   \
              [83 92 104]   <- full node: split this one
             /    |     \      \
         [74] [87 89]  [97]  [112]

Want to add a data value of 99.
217
Case 1 Insert: Split is NOT the root node
(Let’s say we want to add a 99 (from the book)…)
1. 104 starts a new node.
2. 92 moves up to the parent node, making it [62 92]. (We know
the parent was not full.)
3. 83 stays put.
218
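The outcome above can be checked with a small standalone sketch (leaf-level only; the class and method names are illustrative, not from the lecture code): splitting the full node [83 92 104] under the non-full parent [62] should leave 83 in place, move 92 up so the parent becomes [62 92], and start a new right node holding 104.

```java
// Sketch of steps 1-6 of the split: for a full leaf [lo, mid, hi],
// mid moves up into the (non-full) parent, hi starts a new right
// sibling, and lo stays put. Child bookkeeping (step 7) is omitted
// since a leaf has no children to reconnect.
import java.util.ArrayList;
import java.util.List;

public class LeafSplit {
    // parent: the non-full parent's keys; full: exactly [lo, mid, hi].
    // Returns {newParent, left, right} as three key lists.
    static List<List<Integer>> split(List<Integer> parent, List<Integer> full) {
        List<Integer> left = new ArrayList<>(full.subList(0, 1));   // lo stays
        List<Integer> right = new ArrayList<>(full.subList(2, 3));  // hi to new node
        List<Integer> newParent = new ArrayList<>(parent);
        newParent.add(full.get(1));                                 // mid moves up
        newParent.sort(null);                                       // keep keys ordered
        return List.of(newParent, left, right);
    }

    public static void main(String[] args) {
        List<List<Integer>> result = split(List.of(62), List.of(83, 92, 104));
        System.out.println(result);  // [[62, 92], [83], [104]]
    }
}
```

After the split, the new item 99 (greater than 92, less than 104) descends into the subtree between the promoted 92 and the new node.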
If Root itself is full: Split the Root
219
Splitting on the Way Down
• Note: once we hit a node that must be split (on the way
down), we know that when we move a data value ‘up’,
that parent node was not full.
– It may be full ‘now’, but it wasn’t on the way down.
• The algorithm is reasonably straightforward.
• Do practice the splits on Figure 10.7.
– You will see them later, on the next exam.
This is merely a data item stored at the nodes. In practice, this
might be an entire record or object. Here we are only showing the
key, where the key may represent the entire object.

// tree234.java
import java.io.*;
////////////////////////////////////////////////////////////////
class DataItem
{
   public int dData;                         // one data item
   //--------------------------------------------------------------
   public DataItem(int dd)                   // constructor
   {
      dData = dd;
   } // end constructor
   //--------------------------------------------------------------
   public void displayItem()                 // display item, format "/27"
   {
      System.out.print("/"+dData);
   } // end displayItem()
   //--------------------------------------------------------------
} // end class DataItem
221
The following code is for the processing that goes on
inside the Nodes themselves – not the entire tree.
This is not trivial code. Try to understand it totally.
We will dissect it carefully and deliberately!
222
This is what a Node looks like. Note the two arrays: a child array
and an item array. The child array is of size 4 – the links, the
maximum number of children. The item array is of size 3 – the
maximum number of data items in a node. numItems is the number of
items in itemArray; parent will be used as a reference when
inserting.

The major work is done by findItem(), insertItem() and removeItem()
(next slides) for a given node. These are complex routines and NOT
to be confused with find() and insert() for the Tree234 class
itself; these operate within THIS node.

Recall: references are automatically initialized to null and numbers
to 0 when their object is created, so Node doesn’t need a
constructor. (One of three slides of code for class Node.)

class Node
{
   private static final int ORDER = 4;
   private int numItems;
   private Node parent;
   private Node childArray[] = new Node[ORDER];
   private DataItem itemArray[] = new DataItem[ORDER-1];
   // -------------------------------------------------------------
   public void connectChild(int childNum, Node child)
   {                                        // connect child to this node
      childArray[childNum] = child;
      if(child != null)
         child.parent = this;
   }
   // -------------------------------------------------------------
   public Node disconnectChild(int childNum)
   {                                        // disconnect child from this node, return it
      Node tempNode = childArray[childNum];
      childArray[childNum] = null;
      return tempNode;
   }
   // -------------------------------------------------------------
   public Node getChild(int childNum)
      { return childArray[childNum]; }
   // -------------------------------------------------------------
   public Node getParent()
      { return parent; }
   // -------------------------------------------------------------
   public boolean isLeaf()
      { return (childArray[0]==null) ? true : false; }
   // -------------------------------------------------------------
   public int getNumItems()
      { return numItems; }
223
Class Node (continued)
The find routine looks for the data within the node where we are
located. The delete routine removes the largest item: it saves the
deleted item, sets the location’s contents to null, decrements the
number of items at the node, and returns the deleted data item.

   // -------------------------------------------------------------
   public DataItem getItem(int index)       // get DataItem at index
      { return itemArray[index]; }
   // -------------------------------------------------------------
   public boolean isFull()
      { return (numItems==ORDER-1) ? true : false; }
   // -------------------------------------------------------------
   public int findItem(int key)             // return index of item
   {                                        // (within node) if found,
      for(int j=0; j<ORDER-1; j++)          //    otherwise return -1
      {
         if(itemArray[j] == null)
            break;
         else if(itemArray[j].dData == key)
            return j;
      } // end for
      return -1;
   } // end findItem()
   // -------------------------------------------------------------
   public DataItem removeItem()             // removes largest item
   {                                        // assumes node not empty
      DataItem temp = itemArray[numItems-1]; // save item
      itemArray[numItems-1] = null;          // disconnect it
      numItems--;                            // one less item
      return temp;                           // return item
   }
   // -------------------------------------------------------------
   public void displayNode()                // format "/24/56/74/"
   {
      for(int j=0; j<numItems; j++)
         itemArray[j].displayItem();        // "/56"
      System.out.println("/");              // final "/"
   }
   // -------------------------------------------------------------
224
Class Node (continued)
The insert routine increments the number of items in the node, gets
the key of the new item, then loops from the right looking for the
spot to insert, shifting larger items one cell right as it goes.

   // -------------------------------------------------------------
   public int insertItem(DataItem newItem)
   {                                        // assumes node is not full
      numItems++;                           // will add new item
      int newKey = newItem.dData;           // key (int value) of new item
      for(int j=ORDER-2; j>=0; j--)         // start on right to examine data
      {                                     //    looking for spot to insert
         if(itemArray[j] == null)           // empty cell: go left one cell
            continue;
         if(newKey < itemArray[j].dData)    // new key is smaller:
            itemArray[j+1] = itemArray[j];  //    shift this item right
         else
         {
            itemArray[j+1] = newItem;       // insert new item here,
            return j+1;                     //    return its index
         }
      } // end for
      itemArray[0] = newItem;               // shifted all items: insert at 0
      return 0;
   } // end insertItem()
} // end class Node
226
This code is merely the interface for a client. Pretty easy to
follow. All the complex processing is undertaken in the Tree234
and Node classes.

class Tree234App
{
   public static void main(String[] args) throws IOException
   {
      Tree234 theTree = new Tree234();
      theTree.insert(50);
      theTree.insert(40);
      theTree.insert(60);
      theTree.insert(30);
      theTree.insert(70);
      theTree.displayTree();               // show the result
   }
   //-------------------------------------------------------------
   public static String getString() throws IOException
   {
      InputStreamReader isr = new InputStreamReader(System.in);
      BufferedReader br = new BufferedReader(isr);
      return br.readLine();
   }
   //-------------------------------------------------------------
   public static int getInt() throws IOException
   {
      String s = getString();
      return Integer.parseInt(s);
   }
   //-------------------------------------------------------------
} // end class Tree234App
228
The object of type Tree234 IS the entire tree. Note: the tree has
only one attribute, its root. This is all it needs. The finding,
searching, and splitting algorithms are all shown here. Note how
insert() calls the split routine whenever it meets a full node on
the way down.

class Tree234
{
   private Node root = new Node();          // make root node
   // -------------------------------------------------------------
   public int find(int key)
   {
      Node curNode = root;
      int childNumber;
      while(true)
      {
         if(( childNumber=curNode.findItem(key) ) != -1)
            return childNumber;             // found; recall findItem returns index
         else if( curNode.isLeaf() )
            return -1;                      // can't find it
         else                               // search deeper
            curNode = getNextChild(curNode, key);
      } // end while
   } // end find()
   // -------------------------------------------------------------
   public void insert(int dValue)           // insert a DataItem
   {
      Node curNode = root;
      DataItem tempItem = new DataItem(dValue);
      while(true)
      {
         if( curNode.isFull() )             // if node is full,
         {
            split(curNode);                 //    call to split node
            curNode = curNode.getParent();  // back up,
            curNode = getNextChild(curNode, dValue);  //    search once more
         } // end if(node is full)
         else if( curNode.isLeaf() )        // if node is leaf,
            break;                          //    go insert
         else                               // node is not full, not a leaf,
            curNode = getNextChild(curNode, dValue);  //    so go to lower level
      } // end while
      curNode.insertItem(tempItem);         // insert new DataItem
   } // end insert()
229
Class Tree234 (continued)
Be careful in here. Remember, the middle value is moved up to the
parent and must be removed from this node. The rightmost element is
moved into the new node as its leftmost data item, and the two
rightmost children move to the new node (see the creation of the
additional node below). Review the process and then note the code
that implements it.

   // -------------------------------------------------------------
   public void split(Node thisNode)         // split the node;
   {                                        // assumes node is full
      DataItem itemB, itemC;                // when you get here, you
      Node parent, child2, child3;          //    know you need to split
      int itemIndex;

      itemC = thisNode.removeItem();        // remove items from this node
      itemB = thisNode.removeItem();        //    (the second and third items)
      child2 = thisNode.disconnectChild(2); // remove children from this node
      child3 = thisNode.disconnectChild(3); //    (the rightmost two)

      Node newRight = new Node();           // make new node

      if(thisNode==root)                    // if this node is the root,
      {
         root = new Node();                 //    make a new root
         parent = root;                     //    root is our parent
         root.connectChild(0, thisNode);    //    connect to parent
      }
      else                                  // this node to be split is not
         parent = thisNode.getParent();     //    the root: get its parent

      // deal with parent
      itemIndex = parent.insertItem(itemB); // item B to parent
      int n = parent.getNumItems();         // total items?
      for(int j=n-1; j>itemIndex; j--)      // move parent's connections
      {
         Node temp = parent.disconnectChild(j);  // one child
         parent.connectChild(j+1, temp);         //    to the right
      }
      parent.connectChild(itemIndex+1, newRight); // connect newRight to parent

      // deal with newRight
      newRight.insertItem(itemC);           // item C to newRight
      newRight.connectChild(0, child2);     // connect to 0 and 1
      newRight.connectChild(1, child3);     //    on newRight
   } // end split()
230
Class Tree234 (continued)

   // -------------------------------------------------------------
   // gets appropriate child of node during search for value
   public Node getNextChild(Node theNode, int theValue)
   {
      // assumes node is not empty, not full, not a leaf
      int j;
      int numItems = theNode.getNumItems();
      for(j=0; j<numItems; j++)             // for each item in node,
      {                                     //    are we less?
         if( theValue < theNode.getItem(j).dData )
            return theNode.getChild(j);     // return left child
      } // end for                          // we're greater, so
      return theNode.getChild(j);           //    return right child
   } // end getNextChild()
   // -------------------------------------------------------------
   public void displayTree()
      { recDisplayTree(root, 0, 0); }
   // -------------------------------------------------------------
   private void recDisplayTree(Node thisNode, int level, int childNumber)
   {
      System.out.print("level="+level+" child="+childNumber+" ");
      thisNode.displayNode();               // display this node
      // call ourselves for each child of this node
      int numItems = thisNode.getNumItems();
      for(int j=0; j<numItems+1; j++)
      {
         Node nextNode = thisNode.getChild(j);
         if(nextNode != null)
            recDisplayTree(nextNode, level+1, j);
         else
            return;
      }
   } // end recDisplayTree()
   // -------------------------------------------------------------
} // end class Tree234
231
Efficiency Considerations for 2-3-4 Trees
• Searching:
• 2-3-4 Trees: only one node per level must be visited, but
– there is more data per node / level, so
– searches are fast.
• Recognize that all the data items at a node must be checked in
a 2-3-4 tree,
• but this is very fast and is done sequentially.
• All nodes in the 2-3-4 tree are NOT always full.
• So, the search times for a 2-3-4 tree and for a balanced binary
tree are approximately equal, and both are O(log2 n).
232
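The rough equality of the two search times can be sanity-checked numerically. Assuming a balanced binary tree needs about log2(n) levels, and a 2-3-4 tree needs between log4(n) (all 4-nodes) and log2(n) (all 2-nodes) levels, a quick sketch:

```java
// Rough level-count comparison for n items: a balanced binary tree
// vs the best case (all 4-nodes) of a 2-3-4 tree.
public class Heights {
    // Number of levels needed: ceil(log_base(n)).
    static int log(double n, double base) {
        return (int) Math.ceil(Math.log(n) / Math.log(base));
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        System.out.println("binary tree levels ~ " + log(n, 2));  // ~20
        System.out.println("2-3-4 best case    ~ " + log(n, 4));  // ~10
        // Per level a 2-3-4 search checks up to 3 keys sequentially,
        // so the total work stays O(log n) either way.
    }
}
```

A million items fit in about 10 levels of full 4-nodes versus about 20 binary levels; since each 2-3-4 node costs at most 3 sequential comparisons, the totals come out comparable.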
Efficiency Considerations for 2-3-4 Trees
• Storage
• 2-3-4 Trees: a node can have three data items and
up to four references.
• These can be an array of references or four specific
variables.
• If not all of this space is used, there can be considerable waste.
• In 2-3-4 trees, it is quite common to see many
nodes that are not full.
233
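That waste can be quantified with a quick sketch, assuming each node always reserves 3 item slots (and 4 child links) no matter how many are in use:

```java
// Storage utilization of a 2-3-4 node: 3 item slots are always
// allocated, even when only 1 (a 2-node) or 2 (a 3-node) hold data.
public class NodeWaste {
    // Fraction of the reserved item slots actually holding data.
    static double itemUtilization(int itemsInUse) {
        return itemsInUse / 3.0;
    }

    public static void main(String[] args) {
        System.out.println(itemUtilization(1));  // 2-node: ~0.33
        System.out.println(itemUtilization(2));  // 3-node: ~0.67
        System.out.println(itemUtilization(3));  // 4-node: 1.0
    }
}
```

So a tree dominated by 2-nodes wastes roughly two-thirds of its item slots, which is why the fixed-size node layout trades space for simplicity.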