Week 12

The document describes the Bellman-Ford algorithm for finding shortest paths in a graph with negative edge weights. It begins with an introduction of the algorithm and an example graph with negative edges. It then shows the steps of the algorithm on the example graph, including initializing a distance table and updating distances over multiple passes. The document also explains that Bellman-Ford can detect negative cycles in a graph. It concludes with discussing the time complexity of the algorithm and representing difference constraints as a graph that can be solved using Bellman-Ford.

Uploaded by

Satish Gandham
Copyright
© Attribution Non-Commercial (BY-NC)

33 Bellman-Ford Algorithm

33.1 Introduction
• recall Dijkstra’s algorithm
finds minimal weight path from a single source
to all other vertices in a graph or digraph
• Dijkstra will not work for graphs with negative edges
explain that negative edges may be possible and reasonable
• Bellman-Ford will work for graphs with negative edges

• it will also detect negative cycles, which make shortest-path weights undefined


explain with an example:
three vertices: s, a, b
three edges: w(s, a) = 1, w(a, b) = −2, w(b, a) = −1
show that we can keep cycling around:
s, a, b, a, b, . . . to get ever lower values of d(a)

33.2 An example
• develop algorithm using the following working example
• same example as shown on slides but there are errors there

[diagram: the example digraph on vertices s, a, b, c, d with edge weights
w(s, a) = 6, w(s, c) = 7, w(a, b) = 5, w(a, c) = 8, w(a, d) = −4,
w(b, a) = −2, w(c, b) = −3, w(c, d) = 9, w(d, b) = 7, w(d, s) = 2]
33.3 Initialize estimates
• use a table to show changes in estimates of distances and predecessors
• initialize table — no predecessors

ds da db dc dd
0 ∞ ∞ ∞ ∞

33.4 Revise estimates of distances


• repeat the following v − 1 times

• for each edge (u, v) in the graph,


set d(v) = min[d(v), d(u) + w(u, v)]
• if a distance is revised, note new predecessor vertex
• edges can be taken in any order
we will use alphabetical order
(a, b), (a, c), (a, d), (b, a), (c, b), . . . , (s, c)

              ds    da     db     dc     dd
Initially      0     ∞      ∞      ∞      ∞
After Pass 1   0    6, s    ∞     7, s    ∞
After Pass 2   0    6, s   4, c   7, s   2, a
After Pass 3   0    2, b   4, c   7, s   2, a
After Pass 4   0    2, b   4, c   7, s   −2, a

• show how we can use predecessor information to trace paths from source
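The passes above can be sketched in Python (a sketch, not part of the original notes; the edge list encodes the example graph, relaxed in alphabetical order):

```python
INF = float("inf")

# edges (u, v, weight) of the example graph, in alphabetical order
edges = [
    ("a", "b", 5), ("a", "c", 8), ("a", "d", -4),
    ("b", "a", -2),
    ("c", "b", -3), ("c", "d", 9),
    ("d", "b", 7), ("d", "s", 2),
    ("s", "a", 6), ("s", "c", 7),
]

def bellman_ford(vertices, edges, source):
    d = {v: INF for v in vertices}      # distance estimates
    pred = {v: None for v in vertices}  # predecessors, for tracing paths
    d[source] = 0
    for _ in range(len(vertices) - 1):  # v - 1 passes
        for u, v, w in edges:
            if d[u] + w < d[v]:         # relax edge (u, v)
                d[v] = d[u] + w
                pred[v] = u
    return d, pred
```

Running `bellman_ford("sabcd", edges, "s")` reproduces the final row of the table: ds = 0, da = 2, db = 4, dc = 7, dd = −2, and following the predecessors from d back through a, b, c to s traces the shortest path.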

33.5 Check for negative loops


• at end, for each edge (u, v)
check to see if d(v) − d(u) > w(u, v)
• if so, we have a problem
try to convince them with an example:
show diagram when d(u) = 3, d(v) = 6, w(u, v) = 2
• in such a case, algorithm returns the value false
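The check can be sketched in Python using the three-vertex example from the introduction (the function name is my own):

```python
INF = float("inf")

# the three-vertex example: cycle a -> b -> a has total weight -3
edges = [("s", "a", 1), ("a", "b", -2), ("b", "a", -1)]

def has_negative_cycle(vertices, edges, source):
    d = {v: INF for v in vertices}
    d[source] = 0
    for _ in range(len(vertices) - 1):   # the usual v - 1 passes
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    # if any edge can still be relaxed, some d(v) - d(u) > w(u, v) remains
    return any(d[u] + w < d[v] for u, v, w in edges)
```

On this graph the check fires, so Bellman-Ford would return false; dropping the edge (b, a) removes the cycle and the check passes.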

33.6 Complexity considerations


• initialization: O(v)
• main loop: O(ve)
• checking for negative loops: O(e)
• total: O(ve)

33.7 Study of attitudes to course


• leave class to Michelle Craig
• she will be running an informal session examining
what is good about the course
what is bad about the course
• not a replacement for departmental surveys

34 Topological Sorting and Difference Constraints
34.1 Partial ordering
• examine the idea of a partial order briefly
• for a partial order ≺, we must have
∀a, a ⊀ a (irreflexive)
if a ≺ b and b ≺ c, then a ≺ c (transitive)
• an example
proper subsets

• note that not all elements need be related


show with proper subsets

34.2 Topological sorting


• not like regular sorting
• here we want to impose a linear order on a partial order
• as an example, course prerequisites
some courses may have to be taken before others
• can illustrate with a DAG (explain)
• use example of CS courses
Intro (I), Data Structures (DS), Artificial Intelligence (AI)
Programming Languages (PL), Compilers (C), Operating Systems (OS)
I ≺ DS
I ≺ AI
I ≺ PL
DS ≺ C
PL ≺ C
C ≺ OS
DS ≺ OS

• how can we take courses if we only do one at a time?


• solve this problem with a topological sort

34.3 Using DFS on a DAG


• start at a node with indegree zero
obviously, there must be at least one
• from here we can find a node that could be visited last
it must have outdegree zero
• if we use a DFS starting at source
we will eventually come to such a node
• then back up, if necessary and continue DFS

• slides suggest we can keep track of things using following algorithm


DFS(vi)
    visit vi, mark as visited
    arrival[vi] = ++time
    for all unvisited vertices vk adjacent from vi
        DFS(vk)
    departure[vi] = ++time

• vertex that has lowest departure time can be last in topological sort
why? — explain

• other vertices precede this — in reverse order of departure times
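The idea above can be sketched in Python (the recursive helper and the `courses` adjacency lists are illustrative, not from the slides):

```python
def topological_sort(graph):
    """List vertices in reverse order of DFS departure time."""
    visited, departures = set(), []

    def dfs(v):
        visited.add(v)                  # visit v, mark as visited
        for k in graph[v]:              # vertices adjacent from v
            if k not in visited:
                dfs(k)
        departures.append(v)            # v departs now

    for v in graph:                     # restart DFS as needed
        if v not in visited:
            dfs(v)
    return departures[::-1]             # lowest departure time comes last

# the course-prerequisite DAG
courses = {
    "I": ["DS", "AI", "PL"],
    "DS": ["C", "OS"],
    "PL": ["C"],
    "AI": [],
    "C": ["OS"],
    "OS": [],
}
```

`topological_sort(courses)` yields an order in which every prerequisite precedes the course that requires it, so the courses can be taken one at a time.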

34.4 Linear programming


• chat briefly
• mention simplex method

34.5 Difference constraints


• a specialized form of constraint
always has the form xi − xj ≤ bk

• use example on slides:


x1 − x2 ≤ 0
x1 − x5 ≤ −1
x2 − x5 ≤ 1
x3 − x1 ≤ 5
x4 − x1 ≤ 4
x4 − x3 ≤ −1
x5 − x3 ≤ −3
x5 − x4 ≤ −3
• we can write this in matrix form:

\[
\begin{pmatrix}
 1 & -1 &  0 &  0 &  0 \\
 1 &  0 &  0 &  0 & -1 \\
 0 &  1 &  0 &  0 & -1 \\
-1 &  0 &  1 &  0 &  0 \\
-1 &  0 &  0 &  1 &  0 \\
 0 &  0 & -1 &  1 &  0 \\
 0 &  0 & -1 &  0 &  1 \\
 0 &  0 &  0 & -1 &  1
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix}
\le
\begin{pmatrix} 0 \\ -1 \\ 1 \\ 5 \\ 4 \\ -1 \\ -3 \\ -3 \end{pmatrix}
\]

34.6 Using a constraint graph


• instead of trying to solve the system algebraically,
we can use a constraint graph

• constraint graph has a vertex vi for each xi


plus another vertex v0
• for i, j > 0, an edge (vi , vj ) exists iff xj − xi ≤ bk is a constraint
in such a case, the edge has weight wij = bk
• in addition, there are edges from v0 to all other vertices
all of these have weight zero

[diagram: the constraint graph on vertices v0 , . . . , v5 ; zero-weight edges
from v0 to each other vertex, plus one edge per constraint:
w(v2 , v1 ) = 0, w(v5 , v1 ) = −1, w(v5 , v2 ) = 1, w(v1 , v3 ) = 5,
w(v1 , v4 ) = 4, w(v3 , v4 ) = −1, w(v3 , v5 ) = −3, w(v4 , v5 ) = −3]

• to solve the system of inequalities, we apply Bellman-Ford to the graph
using v0 as the initial vertex

• show the result for this graph:


d(v0 , v1 ) = −5
d(v0 , v2 ) = −3
d(v0 , v3 ) = 0
d(v0 , v4 ) = −1
d(v0 , v5 ) = −4
• this gives a solution to the problem:
(x1 , x2 , x3 , x4 , x5 ) = (−5, −3, 0, −1, −4)
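The whole construction can be sketched in Python (a sketch, assuming the edge convention above; vertex 0 plays the role of v0):

```python
INF = float("inf")

# the eight constraints, each recorded as (i, j, b) for xi - xj <= b
constraints = [
    (1, 2, 0), (1, 5, -1), (2, 5, 1), (3, 1, 5),
    (4, 1, 4), (4, 3, -1), (5, 3, -3), (5, 4, -3),
]

# constraint xi - xj <= b becomes edge (vj, vi) with weight b;
# v0 gets zero-weight edges to every other vertex
edges = [(j, i, b) for i, j, b in constraints] + [(0, i, 0) for i in range(1, 6)]

d = {v: INF for v in range(6)}
d[0] = 0
for _ in range(5):                      # v - 1 = 5 passes of Bellman-Ford
    for u, v, w in edges:
        if d[u] + w < d[v]:
            d[v] = d[u] + w

solution = [d[i] for i in range(1, 6)]  # (x1, ..., x5)
```

The resulting `solution` is (−5, −3, 0, −1, −4), and substituting it back satisfies every one of the eight inequalities.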
• why does it work?
no formal proof but . . .

• consider two nodes vi and vj


connected by an edge (i, j) with weight wij
let di = d(v0 , vi )
 
[diagram: vertex vi (distance di ) with an edge of weight wij to vertex vj (distance dj )]

• now we know that dj − di ≤ wij


because otherwise we could make a shorter path from v0 to vj
by going from v0 to vi and then along (i, j)
(as we noted previously in developing the Bellman-Ford algorithm)
• but this gives us values for the relationship we want:
xj − xi ≤ wij

• for a formal proof, see


Cormen, Leiserson, Rivest, Stein:
Introduction to Algorithms

34.7 Complexity considerations


• recall that Bellman-Ford runs in O(ve) time
• here, with n unknowns and m constraints,
we need a graph with n + 1 vertices and n + m edges
• time is O((n + 1)(n + m)) or O(n² + nm)

35 Hashing
35.1 Introduction
• a way of implementing the ADT dictionary
with operations
insert
search
delete
• excluding operations like
sort
largest
etc.
• hashing can provide a method that is O(1) (if we are careful)

• basic idea:
a place for everything and everything in its place
• analogy with students in class
insertion — people always choose the same seats
search — to see if somebody is in class, I only look in one place

35.2 Hash functions


• to achieve our goals, we could use
a numerical key for each item to be stored
store each item in an array whose size is equal to the range of keys
• probably not practical for most situations
key range is usually too big
e.g. student numbers, SINs
• hashing applies a function to a key that produces a smaller range
for an item with key k, store it at h(k) where 0 ≤ h(k) < m
• give an example of insertion using hashing
class with student number, k, as key
classroom with 117 seats, numbered from 0 to 116
hash function might be h(k) = k mod 117
assign student k to seat number h(k)
this is known as the division method
• for retrieval, to see if a student is in the room
use student number, n
look in seat number h(n)
• possible problem
we could have k1 ≠ k2 but h(k1 ) = h(k2 )
this is known as a collision
• a good hashing function will minimize collisions
a perfect hash function would eliminate them entirely
(but this is usually impossible)
• assuming something less than perfection
we must have a way of resolving collisions
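The classroom example can be sketched in Python (the student numbers are made up for illustration):

```python
M = 117                 # seats numbered 0 to 116

def h(k):
    """Division-method hash: assign student number k a seat."""
    return k % M

# two distinct student numbers that hash to the same seat: a collision
k1 = 11705              # 11705 = 117 * 100 + 5, so h(k1) = 5
k2 = k1 + 117           # adding M leaves the remainder unchanged
```

Here h(k1) = h(k2) = 5 even though k1 ≠ k2, so both students are assigned the same seat and the collision must be resolved somehow.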

35.3 Choosing a good hash function
• ideally, we would like to have each key equally likely to hash into any location
• not always possible but we can avoid some pitfalls
• sometimes plain keys show patterns
e.g. student numbers: 99. . .
e.g. Ontario driver’s licences: last six digits are birthdate
we sometimes use only some digits for key
• sometimes no numerical key
can convert characters to digits

• if we are using the division method, we must be careful of the array size
use a prime to avoid patterns
• to show why, consider using an array of size 100
now h(k) would depend only on last two digits of k
for driver’s licence, hash code would depend only on day of birth 01 to 31

35.4 Resolving collisions by separate chaining


• here we use an array size, m, that may be smaller than the number of keys n
• each entry in the array is a pointer to a linked list
keys and data form elements of the linked lists — called chains

• do an example
use an array of size 7
use h(k) = k mod 7
insert a sequence of values including some collisions
• usually we organize the linked lists as LIFO lists
this makes insertions easy
• if the lengths of the chains are kept fairly short
searching and deleting can be done quickly using a sequential search
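A Python sketch of separate chaining with m = 7 (the class and method names are my own):

```python
class ChainedHashTable:
    """Hash table resolving collisions by separate chaining."""

    def __init__(self, m=7):
        self.m = m
        self.chains = [[] for _ in range(m)]          # one chain per slot

    def h(self, k):
        return k % self.m

    def insert(self, k, value):
        self.chains[self.h(k)].insert(0, (k, value))  # LIFO: prepend

    def search(self, k):
        for key, value in self.chains[self.h(k)]:     # sequential search
            if key == k:
                return value
        return None

    def delete(self, k):
        chain = self.chains[self.h(k)]
        for i, (key, _) in enumerate(chain):
            if key == k:
                del chain[i]
                return
```

For example, keys 10 and 17 both hash to slot 3, so they end up on the same chain, with the later insertion at the front.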

35.5 Resolving collisions by open addressing


• another approach to collision resolution uses a large array
array size m is quite a bit larger than the number of keys, n
• can still get collisions
a variety of techniques to resolve them
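One such technique is linear probing; a Python sketch (the class is illustrative, and deletion is omitted since it needs tombstones):

```python
class LinearProbingTable:
    """Open addressing: on a collision, try successive slots."""

    def __init__(self, m=13):
        self.m = m
        self.slots = [None] * m         # None marks an empty slot

    def insert(self, k, value):
        i = k % self.m
        for _ in range(self.m):
            if self.slots[i] is None or self.slots[i][0] == k:
                self.slots[i] = (k, value)
                return
            i = (i + 1) % self.m        # probe the next slot
        raise RuntimeError("table is full")

    def search(self, k):
        i = k % self.m
        for _ in range(self.m):
            if self.slots[i] is None:   # an empty slot ends the probe run
                return None
            if self.slots[i][0] == k:
                return self.slots[i][1]
            i = (i + 1) % self.m
        return None
```

With m = 13, keys 5 and 18 collide at slot 5, so 18 is placed in slot 6; a search for either key follows the same probe sequence.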

35.6 Course evaluations


• get volunteers to handle this
