
NOI.PH Training: Week 4

Jared Guissmo Asuncion


Contents

1 Divide and Conquer
   1.1 Modular Exponentiation
   1.2 Binary Search
   1.3 Merge Sort
   1.4 Longest Increasing Subsequence
      1.4.1 An O(n^2) solution
      1.4.2 A faster solution
2 Number Theory
   2.1 Modulo
   2.2 Prime Factorization
3 Combinatorics
4 Exercises
   4.1 Required Exercises
   4.2 Optional Exercises

Chapter 1

Divide and Conquer

The divide-and-conquer paradigm is a useful technique in solving certain problems. The idea of this paradigm is simple. Suppose we have a problem whose input size is N. Then, we must

• find a way to split the big problem of size N into two smaller subproblems of size N/2, and

• find a way to obtain a solution for the big problem using the solutions of the two smaller subproblems.

1.1 Modular Exponentiation

A good first example is modular exponentiation, but first we need some facts about integer division.

Theorem 1.1 (division algorithm in ℤ). Let a, b ∈ ℤ with b > 0. Then, there exist unique integers q and r with a = bq + r such that 0 ≤ r < b.

Remark 1.2. From theorem 1.1, we are sure that the remainder r when a is divided by m satisfies 0 ≤ r < m .

Notation 1.3. We write a (mod m ) to denote the remainder when a is divided by m .

We are now ready to state the problem.

Problem 1.4. Let b, n, m be integers with n non-negative and m > 0. Compute b^n (mod m).

The naïve approach to this problem is to compute

b, b^2, b^3, b^4, . . . , b^n (mod m).

However, this will take O(n) multiplications¹. We can do better. For simplicity, we first assume that n is a power of 2, say n = 2^x. We then make the trivial observation that

b^n ≡ b^(n/2) · b^(n/2) (mod m).

We have divided n into two parts. We can do it again: b^(n/2) ≡ b^(n/2^2) · b^(n/2^2) (mod m). Continuing, we eventually reach b^(n/2^(x−1)) = b^2 ≡ b · b (mod m). This means that to solve b^n = b^(2^x) (mod m), we just need to compute

b, b^2, b^(2^2), . . . , b^(2^x) (mod m).

¹ Very, very informally speaking, O(n) means that you will take around cn multiplications, where c is some constant.


These can be solved by O(x) squarings (or x multiplications). Take note that x = log₂ n. And so, O(log n) multiplications is clearly an improvement over the naïve approach, which needs O(n) multiplications. However, we are not yet done. What if n is not a power of 2? What if it's odd? If we assume that n is odd, then we can write it as n = 2k + 1 where k is an integer. Thus, we have

b^n ≡ b^(2k+1) ≡ b^(2k) · b (mod m).

But take note that we can do the divide-and-conquer thingy with b^(2k) since it's equal to b^k · b^k. Hence, just like the approach we just finished talking about, we will only ever need to compute b, b^2, b^(2^2), . . . (mod m). Hence, we will also need to make O(log n) multiplications. Finally, we state the algorithm in an organized manner:

Algorithm 1.5 (modular exponentiation). modpow(b, n, m)

Input: integers b, n, m with n non-negative and m > 0.

Output: b^n (mod m).

1. If n is 0, return 1 (mod m).

2. If n is 1, return b (mod m).

3. If n ≥ 2:

   (a). Solve for x = modpow(b, n/2, m). Here, / means integer division.

   (b). If n is even, return x^2 (mod m).

   (c). If n is odd, return x^2 · b (mod m).

Implementation 1.6. Here is an implementation of algorithm 1.5, in Python.

    def modpow(b, n, m):
        if n == 0:
            return 1 % m
        if n == 1:
            return b % m
        x = modpow(b, n // 2, m)  # // is integer division
        if n % 2 == 0:
            return (x * x) % m
        else:
            return (b * x * x) % m
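The same recursion can also be unrolled into a loop that scans the bits of n from least significant to most significant. A sketch (the function name modpow_iter is ours), which can be checked against Python's built-in three-argument pow:

```python
def modpow_iter(b, n, m):
    # iterative binary exponentiation: repeatedly square the base,
    # multiplying it into the result whenever the current bit of n is set
    result = 1 % m
    base = b % m
    while n > 0:
        if n % 2 == 1:                 # current bit of n is set
            result = (result * base) % m
        base = (base * base) % m       # square the base at each step
        n //= 2
    return result

# quick check against Python's built-in three-argument pow
assert modpow_iter(2, 10, 1000) == 24
assert modpow_iter(5, 117, 19) == pow(5, 117, 19)
```

Both versions perform O(log n) multiplications; the iterative form avoids recursion depth limits for very large n.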

1.2 Binary Search

Another generic problem-solving technique that uses the divide-and-conquer paradigm is the binary search method. We state

the following problem:

Problem 1.7. Let p : {0, 1, . . . , n − 1} → {true, false} be a function such that

p(i) = false if i < k, and p(i) = true if i ≥ k.     (1.8)

Given a way to determine the answer to p(j) for any j, determine k.

The naïve approach to this problem is to go through each index from left to right and check if p(i) = true. This means that you will have to call p at most n times! Not only is this inefficient, but it does not utilize the special form of the function. Now, how do we abuse this special property of p? We make the simple observation that:

Observation 1.9. If p(j) = true, then k ≤ j; and if p(j) = false, then j < k.

This means that if we choose j to be the middle of the remaining indices, we essentially cut our search space in half! And so, we have the following algorithm:

Algorithm 1.10 (binary search). binsearch(a, b, p)

Input: integers a, b ∈ {0, 1, . . . , n − 1} and a function p satisfying 1.8.

Output: the lowest index a ≤ k ≤ b such that p(k) = true.

1. If a > b, then k does not exist.

2. If a = b, check if p(a) = true.

   • If it is true, then return a.

   • If it is false, then k does not exist.

3. Determine an integer j such that j − a and b − j differ by at most 1.

4. Check if p(j) = true.

   • If it is true, then a ≤ k ≤ j. Return binsearch(a, j, p).

   • If it is false, then j < k ≤ b. Return binsearch(j+1, b, p).

Now, let's turn this algorithm into code. Note that we can let j = (a + b)/2, the average of a and b. However, a nasty test case might force a + b to overflow. There is no need to worry, because by the power of mathematics, we have that

j = (a + b)/2 = 2a/2 + (b − a)/2 = a + (b − a)/2.

Math has saved the day! We now conclude that the safer option is to take j = a + (b − a)/2. Indeed, if a and b are non-negative integers whose value is at most MAX_INT, then b − a will also be at most MAX_INT. Unlike God's love, your computations will not be overflowing.

Remark 1.11. Algorithm 1.10 requires O(log n) evaluations of p.

Implementation 1.12. Here is an implementation of 1.10, in Python.

    def binsearch(lo, hi, p):
        while lo < hi:
            mid = lo + (hi - lo) // 2
            if p(mid):
                hi = mid
            else:
                lo = mid + 1
        if p(lo):
            return lo
        return None  # no answer

Example 1.13. As problem 1.7 is stated rather generally, here are a few example problems wherein you can apply this algorithm.

1. Given a sorted array of integers x_0 ≤ x_1 ≤ . . . ≤ x_{n−1}, find the smallest integer in the array whose value is at least k.

Solution: Let p(i) = (x_i ≥ k). Note that this satisfies 1.8. Then the answer is x_j, where j = binsearch(0, n-1, p).

2. Given a sorted array of integers x_0 ≤ x_1 ≤ . . . ≤ x_{n−1}, determine if the integer k is on the list.

Wrong solution: Let p(i) = (x_i = k). This does not satisfy 1.8.

Solution: Let p(i) = (x_i ≥ k). Note that this satisfies 1.8. Say yes if and only if x_j = k, where j = binsearch(0, n-1, p).

3. Given a sorted array of integers x_0 ≤ x_1 ≤ . . . ≤ x_{n−1}, find the largest index i such that x_i = k.

Solution: Let p(i) = (x_i > k). Note that this satisfies 1.8. Let j = binsearch(0, n-1, p). If j > 0 and x_{j−1} = k, return j − 1. Otherwise, k does not occur.

4. Given a dictionary with words listed in alphabetical order, determine if the word banana is there.

5. Given a (not necessarily sorted) list of integers x_0, . . . , x_{n−1} whose first k entries are composite numbers and the rest are prime, find the first prime on the list.

Solution: Let p(i) = (x_i is prime). Note that this satisfies 1.8. Then the answer is x_j, where j = binsearch(0, n-1, p). Note that evaluating p is costly in this case. If we used the naïve approach, we would be doing it O(n) times. Here, we do it O(log n) times.

6. Given a polygon X completely contained in the first quadrant of the 2D Cartesian plane, find a line through the origin which divides X into 2 parts of equal area.

Solution: Let p(m) = (the area of X on the 'left side' of the line y = mx is at least half the area of X). Let a and b be such that X is completely on the right side of y = ax and on the left side of y = bx. Return the slope found by binsearch(a, b, p). Note that the domain of p is no longer the finite set {0, 1, . . . , n − 1} but instead the interval [a, b] ⊆ ℝ. Can you formulate a new version of 1.8 that adapts to this problem?
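Examples 1 and 2 above translate directly into code. A self-contained sketch (the array x and the target values below are made-up test data, and we assume the sought element exists):

```python
def binsearch(lo, hi, p):
    # smallest index in [lo, hi] where p is true, or None if p is never true
    while lo < hi:
        mid = lo + (hi - lo) // 2
        if p(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo if p(lo) else None

x = [1, 3, 3, 5, 8, 13]

# example 1: smallest element whose value is at least 4
j = binsearch(0, len(x) - 1, lambda i: x[i] >= 4)
assert x[j] == 5

# example 2: is 4 on the list? (no: the element found is not 4)
assert x[j] != 4

# example 2 again with k = 3 (yes: the element found equals 3)
j2 = binsearch(0, len(x) - 1, lambda i: x[i] >= 3)
assert x[j2] == 3
```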

1.3 Merge Sort

One of the fastest sorting algorithms follows the divide-and-conquer paradigm. It is more commonly known as merge sort. But

before we can state the problem of sorting in full generality, we define the notion of a totally-ordered set.

Definition 1.14. A totally-ordered set (S, ≤) is a set S equipped with a comparison function ≤ on S such that for any a, b, c ∈ S, the following conditions hold:

1. reflexivity: a ≤ a

2. antisymmetry: if a ≤ b and b ≤ a, then a = b

3. transitivity: if a ≤ b and b ≤ c, then a ≤ c

4. totality (here called trichotomy): at least one of a ≤ b or b ≤ a holds

Example 1.15. If (X , ≤ ) is a totally-ordered set, then we can define a comparison function ≥ on X such that x ≥ y if and only

if y ≤ x . Then (X , ≥ ) is a totally-ordered set.

Example 1.16. If (X, ≤) and (Y, ≤) are totally-ordered sets, then we can define a comparison function ≤ on X × Y as follows: for any a = (x_0, y_0), b = (x_1, y_1) ∈ X × Y, we say that a ≤ b if one of the following conditions is satisfied:

1. x_0 < x_1, or

2. x_0 = x_1 and y_0 ≤ y_1.

Then (X × Y, ≤) is a totally-ordered set.
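This product order is exactly how Python compares tuples, so a quick sanity check is possible:

```python
# Python tuples are compared lexicographically, as in Example 1.16:
assert (1, 9) <= (2, 0)   # first coordinates differ: 1 < 2 decides
assert (1, 2) <= (1, 5)   # first coordinates equal: compare 2 <= 5

# sorting pairs therefore orders them by the product order
assert sorted([(2, 1), (1, 9), (1, 2)]) == [(1, 2), (1, 9), (2, 1)]
```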

Example 1.17. Consider the set

C = {A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z}.

Let f : C → {65, . . . , 90} be such that f(A) = 65, f(B) = 66, etc. Define ≤ to be a comparison function on C such that for each a, b ∈ C, a ≤ b if and only if f(a) ≤ f(b) (as integers). Then (C, ≤) is a totally-ordered set.

Example 1.18. Let S be a set and consider (P(S), ⊆), where P(S) is the set of subsets of S and ⊆ is the usual subset comparison function. Note that this is not a totally-ordered set. To show this, consider the set S = {0, 1}. Note that {0} and {1} are subsets of S (and thus elements of P(S)). However, observe that {0} ⊄ {1} and {1} ⊄ {0}, and this violates the trichotomy property in definition 1.14. Indeed, it is not a totally-ordered set. However, such sets, which do not necessarily have the trichotomy property but satisfy the three other properties, are called partially-ordered sets.

Definition 1.19. If (X, ≤) is a totally-ordered set in which X is finite, then there exist s and t such that for all x ∈ X, we have that

s ≤ x and x ≤ t.

We call s the smallest element of X and t the largest element of X. We denote s by min(X) and t by max(X).

Notation 1.20. When the context is clear, we usually drop the ≤ and write just the name X of the set when we talk about a

totally-ordered set (X , ≤ ).

Problem 1.21. Let (X, ≤) be a totally-ordered set. Let x_0, . . . , x_{n−1} ∈ X. Find a one-to-one function s : {0, . . . , n − 1} → {0, . . . , n − 1} such that

x_{s(0)} ≤ x_{s(1)} ≤ · · · ≤ x_{s(n−1)}.

The divide step of the merge sort algorithm, as usual, involves splitting the array into two parts. The much more interesting step is the combine step (or the merge step). Here, we assume that we already have two sorted subarrays S_1 and S_2, and we want S to be the final sorted array. As both subarrays S_1 and S_2 are sorted, min(S) will either be min(S_1) or min(S_2), depending on which one is smaller². After figuring out the first element of S (i.e. the smallest), we remove it from its respective subarray. Now, we take the minimum between min(S_1) and min(S_2) to find the second (smallest) element of S. And so on.
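The merge step just described can be sketched on its own, on plain sorted lists of numbers (the function name merge is ours):

```python
def merge(s1, s2):
    # combine two sorted lists into one sorted list, as described above:
    # repeatedly take the smaller of the two front elements
    out = []
    i1 = i2 = 0
    while i1 < len(s1) and i2 < len(s2):
        if s1[i1] <= s2[i2]:      # ties go to s1, which keeps the merge stable
            out.append(s1[i1])
            i1 += 1
        else:
            out.append(s2[i2])
            i2 += 1
    # one of the lists is exhausted; append the remainder of the other
    return out + s1[i1:] + s2[i2:]

assert merge([1, 4, 6], [2, 3, 7]) == [1, 2, 3, 4, 6, 7]
```

Each element is copied exactly once, so merging two lists with n elements in total takes O(n) comparisons.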

Algorithm 1.22 (merge sort). mergesort(a, b)

Fix x_0, . . . , x_{n−1} ∈ X, where X is a totally-ordered set.

Input: a, b ∈ {0, . . . , n − 1}.

Output: a one-to-one function s : {a, . . . , b} → {a, . . . , b} such that x_{s(i)} ≤ x_{s(j)} whenever i ≤ j.

1. If a = b, then let s(a) = a and return s.

2. Determine an integer j such that j − a and b − j differ by at most 1.

3. Let s_1 = mergesort(a, j) and s_2 = mergesort(j+1, b).

4. Let i_1 = a, i_2 = j + 1.

5. For k = a, a + 1, . . . , b:

   • If i_2 > b, set s(k) = s_1(i_1) and increment i_1 by 1.

   • Otherwise, if i_1 > j, set s(k) = s_2(i_2) and increment i_2 by 1.

   • Otherwise, if x_{s_1(i_1)} ≤ x_{s_2(i_2)}, set s(k) = s_1(i_1) and increment i_1 by 1.

   • Otherwise, set s(k) = s_2(i_2) and increment i_2 by 1.

6. Return s.

² We break ties by comparing the original indices, which are both integers. In this case, we take min(S_1).

Remark 1.23. Algorithm 1.22 requires O(n log n) comparisons. This is easy to see if n = 2^x, because you will have to call the function with an array of size n one time, with an array of size n/2 two times, an array of size n/4 four times, and so on until you arrive at arrays of size n/2^x = 1, and you have to deal with n elements at each level. This means that the depth of the recursion will be log n. Moreover, at each depth you will need to compare about n times³.

Implementation 1.24. Here is an implementation of 1.22, in Python. Here x is the (global) array being sorted, and the function returns a list arr of indices with x[arr[0]] ≤ x[arr[1]] ≤ · · ·.

    def mergesort(lo, hi):
        if lo == hi:
            return [lo]
        mid = lo + (hi - lo) // 2
        s1 = mergesort(lo, mid)
        s2 = mergesort(mid + 1, hi)
        i1 = 0
        i2 = 0
        arr = []
        for k in range(lo, hi + 1):
            if i2 == len(s2):
                arr.append(s1[i1])
                i1 += 1
            elif i1 == len(s1):
                arr.append(s2[i2])
                i2 += 1
            elif x[s1[i1]] <= x[s2[i2]]:
                arr.append(s1[i1])
                i1 += 1
            else:
                arr.append(s2[i2])
                i2 += 1
        return arr

1.4 Longest Increasing Subsequence

Suppose a_1, a_2, . . . , a_n are from a totally-ordered set (X, ≤). We call this a finite sequence S in X of length n. A subsequence of S is obtained by removing some (possibly none, possibly all) of the terms of S. For people who like formal definitions:

Definition 1.25. Let S = (a_1, a_2, . . . , a_n) be a sequence. Let ℓ be a non-negative integer such that ℓ ≤ n. Consider i_1, . . . , i_ℓ such that

1 ≤ i_1 < i_2 < · · · < i_ℓ ≤ n.

Then,

a_{i_1}, a_{i_2}, . . . , a_{i_ℓ}

is a subsequence of S. Note that if ℓ = 0, we end up with an empty sequence. For our purposes, we consider this a subsequence of S.

³ For example, since you will call the function 2^i times with an input array of size n/2^i, this batch of calls will require about 2^i · n/2^i = n comparisons.

Optional Exercise 1.26. How many subsequences does a sequence of length n have? Assume that all the terms are distinct for

simplicity.

We also define the following types of sequences:

Definition 1.27. Let S = (a 1 , a 2 , . . . , a n ) be a sequence in a totally-ordered set (X , ≤ ). We write a < b to denote that a ≤ b

but a , b .

• If

a1 < a2 < · · · < an

then S is said to be a (strictly) increasing sequence.

• If

a1 ≤ a2 ≤ · · · ≤ an

then S is said to be a non-decreasing sequence.

• If

a1 > a2 > · · · > an

then S is said to be a (strictly) decreasing sequence.

• If

a1 ≥ a2 ≥ · · · ≥ an

then S is said to be a non-increasing sequence.

We are now ready to pose the main problem of this section:

Problem 1.28. Given a sequence S in a totally-ordered set X , find the length of the longest increasing subsequence of S .

Optional Exercise 1.29. One naïve solution to problem 1.28 has complexity O(2^n). Figure it out.

1.4.1 An O(n^2) solution

One solution to problem 1.28 uses dynamic programming. Here's the solution! Spoiler alert. Let ℓ(j) be the length of the longest increasing subsequence ending in a_j. Now, if we know ℓ(i) for each i < j, then we will be able to solve for ℓ(j) as follows:

ℓ(j) = max(S_j)     (1.30)

where

S_j = {ℓ(i) + 1 : i < j and a_i < a_j} ∪ {1}.

This is because if k < m and a_k < a_m, then we can extend an increasing subsequence ending in a_k by a_m. With this idea, we are now ready to discuss the algorithm.



Algorithm 1.31 (longest increasing subsequence). lisdp(a, n)

Input: a sequence S = (a_1, . . . , a_n) of length n in a totally-ordered set X.

Output: a function ℓ such that ℓ(i) is the length of the longest increasing subsequence whose last term is a_i, a function p such that a_{p(i)} is the penultimate term of that subsequence (with p(i) = −1 if there is none), and a sequence s_1, . . . , s_{ℓ(m)}, a longest increasing subsequence of S.

• Set ℓ(1) = 1 and p(1) = −1.

• Set m = 1.

• For each j = 2, . . . , n:

  – Set ℓ(j) = 1 and p(j) = −1.

  – For each i = 1, 2, . . . , j − 1:

    * If a_i < a_j and ℓ(i) + 1 > ℓ(j), then let ℓ(j) = ℓ(i) + 1 and set p(j) = i. This solves 1.30.

  – If ℓ(j) > ℓ(m), then set m = j.

• Let t = ℓ(m).

• Let k = m.

• While k ≠ −1:

  – Let s_t = a_k.

  – Decrease t by 1.

  – Let k = p(k).

Implementation 1.32. Here is an implementation of 1.31, in Python. The input array a is 1-indexed (a[0] is unused).

    def lisdp(a, n):
        l = [0] * (n + 1)
        p = [0] * (n + 1)
        l[1] = 1
        p[1] = -1
        m = 1  # index where the longest increasing subsequence so far ends
        for j in range(2, n + 1):
            l[j] = 1
            p[j] = -1
            for i in range(1, j):
                if a[i] < a[j] and l[i] + 1 > l[j]:
                    l[j] = l[i] + 1
                    p[j] = i
            if l[j] > l[m]:
                m = j
        # walk back through p to reconstruct the subsequence
        s = [0] * (l[m] + 1)
        t = l[m]
        k = m
        while k != -1:
            s[t] = a[k]
            t -= 1
            k = p[k]
        return s[1:]
Note that there are two nested for loops. This means that whatever's done in the innermost loop will be repeated 1 + 2 + · · · + (n − 1) = n(n − 1)/2 = O(n^2) times. Constructing the sequence afterwards takes at most n = O(n) operations. Hence, most of the computation happens in the nested for loops, and the algorithm runs in O(n^2) time.
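If only the length of the longest increasing subsequence is needed, the bookkeeping for p and s can be dropped entirely. A minimal sketch (the function name lis_length is ours, and the list is 0-indexed for simplicity):

```python
def lis_length(a):
    # O(n^2) DP: l[j] = length of the longest increasing subsequence ending at a[j]
    n = len(a)
    l = [1] * n
    for j in range(n):
        for i in range(j):
            if a[i] < a[j] and l[i] + 1 > l[j]:
                l[j] = l[i] + 1
    return max(l) if a else 0

# one longest increasing subsequence here is 2, 3, 7, 8, 10, 13
assert lis_length([2, 5, 3, 7, 11, 8, 10, 13, 6]) == 6
```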

1.4.2 A faster solution

While the O(n^2) dynamic programming solution is already impressive, there is an even faster solution. In this new solution, we replace the inner for-loop by something that takes O(log n) time. In algorithm 1.31, we are essentially storing all longest increasing subsequences that end in each of the a_i. Hence, we may have been storing more than one increasing subsequence of length r, for example. Our new idea is to instead store just one increasing subsequence of each length r – the one whose last term is minimal.

Definition 1.33. We define S_{j,r} to be an increasing subsequence of a_1, a_2, . . . , a_j of length r whose last term is minimal. We denote by L_{j,r} the last term of S_{j,r}.

Claim 1.34. For a fixed j, we have L_{j,1} ≤ L_{j,2} ≤ . . . ≤ L_{j,r_j} (where r_j = max{ℓ(1), . . . , ℓ(j)}).

Proof. We prove by contradiction. Suppose there exist a < b such that L_{j,a} > L_{j,b}. Exercise: Find a contradiction. □

Now, if we have the sequences S_{j−1,1}, . . . , S_{j−1,r}, whose last terms are L_{j−1,1}, . . . , L_{j−1,r}, then when a_j arrives we replace with a_j the smallest L_{j−1,t} such that a_j ≤ L_{j−1,t}. If no such term exists, then we append a_j to the longest increasing subsequence we have so far. We acknowledge that this is a bit too much to take in; the animation on the Wikipedia article on the longest increasing subsequence problem demonstrates what we have just said. And so, the new and faster algorithm goes as follows:

Algorithm 1.35 (longest increasing subsequence). lis(a, n)

Input: a sequence S = (a_1, . . . , a_n) of length n in a totally-ordered set X.

Output: a function p such that a_{p(i)} is the penultimate term of an increasing subsequence of maximal length ending at a_i (with p(i) = −1 if there is none), and a sequence s_1, . . . , s_m, a longest increasing subsequence of S.

• Set p(1) = −1.

• Let L be a list (of indices). Add 1 to this list.

• For each j = 2, . . . , n:

  – Use binary search to find the smallest index r such that a_j ≤ a_{L(r)}.

  – If r exists:

    * Set p(j) = L(r − 1) if r > 1, and p(j) = −1 otherwise.

    * Set L(r) = j.

  – If r does not exist:

    * Set p(j) = L(m), where m is the current length of L.

    * Append j to L.

• Let m be the length of L; this is the length of the answer.

• Let t = m.

• Let k = L(m).

• While k ≠ −1:

  – Let s_t = a_k.

  – Decrease t by 1.

  – Let k = p(k).

Implementation 1.36. Here is an implementation of 1.35, in Python. The input array a is 1-indexed (a[0] is unused). You can use a library binary search if you want; here it is written out.

    def lis(a, n):
        p = [0] * (n + 1)
        p[1] = -1
        L = [0, 1]  # L[r] = index of the minimal last term among increasing subsequences of length r
        for j in range(2, n + 1):
            m = len(L) - 1
            # binary search for the smallest r with a[L[r]] >= a[j]
            lo, hi, r = 1, m, -1
            while lo <= hi:
                mid = (lo + hi) // 2
                if a[L[mid]] >= a[j]:
                    r = mid
                    hi = mid - 1
                else:
                    lo = mid + 1
            if r != -1:
                p[j] = L[r - 1] if r > 1 else -1
                L[r] = j
            else:
                p[j] = L[m]
                L.append(j)
        # walk back through p to reconstruct the subsequence
        m = len(L) - 1
        s = [0] * (m + 1)
        t = m
        k = L[m]
        while k != -1:
            s[t] = a[k]
            t -= 1
            k = p[k]
        return s[1:]
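When only the length is needed, Python's bisect module can do the binary search over the list of minimal last terms. A sketch (the function name lis_length_fast is ours, 0-indexed input):

```python
import bisect

def lis_length_fast(a):
    # tails[r] = minimal last value of an increasing subsequence of length r + 1
    tails = []
    for v in a:
        r = bisect.bisect_left(tails, v)   # smallest r with tails[r] >= v
        if r == len(tails):
            tails.append(v)                # v extends the longest subsequence found so far
        else:
            tails[r] = v                   # v is a smaller last term for length r + 1
    return len(tails)

assert lis_length_fast([2, 5, 3, 7, 11, 8, 10, 13, 6]) == 6
```

Since tails stays sorted (Claim 1.34), each of the n iterations costs O(log n), for O(n log n) total.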
Chapter 2

Number Theory

2.1 Modulo

Definition 2.1 (divisibility). We say that a non-zero integer b divides an integer a (written b | a) if there exists q ∈ ℤ such that a = bq. In this case, we say that b is a divisor of a.

Notation 2.2 (congruence modulo m). We write

a ≡ b (mod m)

to mean m | (a − b). We read this as 'a is congruent to b modulo m'.

Definition 2.3. Let m be a positive integer. For an integer a, we denote by ā the set

ā = {b ∈ ℤ : a ≡ b (mod m)}.

We define ℤ_m to be the set of remainders

ℤ_m := {0, 1, . . . , m − 1}.

Definition 2.4 (addition on ℤ_m). Let m be a positive integer. We define addition +_m on the elements of ℤ_m as:

a +_m b = (a + b) (mod m).

Theorem 2.5. Addition on ℤ_m shares some properties with the normal addition + on integers.

1. It is closed.

This means that if you add any two elements of ℤ_m, the result will be in ℤ_m. Indeed:

a +_m b = (a + b) (mod m) = r,

where r is the integer with 0 ≤ r < m such that a + b = qm + r, as in theorem 1.1.

2. It has an additive identity, 0. We sometimes call this the zero element.

This means that a +_m 0 = a for any element a ∈ ℤ_m.

3. Any element a has an additive inverse −a = m − a (with −0 = 0).

This means that a +_m (−a) is equal to the additive identity for any element a ∈ ℤ_m.


Definition 2.6 (multiplication on ℤ_m). Let m be a positive integer. We define multiplication ·_m on the elements of ℤ_m as:

a ·_m b = (a · b) (mod m).

Theorem 2.7. Multiplication on ℤ_m shares some properties with the normal multiplication · on integers.

1. It is closed.

2. It has a multiplicative identity, 1.

This means that a ·_m 1 = a for any element a ∈ ℤ_m.

Notation 2.8. Instead of writing a +_m b and a ·_m b, we write

a + b (mod m) and ab (mod m)

respectively.

Example 2.9. We prove the divisibility rule for 9: an integer x is divisible by 9 if and only if the sum of its digits is divisible by 9.

Proof. Let x = a_d · 10^d + a_{d−1} · 10^{d−1} + . . . + a_1 · 10 + a_0. Note that 10 ≡ 1 (mod 9). Hence, 10^n ≡ 1 (mod 9) for any non-negative integer n. Thus,

x = a_d · 10^d + a_{d−1} · 10^{d−1} + . . . + a_1 · 10 + a_0 ≡ a_d + a_{d−1} + . . . + a_1 + a_0 (mod 9).

This proves the claim. □
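The rule is easy to spot-check numerically (the sample integers below are arbitrary):

```python
# compare divisibility by 9 of each number against divisibility of its digit sum
for x in [9, 81, 12345, 99999999, 123456788]:
    digit_sum = sum(int(d) for d in str(x))
    assert (x % 9 == 0) == (digit_sum % 9 == 0)
```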

2.2 Prime Factorization

There are many algorithms to determine the prime factors of numbers. Most of these involve choosing random integers along the way, which makes them unappealing for programming contests. And so, for most problems in programming contests, only small numbers need to be factored in order to complete the algorithm. As these algorithms are very much standard, they will probably be used in conjunction with other algorithms. Hence, it is vital to know these basic number-theoretic algorithms.

Definition 2.10. A prime number p is an integer greater than 1 that has no positive integer divisors other than 1 and p itself.

Algorithm 2.11 (Sieve of Eratosthenes). sieve(N)

Input: an integer N.

Output: an array primeBa of length N such that for any n ≤ N, primeBa[n] = 1 if n is prime and primeBa[n] = 0 otherwise.

1. Initialize the elements of primeBa to 1.

2. Let primeBa[1] = 0.

3. For each p = 2, 3, 4, 5, . . . , N:

   • If primeBa[p] is equal to 0, continue to the next value of p.

   • For each integer k > 1 such that kp ≤ N, set primeBa[k*p] = 0.

4. Return primeBa.

Remark 2.12. The running time of this algorithm is O(N log log N). In the algorithm, we make around

N/2 + N/3 + N/5 + N/7 + N/11 + . . .

operations. And we use the fact (which we will not prove) that

Σ_{p prime, p ≤ N} 1/p

is approximately equal to log log N when N is very large.

Implementation 2.13. Here is an implementation of algorithm 2.11, in Python.

    def sieve(N):
        primeBa = [1] * (N + 1)
        primeBa[0] = 0
        primeBa[1] = 0
        for p in range(2, N + 1):
            if primeBa[p] == 0:
                continue
            for i in range(2, N // p + 1):  # i*p <= N
                primeBa[i * p] = 0
        return primeBa

Theorem 2.14 (fundamental theorem of arithmetic). Any positive integer n > 1 can be expressed as

n = p_1^{e_1} · p_2^{e_2} · · · p_t^{e_t}     (2.15)

where the p_i are distinct prime factors and the e_i are positive integers.

Remark 2.16. We call the expression in 2.15 the prime factorization of n .

Remark 2.17. Algorithm 2.11 can be modified to find the prime factorization of all integers below N .
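One common way to do this modification, sketched here under our own naming: store the smallest prime factor of every integer up to N, from which a factorization falls out by repeated division.

```python
def spf_sieve(N):
    # spf[n] = smallest prime factor of n, computed sieve-style
    spf = list(range(N + 1))
    for p in range(2, N + 1):
        if spf[p] == p:                        # p is prime
            for k in range(p * p, N + 1, p):
                if spf[k] == k:                # record p only if k has no smaller factor yet
                    spf[k] = p
    return spf

def factorize(n, spf):
    # peel off the smallest prime factor until n reaches 1
    factors = []
    while n > 1:
        factors.append(spf[n])
        n //= spf[n]
    return factors

spf = spf_sieve(100)
assert factorize(60, spf) == [2, 2, 3, 5]
assert factorize(97, spf) == [97]
```

After the O(N log log N) precomputation, each factorization of an n ≤ N takes only O(log n) divisions.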

Definition 2.18 (greatest common divisor). The greatest common divisor of two integers a and b is the largest integer d such that d | a and d | b.

Theorem 2.19. Let

a = p_1^{e_1} p_2^{e_2} · · · p_t^{e_t} and b = p_1^{f_1} p_2^{f_2} · · · p_t^{f_t}

be the prime factorizations of two positive integers a and b (some of the e_i and f_i may be zero). The greatest common divisor, gcd(a, b), of a and b is:

gcd(a, b) = p_1^{min{e_1, f_1}} p_2^{min{e_2, f_2}} · · · p_t^{min{e_t, f_t}}.

Theorem 2.20. Here are some properties concerning the greatest common divisor:

1. gcd(a, 0) = a.

2. If 0 < b < a, then gcd(a, b) = gcd(b, a − b).

3. If 0 < b < a, then gcd(a, b) = gcd(b, r) for r such that r ≡ a (mod b) and 0 ≤ r < b.

Optional Exercise 2.21. Prove theorem 2.20. Hint: Use definition 2.1 when appropriate.

From theorem 2.20, we come up with this simple algorithm to find the greatest common divisor of two integers a and b:

Algorithm 2.22 (Euclidean algorithm). gcd(a, b)

Input: two non-negative integers a and b.

Output: d = gcd(a, b).

1. If a < b, return gcd(b, a).

2. If b = 0, return a.

3. Return gcd(a%b, b).

Remark 2.23. The complexity of algorithm 2.22 is O(log min(a, b)).

Implementation 2.24. Here is an implementation of algorithm 2.22, in Python.

    def gcd(a, b):
        if a < b:
            return gcd(b, a)
        if b == 0:
            return a
        return gcd(a % b, b)
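A frequent companion of the gcd in contests is the least common multiple, which follows from the identity lcm(a, b) · gcd(a, b) = a · b for positive integers. A sketch using Python's built-in math.gcd (dividing before multiplying to keep intermediate values small):

```python
from math import gcd

def lcm(a, b):
    # lcm(a, b) * gcd(a, b) == a * b for positive integers a, b
    return a // gcd(a, b) * b

assert lcm(4, 6) == 12
assert lcm(7, 13) == 91
```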
Chapter 3

Combinatorics

Problem 3.1. How many ways can one arrange n distinct objects in a row of n slots?

There are

n! = n · (n − 1) · (n − 2) · · · · · 3 · 2 · 1

ways to arrange n distinct objects in a row. You have n choices for the first slot. Once you've set that, you will have n − 1 choices for the next, and then n − 2, and so on until you have 1 object left for the n-th slot.

Problem 3.2. How many ways can one arrange n distinct objects in a row of k slots, where k ≤ n?

You will have n choices for the first slot, n − 1 for the second, all the way down to the k-th slot, which will have n − k + 1. Hence, you will have

n · (n − 1) · (n − 2) · · · · · (n − k + 1)

ways. You can also get the same answer by first considering a solution for the first problem, and then taking note that the order of the last n − k elements does not matter (because they will not be in the row anyway). So you can jumble them up in whatever order you like. And from the answer to problem 3.1, there are (n − k)! ways to jumble them up. So with the original answer n!, you treat the (n − k)! arrangements of the last n − k items as 1. And so, you get the same answer

n!/(n − k)! = n · (n − 1) · (n − 2) · · · · · (n − k + 1).

Problem 3.3. How many ways can one choose k different objects from n distinct objects, where k ≤ n?

Starting from the answer to problem 3.2, we are now allowed to jumble the first k objects as well. That means the k! different arrangements of the first k items will be treated as 1. Hence, writing C(n, k) for the number of ways to choose k objects from n, we end up with the formula:

C(n, k) = n! / (k! (n − k)!).

Remark 3.4. Zero factorial is equal to the empty product. That is, 0! = 1.

While the solutions presented in the above problems are intuitive, we have another way to approach these kinds of problems. Take for example problem 3.3. Let's abbreviate problem 3.3 as P(n, k). This denotes the problem of choosing k different objects from n distinct objects. If we had a solution for P(n − 1, k − 1), then we can add an n-th element as a chosen one. We could also take a solution for P(n − 1, k). In this case, we can add an n-th element as an element which is not chosen (since the solution for P(n − 1, k) already has k chosen elements). Hence, we have the relation

C(n, k) = C(n − 1, k − 1) + C(n − 1, k).     (3.5)


This value is called the binomial coefficient. These numbers arise in different contexts. For example, the polynomial obtained by raising x + 1 to the n-th power will be equal to

(x + 1)^n = C(n, n) x^n + C(n, n−1) x^{n−1} + C(n, n−2) x^{n−2} + . . . + C(n, 2) x^2 + C(n, 1) x + C(n, 0).

Remark 3.6. Here are some properties of the binomial coefficient.

1. For 0 ≤ k ≤ n, we have C(n, k) = C(n, n − k).

2. For 0 < k < n with n prime, C(n, k) is divisible by n.
Optional Exercise 3.7. What is the remainder when 42^69 + 26^69 is divided by 69?

Optional Exercise 3.8. There are also a lot of other variations of counting problems. Here are some of them. Try using the answers to the previous problems, or use a similar technique to arrive at the answer.

1. How many distinct strings can you obtain by rearranging the letters of the string BEBEBIBIGURL?

2. How many ways can you seat yourself and N − 1 other people in a row if you insist on sitting with your one true love?

3. How many ways can you seat yourself and N − 1 other people in a row if your one true love insists on sitting at least one seat away from you?

4. Let S be a positive integer. How many positive integer solutions does x_1 + x_2 + . . . + x_n = S have?

5. Let S be a positive integer. How many non-negative integer solutions does x_1 + x_2 + . . . + x_n = S have?

6. How many ways can you assign n different pigeons to k different holes?

7. How many ways can you arrange n pairs of ( and ) in a string such that each ( is paired up with a ) somewhere on its right?

Remark 3.9. One can implement a function which returns the binomial coefficient by means of a recursive function. If memory allows, one can do better by remembering all the results of the computations (memoization) to avoid solving for the same thing twice.
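A sketch of this memoized approach, using recurrence (3.5) and Python's functools.lru_cache to remember previous results (the function name binom is ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def binom(n, k):
    # Pascal's rule C(n, k) = C(n-1, k-1) + C(n-1, k), with memoization
    if k < 0 or k > n:
        return 0
    if k == 0 or k == n:
        return 1
    return binom(n - 1, k - 1) + binom(n - 1, k)

assert binom(5, 2) == 10
assert binom(10, 5) == 252
```

With memoization each pair (n, k) is computed once, so filling the table up to n costs O(n^2) additions instead of the exponential blow-up of the plain recursion.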
Chapter 4

Exercises

Send an email on or before 11:59PM of 2 April 2017 (Sunday) to [email protected] containing (at least) the following:

• Your Codeforces, UVa, HackerRank, and ProjectEuler usernames.

• Links to the source code you submitted for the required Codeforces problems.

• Code (attached) used for the required ProjectEuler and UVa problems.

• Text, PDF or photo (attached) of all required proofs.

4.1 Required Exercises

1. Proof of 1.34

2. CF 215A: Points on Line

3. Project Euler 133: Repunit nonfactors

4. UVa 612: DNA Sorting

5. UVa 10113: Exchange Rates

6. UVa 10534: Wavio Sequence

7. UVa 10665: Contemplation! Algebra

8. UVa 13083: Yet Another GCDSUM

9. At least one of:

(a). UVa 10247: Complete Tree Labeling (BigInt)

(b). HackerRank Mirror: Complete Tree Labeling (Mod)


4.2 Optional Exercises

• UVa 11378: Bey Battle

• CF 294C: Shaass and Lights

• CF 327C: Magic Five

• CF 577B: Modulo Sum

• CF 272D: Dima and Two Sequences

• Project Euler 147: Rectangles in cross-hatched grids

• UVa 10229: Modular Fibonacci

• UVa 12192: Grapevine
