
extra-class-2

January 2, 2024

0.1 CMPE 150 Extra Classes 2



0.2 Topic 1) Recursive functions (Dynamic Programming):


0.2.1 Understanding Recursive Functions
Definition: A recursive function is a function that calls itself during its execution. This enables
the function to repeat its behavior until a certain condition is met, which is known as the base case.

Key Components:
1. Base Case: The condition under which the recursion stops. It prevents infinite loops by
providing a simple, non-recursive exit.
2. Recursive Case: The part of the function that includes the recursive call, allowing the
function to break down complex problems into simpler ones.

0.2.2 Advantages of Recursive Functions:


1. Simplifies Code: Particularly useful for solving complex problems that can be broken down
into simpler, identical sub-problems.
2. Intuitive: Makes algorithms like tree traversals and divide-and-conquer strategies more readable and intuitive.
3. Efficiency: Combined with memoization (dynamic programming), recursion can reduce the time complexity from exponential to polynomial in many cases.

0.2.3 Disadvantages:
1. Overhead: Each recursive call adds a new layer to the call stack, which can lead to significant
overhead.
2. Risk of Infinite Loops: If the base case is not properly defined, it can result in infinite
recursion, leading to a stack overflow.
( Overhead: any combination of excess or indirect computation time, memory, bandwidth, or other
resources that are required to perform a specific task)


0.2.4 Relationship to Soft Induction (Weak Induction)
Soft induction, also known as weak induction, is a method of mathematical proof used to demonstrate the truth of statements for all natural numbers. It is based on two key components:
1. Base Case: Prove that the statement P(k) holds for an initial value k, usually k = 0 or k = 1.
2. Inductive Step: Assume that P(n) is true for some arbitrary natural number n ≥ k (the inductive hypothesis). Then show that, under this assumption, P(n+1) is also true.
By demonstrating both the base case and the inductive step, we can conclude that P(n) holds for all natural numbers n ≥ k by the principle of soft induction.

MAIN IDEA: INSTEAD OF SOLVING THE PROBLEM OF SIZE N, ASSUME YOU CAN
SOLVE SIZE N-1 AND BUILD AN ALGORITHM ON TOP OF THAT.
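A minimal sketch of that idea (recursive_sum is an illustrative example, not from the class): to sum a list of size n, assume the sum of the first n-1 elements is already solved and build on top of it.

```python
def recursive_sum(numbers):
    # Base case: the sum of an empty list is 0
    if not numbers:
        return 0
    # Recursive case: assume the first n-1 elements are already summed,
    # then build on top of that by adding the last element
    return recursive_sum(numbers[:-1]) + numbers[-1]

print(recursive_sum([1, 2, 3, 4, 5]))  # 15
```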

0.2.5 Example 1: Factorial Calculation


Problem: Calculate the factorial of a number, where the factorial of n (denoted as n!) is the
product of all positive integers less than or equal to n.
[61]: def bad_factorial(n):
          k = 1
          for i in range(1, n + 1):
              k = k * i
          return k

[62]: bad_factorial(5)

120


[56]: def factorial(n: int) -> int:
          # Base case: 0! and 1! are both 1
          if n <= 1:
              return 1
          # Recursive case: n! = n * (n-1)!
          else:
              return n * factorial(n - 1)

      # Example usage
      print(factorial(5))  # Output: 120
      print(factorial(9))  # Output: 362880

120
362880


0.2.6 Example 2: Fibonacci Sequence


Problem: Generate the nth number in the Fibonacci sequence, where each number is the sum of
the two preceding ones, starting from 0 and 1.
0 1 1 2 3 5 8 13 21 34 …
[64]: def horrible_fibonacci(n):
          x1 = 1
          x2 = 1

          if n == 1:
              return x1
          if n == 2:
              return x2

          my_list = [1, 1]
          for i in range(3, n + 1):
              new_number = my_list[i - 1 - 1] + my_list[i - 2 - 1]
              my_list.append(new_number)

          return my_list[n - 1]

[65]: horrible_fibonacci(7) , horrible_fibonacci(5) , horrible_fibonacci(9)

[65]: (13, 5, 34)

[67]: def fibonacci(n):
          # Base cases
          if n == 0:
              return 0
          elif n == 1:
              return 1
          # Recursive case
          else:
              return fibonacci(n - 1) + fibonacci(n - 2)

      # Example usage
      print(fibonacci(10))  # Output: 55

55


Top-Down (Memoization):
This approach involves writing the recursive algorithm and storing the results of subproblems in a
table (generally using a hash table or array).

[69]: def fibonacci_memo(n, memo={}):
          # Note: the mutable default argument makes memo persist across calls
          if n in memo:
              return memo[n]
          if n <= 1:
              return n
          memo[n] = fibonacci_memo(n - 1, memo) + fibonacci_memo(n - 2, memo)
          return memo[n]

[73]: fibonacci_memo(102)

[73]: 927372692193078999176


Bottom-Up (Tabulation):
This approach involves filling a dynamic programming table by solving and storing the results
of smaller subproblems first, which are then used to solve larger subproblems.
[74]: def fibonacci_tabulation(n):
if n <= 1:
return n
fib_table = [0, 1]
for i in range(2, n + 1):
fib_table.append(fib_table[i-1] + fib_table[i-2])
return fib_table[n]


0.2.7 Example 3: Tower of Hanoi


Problem: The Tower of Hanoi is a classic problem that can be solved using recursion.
The function moves n disks from a source to a target using an auxiliary peg.
The base case is when there’s only one disk to move.
The recursive case moves n-1 disks to the auxiliary peg, moves the nth disk to the target, and then
moves the n-1 disks from the auxiliary to the target.
[10]: def tower_of_hanoi(n, source, target, auxiliary):
          if n == 1:
              print(f"Move disk 1 from {source} to {target}")
              return
          tower_of_hanoi(n - 1, source, auxiliary, target)
          print(f"Move disk {n} from {source} to {target}")
          tower_of_hanoi(n - 1, auxiliary, target, source)

      # Example usage
      tower_of_hanoi(3, 'A', 'C', 'B')

Move disk 1 from A to C


Move disk 2 from A to B
Move disk 1 from C to B
Move disk 3 from A to C
Move disk 1 from B to A
Move disk 2 from B to C
Move disk 1 from A to C
In total, 2^n - 1 moves are needed.
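The 2^n - 1 count follows from the recurrence moves(n) = 2 * moves(n-1) + 1: move n-1 disks aside, move the largest disk, move the n-1 disks back. A small sketch verifies it (count_moves is an illustrative helper, not part of the original notebook):

```python
def count_moves(n):
    # Moves made by the recursive solution for n disks
    if n == 1:
        return 1
    return 2 * count_moves(n - 1) + 1

for n in range(1, 11):
    assert count_moves(n) == 2**n - 1

print(count_moves(3))  # 7, matching the printout above
```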
Let's watch the following animation:
[75]: !python hanoi5.py

Move disc from A to C!


Move disc from A to B!
Move disc from C to B!
Move disc from A to C!
Move disc from B to A!
Move disc from B to C!
Move disc from A to C!
Move disc from A to B!
Move disc from C to B!
Move disc from C to A!
Move disc from B to A!
Move disc from C to B!
Move disc from A to C!
Move disc from A to B!
Move disc from C to B!
Move disc from A to C!
Move disc from B to A!
Move disc from B to C!
Move disc from A to C!
Move disc from B to A!
Move disc from C to B!
Move disc from C to A!
Move disc from B to A!
Move disc from B to C!
Move disc from A to C!
Move disc from A to B!
Move disc from C to B!

Move disc from A to C!
Move disc from B to A!
Move disc from B to C!
Move disc from A to C!


0.2.8 Example 4: Binary Search


Problem: Implement binary search on a sorted array to find the position of a given element.
Key words: half-interval search, logarithmic search or binary chop

[76]: def binary_search(arr, target_value, low=0, high=None):
          # Initialize high the first time the function is called
          if high is None:
              high = len(arr) - 1

          # Base case: the element is not present
          if low > high:
              return -1

          # Find the middle index
          mid = (low + high) // 2

          # Check if the middle element is the target value
          if arr[mid] == target_value:
              return mid
          # If the target value is smaller than the middle element,
          # continue the search in the left subarray
          elif arr[mid] > target_value:
              return binary_search(arr, target_value, low, mid - 1)
          # If the target value is larger than the middle element,
          # continue the search in the right subarray
          else:
              return binary_search(arr, target_value, mid + 1, high)

[77]: # Example usage
      arr = [2, 3, 4, 10, 40]
      x = 10
      result = binary_search(arr, x)
      print(f"Element is present at index {result}")  # Output: Element is present at index 3

Element is present at index 3

[79]: # Example usage
      arr = [2, 3, 4, 10, 40]
      x = 12
      result = binary_search(arr, x)
      print(f"Element is present at index {result}")  # Output: Element is present at index -1

Element is present at index -1

0.3 Topic 2) Introduction to the Two-Pointer Technique


The two-pointer technique is a common method for simplifying and optimizing certain types of
algorithmic problems. It involves maintaining two pointers (which can be array indices, list nodes,
etc.) and moving them in a specific manner to solve a problem.

When to Use Two Pointers


• Arrays or Linked Lists: Ideal for problems involving sequences like arrays or linked lists.
• Searching Pairs: Useful in problems where you need to find pairs that satisfy certain criteria
(e.g., sum to a certain value).
• Efficiency: Helps achieve linear time complexity, reducing the need for nested loops.

Types of Two-Pointer Techniques


1. Opposite Direction: Pointers start at opposite ends and move towards each other.
2. Same Direction: Pointers move in the same direction, one ahead of the other.
Opposite Direction: 1 4 2 6 1 3 6 9 0 1, with pointer a at the left end and pointer b at the right end.
Same Direction: 1 4 2 8 3 6 2 2 1 -1, with pointers i and j both moving left to right (j trailing i).

0.3.1 Example Problems


Example 1: Pair with Given Sum in an Array

Problem Statement: Find if there’s a pair of elements in a sorted array that sum up to a given
value.

Solution:
• Pointers start at opposite ends of the array.
• Move the left pointer right or the right pointer left depending on the sum of the current pair.

Example array: 1 4 6 8 11 14 21, with pointer a at the left end and pointer b at the right end.
[81]: def bad_target_pair_search(arr, target):
          n = len(arr)
          for i in range(n):
              for j in range(i + 1, n):  # start at i + 1 so an element is not paired with itself
                  if arr[i] + arr[j] == target:
                      return True
          return False

[87]: bad_target_pair_search([1, 2, 3, 4, 5], 11)

[87]: False

It has O(n²) time complexity, so it is slow.

[ ]:

[52]: def find_pair_with_sum(arr, target_sum):
          left, right = 0, len(arr) - 1

          while left < right:
              current_sum = arr[left] + arr[right]
              if current_sum == target_sum:
                  return True
              elif current_sum < target_sum:
                  left += 1
              else:
                  right -= 1

          return False

      # Example Usage
      print(find_pair_with_sum([1, 2, 3, 4, 5], 9))  # Output: True

True
It has O(n) time complexity.


Example 2: Remove Duplicates from Sorted Array

Problem Statement: Given a sorted array, remove the duplicates in-place such that each element appears only once.

Solution:
• Use two pointers moving in the same direction.
• One pointer tracks the position for replacing elements; the other iterates through the array.
[18]: def remove_duplicates(arr):
          if not arr:
              return 0

          # First pointer: j points to the last unique element's index
          j = 0

          # Iterate through the array with the second pointer: i
          for i in range(1, len(arr)):
              # Check if the current element is different from the last unique element
              if arr[i] != arr[j]:
                  # Increment the first pointer (j) as a new unique element is found
                  j += 1
                  # Place the new unique element at the next position in the array
                  arr[j] = arr[i]

          # The length of the array without duplicates is j + 1
          return j + 1

      # Example Usage
      print(remove_duplicates([0, 0, 1, 1, 1, 2, 2, 3, 3, 4]))  # Output: 5


Example 3: Remove Element In-Place

Problem Statement: Given an array nums and a value val, remove all instances of that value
in-place and return the new array. The order of elements can be changed, and you must modify
the array in-place with O(1) extra memory.

Solution Using Two-Pointer Technique:


• The solution involves using two pointers: one (i) to iterate through the array, and the other
(j) to keep track of the position to place the next element that is not equal to val.

Example: Input: nums = [3, 2, 2, 3], val = 3


Output: 2, with nums modified to [2, 2]

Explanation: The function should remove all instances of val from nums. The first j positions
hold the kept elements, so elements beyond that new length are irrelevant; here the function
returns the trimmed array for clarity.
[34]: def remove_element_in_place(nums, val):
          j = 0
          for i in range(len(nums)):
              if nums[i] != val:
                  nums[j] = nums[i]
                  j += 1
          # Elements beyond j are irrelevant
          return nums[:j]

[35]: # Example Usage
      nums = [3, 2, 2, 3]
      new_nums = remove_element_in_place(nums, 3)
      print(new_nums)  # Output: [2, 2]

[2, 2]


0.4 Topic 3) Improving your algorithms


0.4.1 Prime Counting Methods Complexity Analysis
Step 1: Basic Prime Counting Method

Time Complexity:
• The is_prime_basic function checks every number from 2 to n-1 to see if n is prime, taking O(n) time.
• The count_primes_basic function calls is_prime_basic for each number from 2 to N; each call costs up to O(N), so the total time complexity is O(N²).

Space Complexity:
• O(1) - Constant space, as no extra space is proportional to the input size.

Step 2: Improved Prime Checking Using Square Root

Time Complexity:
• The is_prime_improved function checks every number from 2 to the square root of n to see
if n is prime, taking O(√n) time.
• The count_primes_improved function calls is_prime_improved for each number from 2 to
N, resulting in an approximate total time complexity of O(N * √n).

Space Complexity:
• O(1) - Constant space, as the improvement does not require additional space.

Step 3: Sieve of Eratosthenes

Time Complexity:
• The sieve iterates over each number up to N and marks its multiples as non-prime. The inner
loop runs approximately N/i times for each i, summing over all i to O(N log log N).
• This is significantly more efficient than the previous methods for large N.

Space Complexity:
• O(N) - An array of size N + 1 is used to keep track of prime numbers.

Conclusion
• The basic method has a quadratic time complexity, making it highly inefficient for large N.
• The improved method using the square root check has a better time complexity but is still
not efficient for very large N.
• The Sieve of Eratosthenes offers the best time efficiency, with near-linear O(N log log N) complexity, although it requires linear space.
[95]: def is_prime_basic(n):
if n <= 1:
return False
for i in range(2, n):
if n % i == 0:
return False
return True

def count_primes_basic(N):
count = 0
for i in range(2, N + 1):
if is_prime_basic(i):
count += 1
return count

# Example Usage
print(count_primes_basic(1000))  # Output: 168

168

[96]: import math

def is_prime_improved(n):
if n <= 1:
return False
for i in range(2, int(math.sqrt(n)) + 1):
if n % i == 0:
return False
return True

def count_primes_improved(N):
    count = 0
    for i in range(2, N + 1):
        if is_prime_improved(i):
            count += 1
    return count

# Example Usage
print(count_primes_improved(10))  # Output: 4

[97]: def count_primes_sieve(N):
          if N < 2:
              return 0
          is_prime = [True] * (N + 1)
          is_prime[0] = is_prime[1] = False

          for i in range(2, int(math.sqrt(N)) + 1):
              if is_prime[i]:
                  for j in range(i*i, N + 1, i):
                      is_prime[j] = False

          return sum(is_prime)

      # Example Usage
      print(count_primes_sieve(10))  # Output: 4

4
How the Sieve of Eratosthenes Works:

1. Handle Edge Cases: If N is less than 2, return 0, as there are no prime numbers less than 2.
2. Initialize the is_prime List: Create a list is_prime of length N + 1 and set all elements to True, indicating that initially all numbers are considered prime. Explicitly set the first two elements (is_prime[0] and is_prime[1]) to False, as 0 and 1 are not prime numbers.
3. Implement the Sieve: Iterate over each number i from 2 up to the square root of N. The square root is used as an optimization, since any non-prime number greater than the square root of N will have already been marked as non-prime by its smaller factors. If is_prime[i] is True, then i is prime, and we mark all multiples of i as non-prime (from i*i to N, in steps of i), because any multiple of i cannot be prime.
4. Count and Return the Primes: After the sieve runs, every prime number up to N has True at its index in the is_prime list. Summing the True values gives the count of primes up to N.
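As a quick sanity check (a small sketch; sieve_table is an illustrative helper that mirrors the steps above), the boolean table for N = 10 keeps exactly 2, 3, 5, and 7:

```python
import math

def sieve_table(N):
    # Build the is_prime table exactly as described in the steps above
    is_prime = [True] * (N + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(math.sqrt(N)) + 1):
        if is_prime[i]:
            for j in range(i * i, N + 1, i):
                is_prime[j] = False
    return is_prime

print([n for n, prime in enumerate(sieve_table(10)) if prime])  # [2, 3, 5, 7]
```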
Why square root?
Explaining Looping Until Square Root of n in Prime Checking:
Fundamental Property of Factors: When checking if a number n is prime, we rely on the fact that
any non-prime number must have at least one factor (other than 1 and itself) that is less than or
equal to its square root. This is based on the following reasoning:
1. Factor Pairs: Every number n can be expressed as a product of two factors, say a and b,
such that n = a * b.
2. Factor Comparison: If both a and b were greater than the square root of n, then their
product a * b would be greater than n. This is because sqrt(n) * sqrt(n) = n, and if
both a and b are greater, a * b would exceed n.
3. Converse Implication: If n is not a prime number (meaning it has factors other than 1
and itself), then at least one of its factors must be less than or equal to sqrt(n).
Why Loop Until Square Root:
• Efficiency: Since if n is non-prime, it must have a factor less than or equal to sqrt(n), we
only need to check divisibility for numbers up to sqrt(n). If n is not divisible by any number
in this range, it cannot be divisible by larger numbers, and hence it is prime.
• Reduces Complexity: This significantly reduces the number of iterations in the loop,
especially for large numbers, transforming a potentially O(n) process into O(√n), which is a
substantial improvement.
Example:
• Consider n = 100. Its square root is 10.
• Factors of 100 are (1, 100), (2, 50), (4, 25), (5, 20), and (10, 10).
• Notice that for every factor greater than 10 there is a corresponding factor no greater than 10. Therefore, checking beyond 10 is redundant.
Conclusion:
Looping up to the square root of n is both sufficient and efficient for checking prime numbers.
It ensures that all possible factors are covered, while significantly reducing the computational
workload, particularly for large n. This method is a great example of how understanding the
properties of numbers can lead to more efficient algorithms.
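The factor-pair argument is easy to check empirically. The sketch below (has_small_factor is an illustrative name) confirms that every composite number up to 1000 has a factor no larger than its square root:

```python
import math

def has_small_factor(n):
    # Look for a factor of n in the range 2..sqrt(n)
    return any(n % i == 0 for i in range(2, int(math.sqrt(n)) + 1))

# Every composite number up to 1000 has such a small factor
for n in range(4, 1001):
    if any(n % i == 0 for i in range(2, n)):  # n is composite
        assert has_small_factor(n)

print(has_small_factor(100), has_small_factor(97))  # True False
```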

Let's time these functions:


[98]: import time

      # Timing the performance of each function with N = 100000
      N = 100000

      # Timing Basic Method
      start_time = time.time()
      count_primes_basic(N)
      basic_duration = time.time() - start_time

      # Timing Improved Method
      start_time = time.time()
      count_primes_improved(N)
      improved_duration = time.time() - start_time

      # Timing Sieve Method
      start_time = time.time()
      count_primes_sieve(N)
      sieve_duration = time.time() - start_time

      basic_duration, improved_duration, sieve_duration

[98]: (8.493777751922607, 0.05623483657836914, 0.0025758743286132812)

[28]: 8.90213680267334 / 0.04478573799133301, 0.04478573799133301 / 0.002756834030151367

[28]: (198.7716894247917, 16.245351552365303)


0.5 Topic 4) Linked Lists


IMPORTANT!!! : This data structure is significant. Although we won’t cover it this semester
due to time constraints, it’s a valuable asset to have in your arsenal for future endeavors.

0.5.1 Linked List as a Data Structure


A linked list is a linear data structure where each element is a separate object referred to as a node.
Each node contains the data and a reference to the next node in the sequence.

Components of a Linked List


1. Node: The fundamental part of a linked list which contains:
• Data: The value or data that is stored in the node.
• Next: A pointer or reference to the next node in the list.
2. Head: The first node in a linked list.
3. Tail: The last node in a linked list, which points to None (null in other languages), indicating the end of the list.

Types of Linked Lists


1. Singly Linked List: Each node has one pointer to the next node.

2. Doubly Linked List: Each node has two pointers, one to the next node and another to the
previous node, allowing for easier bidirectional traversal.
3. Circular Linked List: The last node points back to the first node or any other node in the
list, making the list a circle of nodes.

Advantages of Linked Lists


1. Dynamic Size: Unlike arrays, linked lists are dynamic and can grow or shrink in size without
the need to reallocate or reorganize the entire data structure.
2. Ease of Insertion/Deletion: Adding or removing elements is relatively straightforward
and doesn’t require shifting elements as in an array.

Disadvantages of Linked Lists


1. Memory Usage: Each element in a linked list requires extra memory for the reference or
pointer.
2. Sequential Access: Elements cannot be accessed randomly; you must sequentially follow
the links, which can be time-consuming for large lists.
3. Complexity: More complex to implement and manage compared to arrays, especially for
doubly and circular linked lists.

Basic Operations
1. Traversal: Access each element of the linked list.
2. Insertion: Add elements to the list.
3. Deletion: Remove elements from the list.
4. Search: Find an element in the list.
5. Update: Change the value of an existing element.

Use Cases
• Linked lists are used in situations where efficient insertion and deletion of elements are required and memory allocation is dynamic.
• They are fundamental in creating more complex data structures like stacks, queues, graphs,
and more.
Understanding linked lists is crucial for anyone delving into data structures and algorithms as they
form the basis of many complex structures and operations in computer science.
[62]: class Node:
          def __init__(self, data):
              self.data = data
              self.next = None

      class LinkedList:
          def __init__(self):
              self.head = None

          def printList(self):
              temp = self.head
              while temp:
                  print(temp.data, end=" ")
                  temp = temp.next

          def append(self, new_data):
              new_node = Node(new_data)
              if self.head is None:
                  self.head = new_node
                  return
              last = self.head
              while last.next:
                  last = last.next
              last.next = new_node

      # Other methods like deleteNode, search, etc., can be added here
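As a hedged sketch of what those extra methods could look like (deleteNode and search follow the naming in the comment above but are illustrative implementations; Node and LinkedList are repeated so the example is self-contained):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def append(self, new_data):
        new_node = Node(new_data)
        if self.head is None:
            self.head = new_node
            return
        last = self.head
        while last.next:
            last = last.next
        last.next = new_node

    def search(self, key):
        # Walk the list until key is found or the end is reached
        temp = self.head
        while temp:
            if temp.data == key:
                return True
            temp = temp.next
        return False

    def deleteNode(self, key):
        # Remove the first node holding key, if any
        temp = self.head
        if temp and temp.data == key:   # key is at the head
            self.head = temp.next
            return
        prev = None
        while temp and temp.data != key:
            prev = temp
            temp = temp.next
        if temp:                        # key found after the head
            prev.next = temp.next

# Example usage
ll = LinkedList()
for v in [1, 2, 3]:
    ll.append(v)
ll.deleteNode(2)
print(ll.search(2), ll.search(3))  # False True
```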


[63]: class Node:
          def __init__(self, data):
              self.data = data
              self.next = None

      def reverse_linked_list(head):
          prev = None
          current = head
          while current:
              next_node = current.next  # Save next node
              current.next = prev       # Reverse the link
              prev = current            # Move prev up
              current = next_node       # Move current up
          return prev  # New head of the reversed list

      # Helper functions to create and print the list (for demonstration)
      def print_list(node):
          while node:
              print(node.data, end=" ")
              node = node.next
          print()

      def append_to_list(head, data):
          if head is None:
              return Node(data)
          else:
              head.next = append_to_list(head.next, data)
              return head

      # Example Usage
      head = None
      for value in [1, 2, 3, 4, 5]:
          head = append_to_list(head, value)

      print("Original List:")
      print_list(head)

      head = reverse_linked_list(head)
      print("Reversed List:")
      print_list(head)

Original List:
1 2 3 4 5
Reversed List:
5 4 3 2 1


0.6 Topic 5) Sorting algorithms


Sadly, we don’t have time this semester….

0.7 Topic 6) Deep vs Shallow Copy


0.7.1 Deep Copy vs Shallow Copy
Understanding the difference between deep and shallow copying is crucial in programming, especially when working with complex data structures. Here’s a breakdown of the two concepts:

Shallow Copy A shallow copy creates a new object but does not create copies of nested objects.
Instead, it copies the references to those objects. Changes made in nested objects of a shallow
copied object will be reflected in the original object and vice versa.
• Python Implementation: Use the copy() method or the copy module’s copy() function
for a shallow copy.
• Implications: Any changes made to mutable objects in the shallow copy will affect the
original object since the nested objects are not actually copied.

[89]: # Example of Shallow Copy in Python:
      import copy

      original_list = [[1, 2, 3], [4, 5, 6]]

      shallow_copied_list = copy.copy(original_list)

      shallow_copied_list[0][0] = "Changed!"

      print(shallow_copied_list)
      print(original_list)  # The original list is affected.

[['Changed!', 2, 3], [4, 5, 6]]


[['Changed!', 2, 3], [4, 5, 6]]


Deep Copy A deep copy creates a new object and recursively adds the copies of nested objects
present in the original. This means it copies all objects recursively, duplicating everything. A deep
copy doesn’t reflect changes made to the new copied object in the original object and vice versa.
• Python Implementation: Use the copy module’s deepcopy() function for a deep copy.
• Implications: The deep copy is completely independent of the original object, including all
nested objects.
[90]: import copy

      original_list = [[1, 2, 3], [4, 5, 6]]

      deep_copied_list = copy.deepcopy(original_list)

      deep_copied_list[0][0] = "Changed Deeply!"

      print(original_list)  # The original list remains unchanged.
      print(deep_copied_list)

[[1, 2, 3], [4, 5, 6]]


[['Changed Deeply!', 2, 3], [4, 5, 6]]


Key Differences
• References: In a shallow copy, the copied object itself is a new object, but the elements within it are references to the same objects as in the original.
• Recursion: A deep copy recursively copies all nested objects, making it completely independent of the original.
• Performance: Deep copying is generally slower and consumes more memory, as it involves creating copies of every nested object.
When to Use Which
• Shallow Copy: When you want a new container object whose elements still reference the originals, and shared changes to nested objects are acceptable.
• Deep Copy: When you need a completely independent copy of an object and all its nested objects.
Understanding these copying mechanisms is crucial for effective memory management and predictable program behavior, especially in languages like Python, where object references and mutable types are common.
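The difference is easiest to see by comparing plain assignment, copy.copy, and copy.deepcopy side by side (a small sketch using identity checks):

```python
import copy

original = [[1, 2], [3, 4]]

alias = original                # no copy at all: the same object
shallow = copy.copy(original)   # new outer list, shared inner lists
deep = copy.deepcopy(original)  # everything duplicated recursively

print(alias is original)          # True
print(shallow is original)        # False
print(shallow[0] is original[0])  # True - inner lists are shared
print(deep[0] is original[0])     # False - fully independent
```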
