Extra Class 2
January 2, 2024
Key Components:
1. Base Case: The condition under which the recursion stops. It prevents infinite loops by
providing a simple, non-recursive exit.
2. Recursive Case: The part of the function that includes the recursive call, allowing the
function to break down complex problems into simpler ones.
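These two components can be seen in a minimal recursive sketch (the function is illustrative, not from the class material):

```python
def countdown(n):
    # Base case: the simple, non-recursive exit that stops the recursion
    if n == 0:
        return []
    # Recursive case: reduce the problem of size n to size n - 1
    return [n] + countdown(n - 1)

print(countdown(3))  # [3, 2, 1]
```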
0.2.3 Disadvantages:
1. Overhead: Each recursive call adds a new layer to the call stack, which can lead to significant
overhead.
2. Risk of Infinite Loops: If the base case is not properly defined, it can result in infinite
recursion, leading to a stack overflow.
(Overhead: any combination of excess or indirect computation time, memory, bandwidth, or other resources required to perform a specific task.)
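A missing base case makes the stack-overflow risk from item 2 easy to reproduce; a minimal sketch (the function name is illustrative):

```python
import sys

def no_base_case(n):
    # No base case: the recursion never stops
    return no_base_case(n - 1)

try:
    no_base_case(10)
except RecursionError:
    # Python raises RecursionError before the C stack actually overflows
    print("RecursionError at depth limit", sys.getrecursionlimit())
```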
0.2.4 Relationship to Soft Induction (Weak Induction)
Soft induction, also known as weak induction, is a method of mathematical proof used to demonstrate the truth of statements for all natural numbers. It is based on two key components:
1. Base Case: Prove that the statement P(k) holds true for an initial value k, usually k = 0 or k = 1.
2. Inductive Step: Assume that P(n) is true for some arbitrary natural number n ≥ k (the inductive hypothesis). Then show that, under this assumption, P(n+1) is also true.
By successfully demonstrating both the base case and the inductive step, we can conclude that the statement P(n) holds for all natural numbers n ≥ k according to the principle of soft induction.
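As a standard illustration, weak induction proves the closed form for the sum of the first n natural numbers:

```latex
% Claim P(n): 1 + 2 + \cdots + n = n(n+1)/2 for all n >= 1.
% Base case P(1): 1 = 1 \cdot 2 / 2. True.
% Inductive step: assume P(n) (the inductive hypothesis). Then
\[
(1 + 2 + \cdots + n) + (n+1) = \frac{n(n+1)}{2} + (n+1) = \frac{(n+1)(n+2)}{2},
\]
% which is exactly P(n+1). By weak induction, P(n) holds for all n >= 1.
```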
MAIN IDEA: INSTEAD OF SOLVING THE PROBLEM OF SIZE N, ASSUME YOU CAN SOLVE SIZE N-1 AND BUILD AN ALGORITHM ON TOP OF THAT.
[ ]: def bad_factorial(n):
    k = 1
    for i in range(1, n+1):
        k = k * i
    return k
[62]: bad_factorial(5)
120
[ ]: def factorial(n):
    if n == 1:  # Base case
        return 1
    return n * factorial(n - 1)  # Recursive case

# Example usage
print(factorial(5))  # Output: 120
print(factorial(9))  # Output: 362880
120
362880
[ ]: def fibonacci(n):
    if n == 1:
        return 1
    if n == 2:
        return 1
    my_list = [1, 1]
    for i in range(3, n+1):
        new_number = my_list[i-1-1] + my_list[i-2-1]
        my_list.append(new_number)
    return my_list[n-1]

# Example usage
print(fibonacci(10))  # Output: 55
55
Top-Down (Memoization):
This approach involves writing the recursive algorithm and storing the results of subproblems in a
table (generally using a hash table or array).
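The fibonacci_memo function called in the next cell is not shown in this export; a minimal sketch, assuming a plain dict as the lookup table:

```python
def fibonacci_memo(n, memo=None):
    # memo is the table of already-solved subproblems
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 2:
        return 1
    memo[n] = fibonacci_memo(n - 1, memo) + fibonacci_memo(n - 2, memo)
    return memo[n]
```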
[73]: fibonacci_memo(102)
[73]: 927372692193078999176
Bottom-Up (Tabulation):
This approach involves filling up a dynamic programming table by solving and storing the results of smaller subproblems first, which are then used to solve larger subproblems.
[74]: def fibonacci_tabulation(n):
if n <= 1:
return n
fib_table = [0, 1]
for i in range(2, n + 1):
fib_table.append(fib_table[i-1] + fib_table[i-2])
return fib_table[n]
[ ]: def tower_of_hanoi(n, source, target, auxiliary):
    if n == 0:  # Base case: nothing to move
        return
    tower_of_hanoi(n-1, source, auxiliary, target)
    print(f"Move disk {n} from {source} to {target}")
    tower_of_hanoi(n-1, auxiliary, target, source)

# Example usage
tower_of_hanoi(3, 'A', 'C', 'B')
Move disk 1 from A to C
Move disk 2 from A to B
Move disk 1 from C to B
Move disk 3 from A to C
Move disk 1 from B to A
Move disk 2 from B to C
Move disk 1 from A to C
[ ]: def binary_search(arr, target_value, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low > high:
        return -1  # Target not found
    mid = (low + high) // 2
    if arr[mid] == target_value:
        return mid
    elif arr[mid] > target_value:
        return binary_search(arr, target_value, low, mid - 1)
    else:
        return binary_search(arr, target_value, mid + 1, high)
[79]: # Example usage
arr = [2, 3, 4, 10, 40]
x = 10
result = binary_search(arr, x)
print(f"Element is present at index {result}")  # Output: Element is present at index 3
Problem Statement: Find if there’s a pair of elements in a sorted array that sum up to a given
value.
Solution:
• Pointers start at opposite ends of the array.
• Move the left pointer right or the right pointer left depending on the sum of the current pair.
Example sorted array with the two pointers a (left end) and b (right end): 1 4 6 8 11 14 21
[81]: def bad_target_pair_search(arr, target):
    n = len(arr)
    for i in range(n):
        for j in range(i + 1, n):  # start at i + 1 so an element is not paired with itself
            if arr[i] + arr[j] == target:
                return True
    return False
[ ]: def find_pair_with_sum(arr, target):
    left, right = 0, len(arr) - 1
    while left < right:
        current_sum = arr[left] + arr[right]
        if current_sum == target:
            return True
        elif current_sum < target:
            left += 1   # Need a larger sum
        else:
            right -= 1  # Need a smaller sum
    return False

# Example Usage
print(find_pair_with_sum([1, 2, 3, 4, 5], 9))  # Output: True
True
This two-pointer approach runs in O(n) time, since each pointer moves at most n steps in total.
Problem Statement: Given a sorted array, remove the duplicates in-place such that each element appears only once.
Solution:
• Use two pointers moving in the same direction.
• One pointer tracks the position for replacing elements, the other iterates through the array.
[18]: def remove_duplicates(arr):
    if not arr:
        return 0
    j = 0  # Index of the last unique element
    for i in range(1, len(arr)):
        if arr[i] != arr[j]:
            j += 1
            arr[j] = arr[i]  # Overwrite the next duplicate slot
    return j + 1  # Count of unique elements

# Example Usage
print(remove_duplicates([0, 0, 1, 1, 1, 2, 2, 3, 3, 4]))  # Output: 5
Problem Statement: Given an array nums and a value val, remove all instances of that value in-place and return the remaining elements. The order of elements can be changed, and you must modify the array in-place with O(1) extra memory.
Explanation: The function should remove all instances of val from nums. Only the first j positions matter; the elements beyond the new length are irrelevant.
[34]: def remove_element_in_place(nums, val):
j = 0
for i in range(len(nums)):
if nums[i] != val:
nums[j] = nums[i]
j += 1
# Elements beyond j are irrelevant
return nums[:j]

# Example Usage (input reconstructed to match the output below)
print(remove_element_in_place([3, 2, 2, 3], 3))  # Output: [2, 2]
[2, 2]
Time Complexity:
• The is_prime_basic function checks every number from 2 to n-1 to see if n is prime, taking
O(n) time.
• The count_primes_basic function calls is_prime_basic for each number from 2 to N, leading to a total time complexity of O(N · N) = O(N²).
Space Complexity:
• O(1) - Constant space, as no extra space is proportional to the input size.
Time Complexity:
• The is_prime_improved function checks every number from 2 to the square root of n to see
if n is prime, taking O(√n) time.
• The count_primes_improved function calls is_prime_improved for each number from 2 to N, resulting in an approximate total time complexity of O(N√N).
Space Complexity:
• O(1) - Constant space, as the improvement does not require additional space.
Time Complexity:
• The sieve iterates over each number up to N and marks its multiples as non-prime. The inner
loop runs approximately N/i times for each i, summing over all i to O(N log log N).
• This is significantly more efficient than the previous methods for large N.
Space Complexity:
• O(N) - An array of size N + 1 is used to keep track of prime numbers.
Conclusion
• The basic method has a quadratic time complexity, making it highly inefficient for large N.
• The improved method using the square root check has a better time complexity but is still
not efficient for very large N.
• The Sieve of Eratosthenes offers the best time efficiency, with near-linear O(N log log N) complexity, although it requires linear space.
[95]: def is_prime_basic(n):
if n <= 1:
return False
for i in range(2, n):
if n % i == 0:
return False
return True
def count_primes_basic(N):
count = 0
for i in range(2, N + 1):
if is_prime_basic(i):
count += 1
return count
# Example Usage
print(count_primes_basic(1000))  # Output: 168
168
import math

def is_prime_improved(n):
if n <= 1:
return False
for i in range(2, int(math.sqrt(n)) + 1):
if n % i == 0:
return False
return True
def count_primes_improved(N):
count = 0
for i in range(2, N + 1):
if is_prime_improved(i):
count += 1
return count
# Example Usage
print(count_primes_improved(10)) # Output: 4
def count_primes_sieve(N):
    if N < 2:
        return 0
    # Assume every number is prime, then mark composites False
    is_prime = [True] * (N + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(N ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, N + 1, i):
                is_prime[j] = False
    return sum(is_prime)

# Example Usage
print(count_primes_sieve(10))  # Output: 4
4
How Sieve of Eratosthenes Works:
1. Handle Edge Cases: If N is less than 2, return 0, as there are no prime numbers less than 2.
2. Initialize the is_prime List: Create a list is_prime of length N + 1 and set all elements to True, indicating that initially all numbers are considered prime. Explicitly set the first two elements (is_prime[0] and is_prime[1]) to False, as 0 and 1 are not prime numbers.
3. Implement the Sieve: Iterate over each number i from 2 up to the square root of N. The square root is used as an optimization: any non-prime number greater than the square root of N will have already been marked as non-prime by its smaller factors. If is_prime[i] is True, then i is prime, and we mark all multiples of i as non-prime (from i*i to N, in steps of i), because any multiple of i cannot be prime.
4. Count and Return the Primes: After the sieve process, every prime number up to N has True at the corresponding index in the is_prime list. Sum the True values in the is_prime list, which gives the count of prime numbers up to N.
Why square root?
Explaining Looping Until Square Root of n in Prime Checking:
Fundamental Property of Factors: When checking if a number n is prime, we rely on the fact that
any non-prime number must have at least one factor (other than 1 and itself) that is less than or
equal to its square root. This is based on the following reasoning:
1. Factor Pairs: Every number n can be expressed as a product of two factors, say a and b,
such that n = a * b.
2. Factor Comparison: If both a and b were greater than the square root of n, then their
product a * b would be greater than n. This is because sqrt(n) * sqrt(n) = n, and if
both a and b are greater, a * b would exceed n.
3. Converse Implication: If n is not a prime number (meaning it has factors other than 1
and itself), then at least one of its factors must be less than or equal to sqrt(n).
Why Loop Until Square Root:
• Efficiency: Since if n is non-prime, it must have a factor less than or equal to sqrt(n), we
only need to check divisibility for numbers up to sqrt(n). If n is not divisible by any number
in this range, it cannot be divisible by larger numbers, and hence it is prime.
• Reduces Complexity: This significantly reduces the number of iterations in the loop,
especially for large numbers, transforming a potentially O(n) process into O(√n), which is a
substantial improvement.
Example:
• Consider n = 100. Its square root is 10.
• Factors of 100 are (1, 100), (2, 50), (4, 25), (5, 20), and (10, 10).
• Notice that for every factor greater than 10, there is a corresponding factor that is less than
10. Therefore, checking beyond 10 is redundant.
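The pairing argument is easy to verify in code; a small sketch (the helper name is illustrative):

```python
def factor_pairs(n):
    # Collect every pair (a, b) with a * b == n and a <= b.
    # a never needs to exceed sqrt(n): past that point the pairs repeat, swapped.
    pairs = []
    a = 1
    while a * a <= n:
        if n % a == 0:
            pairs.append((a, n // a))
        a += 1
    return pairs

print(factor_pairs(100))  # [(1, 100), (2, 50), (4, 25), (5, 20), (10, 10)]
```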
Conclusion:
Looping up to the square root of n is both sufficient and efficient for checking prime numbers.
It ensures that all possible factors are covered, while significantly reducing the computational
workload, particularly for large n. This method is a great example of how understanding the
properties of numbers can lead to more efficient algorithms.
[ ]: import time

N = 100_000  # Example size for timing (assumed; the original cell defined N earlier)

# Timing Improved Method
start_time = time.time()
count_primes_improved(N)
improved_duration = time.time() - start_time
1. Singly Linked List: Each node contains data and a single pointer to the next node, allowing traversal in one direction only.
2. Doubly Linked List: Each node has two pointers, one to the next node and another to the previous node, allowing for easier bidirectional traversal.
3. Circular Linked List: The last node points back to the first node or any other node in the
list, making the list a circle of nodes.
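A node for the doubly linked list in item 2 can be sketched as (the class name is illustrative):

```python
class DNode:
    def __init__(self, data):
        self.data = data
        self.next = None  # Pointer to the next node
        self.prev = None  # Pointer to the previous node

# Link two nodes in both directions
a = DNode(1)
b = DNode(2)
a.next = b
b.prev = a
print(b.prev.data, a.next.data)  # 1 2
```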
Basic Operations
1. Traversal: Access each element of the linked list.
2. Insertion: Add elements to the list.
3. Deletion: Remove elements from the list.
4. Search: Find an element in the list.
5. Update: Change the value of an existing element.
Use Cases
• Linked lists are used in situations where efficient insertion and deletion of elements are required and memory allocation is dynamic.
• They are fundamental in creating more complex data structures like stacks, queues, graphs,
and more.
Understanding linked lists is crucial for anyone delving into data structures and algorithms as they
form the basis of many complex structures and operations in computer science.
[62]: class Node:
def __init__(self, data):
self.data = data
self.next = None
class LinkedList:
def __init__(self):
self.head = None
def printList(self):
temp = self.head
while temp:
print(temp.data, end=" ")
temp = temp.next
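Insertion and search from the Basic Operations list can be sketched on the same node structure (made self-contained here for clarity; the helper names are illustrative):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def push_front(head, data):
    # Insertion at the head: O(1)
    node = Node(data)
    node.next = head
    return node

def search(head, target):
    # Search by traversal: O(n)
    temp = head
    while temp:
        if temp.data == target:
            return True
        temp = temp.next
    return False

head = None
for v in [3, 2, 1]:
    head = push_front(head, v)  # List becomes 1 -> 2 -> 3
print(search(head, 2), search(head, 99))  # True False
```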
[ ]:
def reverse_linked_list(head):
prev = None
current = head
while current:
next_node = current.next # Save next node
current.next = prev # Reverse the link
prev = current # Move prev up
current = next_node # Move current up
return prev # New head of the reversed list
# Helper functions (reconstructed; the original definitions were in an earlier cell)
def append_to_list(head, value):
    new_node = Node(value)
    if head is None:
        return new_node
    temp = head
    while temp.next:
        temp = temp.next
    temp.next = new_node
    return head

def print_list(head):
    temp = head
    while temp:
        print(temp.data, end=" ")
        temp = temp.next
    print()

# Example Usage
head = None
for value in [1, 2, 3, 4, 5]:
    head = append_to_list(head, value)
print("Original List:")
print_list(head)
head = reverse_linked_list(head)
print("Reversed List:")
print_list(head)
Original List:
1 2 3 4 5
Reversed List:
5 4 3 2 1
Shallow Copy A shallow copy creates a new object but does not create copies of nested objects.
Instead, it copies the references to those objects. Changes made in nested objects of a shallow
copied object will be reflected in the original object and vice versa.
• Python Implementation: Use the copy() method or the copy module’s copy() function
for a shallow copy.
• Implications: Any changes made to mutable objects in the shallow copy will affect the
original object since the nested objects are not actually copied.
[89]: ##### Example of Shallow Copy in Python:
import copy

original_list = [[1, 2], [3, 4]]
shallow_copied_list = copy.copy(original_list)  # Copies references to the inner lists

shallow_copied_list[0][0] = "Changed!"
print(shallow_copied_list)
print(original_list)  # The original list is affected.
Deep Copy A deep copy creates a new object and recursively adds the copies of nested objects
present in the original. This means it copies all objects recursively, duplicating everything. A deep
copy doesn’t reflect changes made to the new copied object in the original object and vice versa.
• Python Implementation: Use the copy module’s deepcopy() function for a deep copy.
• Implications: The deep copy is completely independent of the original object, including all
nested objects.
[90]: ##### Example of Deep Copy in Python:
import copy

original_list = [[1, 2], [3, 4]]
deep_copied_list = copy.deepcopy(original_list)  # Recursively copies the inner lists

deep_copied_list[0][0] = "Changed!"
print(deep_copied_list)
print(original_list)  # The original list is NOT affected.
Key Differences
References: In a shallow copy, the copied object itself is a new object, but the elements within it are references to the same nested objects as in the original.
Recursion: Deep copy involves recursion in copying all nested objects, making it completely independent of the original.
Performance: Deep copying is generally slower and consumes more memory, as it involves creating copies of every nested object.
When to Use Which
Shallow Copy: When you want a new container object with references to the original elements, and shared changes to nested objects are acceptable.
Deep Copy: When you need a completely independent copy of an object and all its nested objects, so that changes to one never affect the other.
Understanding these copying mechanisms is crucial for effective memory management and predictable program behavior, especially in languages like Python, where object references and mutable types are common.