Slide 2

These are the Java algorithm lessons we study at university.


Algorithm

Need of Algorithm

► 1. To understand the basic idea of the problem.
► 2. To find an approach to solve the problem.
► 3. To improve the efficiency of existing techniques.
► 4. To understand the basic principles of designing algorithms.
► 5. To compare the performance of an algorithm against other techniques.
► 6. It is the best way to describe a solution without giving implementation details.
► 7. An algorithm gives the designer a clear description of the requirements and the goal of the problem.
► 8. A good design can produce a good solution.
► 9. To understand the flow of the problem.
Continued…

► 10. To measure the behavior (or performance) of a method in all cases (best case, worst case, average case).
► 11. With the help of an algorithm, we can also identify the resources (memory, input/output cycles) required by the algorithm.
► 12. With the help of an algorithm, we convert art into science.
► 13. To understand the principles of design.
► 14. We can measure and analyze the complexity (time and space) of a problem with respect to input size without implementing and running it; this reduces the cost of design.
Complexity of Algorithm

► The term algorithm complexity measures how many steps an algorithm requires to solve the given problem. It evaluates the order of the number of operations executed by an algorithm as a function of the input data size.
► To assess the complexity, the order (an approximation) of the operation count is considered rather than the exact number of steps.
► The complexity of an algorithm is written in O(f) notation, also termed asymptotic notation or "Big O" notation, where f is a function of the size of the input data. The asymptotic complexity O(f) determines the order in which resources such as CPU time and memory are consumed by the algorithm, expressed as a function of the input data size.
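As an illustration (a minimal sketch, not from the slides), consider a method whose exact step count is roughly 2n + 3; dropping the constants leaves O(n):

    // Sums an array: 1 initialization + n additions + n loop steps + 1 return.
    // Exact count ≈ 2n + 3 operations, so the order, and the complexity, is O(n).
    static int sum(int[] a) {
        int total = 0;          // 1 step
        for (int x : a) {       // n iterations
            total += x;         // 1 step per element
        }
        return total;           // 1 step
    }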
Continued…

► The complexity can take any form, such as constant, logarithmic, linear, n*log(n), quadratic, cubic, exponential, etc. It is simply the order (constant, logarithmic, linear, and so on) of the number of steps taken to complete a particular algorithm. Informally, the complexity of an algorithm is often referred to as its "running time".
Typical Complexities of an Algorithm

► Constant Complexity:
► It imposes a complexity of O(1). The algorithm executes a constant number of steps, such as 1, 5, or 10, to solve the given problem. The count of operations is independent of the input data size.
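A minimal sketch (not from the slides): accessing an array element takes the same number of steps no matter how long the array is:

    // O(1): one bounds check and one read, regardless of a.length.
    static int first(int[] a) {
        return a[0];
    }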
► Logarithmic Complexity:
► It imposes a complexity of O(log(N)). The algorithm executes on the order of log(N) steps. To perform operations on N elements, the logarithm is usually taken base 2.
► For N = 1,000,000, an algorithm with complexity O(log(N)) would take about 20 steps (up to a constant factor). Because the base of the logarithm does not affect the order of the operation count, it is usually omitted.
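The classic O(log(N)) example is binary search, which halves the remaining range on every step. A minimal sketch (assuming the array is already sorted):

    // Binary search in a sorted array: each iteration halves the search range,
    // so at most about log2(n) iterations are needed. Returns the index or -1.
    static int binarySearch(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] == key) return mid;
            else if (a[mid] < key) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }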
Linear Complexity:

► It imposes a complexity of O(N). Performing an operation on N elements takes a number of steps of the same order as the number of elements.
► For example, if there are 500 elements, it will take about 500 steps. Basically, in linear complexity, the number of steps depends linearly on the number of elements; for N elements, the step count can be, for example, N/2 or 3*N.
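A minimal sketch of a linear-time operation (not from the slides): a sequential scan touches every element once:

    // Linear scan: up to n comparisons for n elements, hence O(N).
    static boolean contains(int[] a, int key) {
        for (int x : a) {
            if (x == key) return true;
        }
        return false;
    }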
► N*log(N) Complexity: It imposes a running time of O(N*log(N)). It executes on the order of N*log(N) steps on N elements to solve the given problem.
► For 1000 elements, an N*log(N) algorithm will execute about 10,000 steps (1000 * log2(1000) ≈ 1000 * 10) to solve the given problem.
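Sorting is the standard N*log(N) example. A sketch using the standard library (java.util.Arrays.sort, which for primitive arrays is a dual-pivot quicksort with O(N*log(N)) average running time):

    import java.util.Arrays;

    // Sorting N elements costs on the order of N*log(N) comparisons.
    static void sortDemo() {
        int[] data = {5, 2, 9, 1, 7};
        Arrays.sort(data);                          // O(N*log(N)) on average
        System.out.println(Arrays.toString(data));  // [1, 2, 5, 7, 9]
    }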
Continued…

► Quadratic Complexity: It imposes a complexity of O(N²). For input data of size N, it performs on the order of N² operations on the N elements to solve the given problem.
► If N = 100, it will take 10,000 steps. In other words, whenever the order of operations has a quadratic relation to the input data size, the result is quadratic complexity. For example, for N elements, the steps may be on the order of 3*N²/2.
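A minimal sketch (not from the slides): two nested loops over the same N elements perform N² operations:

    // Nested loops: the inner statement runs n * n times, hence O(N²).
    static int countEqualPairs(int[] a) {
        int pairs = 0;
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < a.length; j++) {
                if (a[i] == a[j]) pairs++;   // executed n² times
            }
        }
        return pairs;
    }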
► Cubic Complexity: It imposes a complexity of O(N³). For input data of size N, it executes on the order of N³ steps on the N elements to solve the given problem.
► For example, if there are 100 elements, it will execute about 1,000,000 steps.
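Naive matrix multiplication is the textbook cubic case. A sketch for two N x N matrices (an illustration, not from the slides):

    // Triple nested loop: n * n * n multiplications, hence O(N³).
    static int[][] multiply(int[][] a, int[][] b) {
        int n = a.length;
        int[][] c = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }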
Continued…

► Exponential Complexity: It imposes a complexity such as O(2^N) or O(N!). For N elements, it executes a count of operations that depends exponentially on the input data size.
► For example, if N = 10, the exponential function 2^N results in 1,024. Similarly, if N = 20 it results in 1,048,576, and if N = 100 it results in a number with 31 digits. The factorial function N! grows even faster: N = 5 results in 120, N = 10 results in 3,628,800, and so on.
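A minimal sketch of an exponential-time algorithm (not from the slides): the naive recursive Fibonacci makes two recursive calls at each level, so its call count blows up exponentially with n:

    // Naive Fibonacci: T(n) = T(n-1) + T(n-2) + 1, which grows exponentially
    // (roughly 1.618^n calls), so even n = 50 is already impractically slow.
    static long fib(int n) {
        if (n <= 1) return n;
        return fib(n - 1) + fib(n - 2);
    }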
Note:

► Since the constants do not have a significant effect on the order of the operation count, they are ignored. Thus, algorithms that perform N, N/2, or 3*N operations on the same number of elements are all considered linear and, in that sense, equally efficient.
How to approximate the time taken by
the Algorithm?
► To find this out, we must first understand the types of algorithms we have. There are two types, illustrated in the sketch below:
► Iterative Algorithm: In the iterative approach, the function runs repeatedly until a condition is met or it fails. It involves a looping construct.
► Recursive Algorithm: In the recursive approach, the function calls itself until a condition is met. It involves a branching structure.
► However, it is worth noting that any program written with iteration can be rewritten with recursion and, likewise, any recursive program can be converted to iteration, making the two approaches equivalent to each other.
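As a small illustration (a sketch, not from the slides), here is the same computation, factorial, in both styles:

    // Iterative factorial: a loop that runs n - 1 times.
    static long factorialIterative(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++) result *= i;
        return result;
    }

    // Recursive factorial: the function calls itself until the base case.
    static long factorialRecursive(int n) {
        if (n <= 1) return 1;               // stopping (base) condition
        return n * factorialRecursive(n - 1);
    }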
Continued…

► But to analyze an iterative program, we count the number of times the loop executes, whereas for a recursive program we use recurrence equations, i.e., we express a function F(n) in terms of smaller instances such as F(n-1) or F(n/2).
► Suppose the program is neither iterative nor recursive. In that case, we can conclude that the running time does not depend on the input data size, i.e., whatever the input size, the running time is a constant value. For such programs, the complexity is O(1).
For Iterative Programs

► Consider the following programs, which are written in simple English and do not correspond to any particular syntax.
► In the first example, we have an integer i and a for loop running from i equals 1 to n. Now the question arises: how many times does the name get printed?
A
{
    int i;
    for (i = 1 to n)
        printf("Edward");
}
Continued…

► Since i runs from 1 to n, the above program will print "Edward" n times. Thus, the complexity will be O(n).
Example 2:

A
{
    int i, j;
    for (i = 1 to n)
        for (j = 1 to n)
            printf("Edward");
}
► In this case, the outer loop will run n times, and for each of those iterations, the inner loop will also run n times. Thus, the time complexity will be O(n²).
Example 3:

A
{
    i = 1; S = 1;
    while (S <= n)
    {
        i++;
        S = S + i;
        printf("Edward");
    }
}
Continued…

► As we can see from the above example, we have two variables, i and S, and the condition while (S <= n): S starts at 1, and the loop stops as soon as the value of S becomes greater than n.
► Here i increments in steps of one, while S increments by the value of i, i.e., the increment in i is linear, but the increment in S depends on i.
► Initially: i = 1, S = 1
► After the 1st iteration: i = 2, S = 3
► After the 2nd iteration: i = 3, S = 6
► After the 3rd iteration: i = 4, S = 10 … and so on.
► Since S grows as the sum 1 + 2 + … + i ≈ i²/2, the loop terminates once i²/2 exceeds n, i.e., after about √(2n) iterations. Thus, the complexity of this program is O(√n).

For Recursive Program

A(n)
{
    if (n > 1)
        return A(n-1);
}
► Solution:
► Here we will use the simple back-substitution method to solve the above recurrence.
► T(n) = 1 + T(n-1) …Eqn. (1)
Continued…

► Step1 Substitute n-1 at the place of n in Eqn. 1


► T(n-1) = 1  T(n-2) ...Eqn. 2
► Step2 Substitute n-2 at the place of n in Eqn. 1
► T(n-2) = 1  T(n-3) …Eqn. 3
► Step3 Substitute Eqn. 2) in Eqn. 1
► T(n)= 1  1 T(n-2) = 2  T(n-2) …Eqn. 4
► Step4 Substitute eqn. 3) in Eqn. 4
► T(n) = 2  1  T(n-3) = 3  T(n-3) = …... = k + T(n-k) …Eqn. 5
Continued…

► Now, according to Eqn. (1), i.e., T(n) = 1 + T(n-1), the algorithm runs as long as n > 1. Basically, n starts from some large number and decreases gradually. The recursion eventually stops when n reaches 1; such a terminating condition is called the anchor condition, base condition, or stopping condition, and here T(1) = 1.
► Thus, for k = n-1, T(n) becomes:
► Step 5: Substitute k = n-1 in Eqn. (5)
► T(n) = (n-1) + T(n-(n-1)) = (n-1) + T(1) = (n-1) + 1 = n
► Hence, T(n) = n, i.e., O(n).
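A runnable Java sketch of this recursion (not from the slides), instrumented to count calls and confirm that T(n) = n:

    // The recursive program from the slide, returning the total number of calls.
    // A(n) does one unit of work and recurses on n-1, so calls(n) == n.
    // (For very large n this would overflow the call stack, which is one
    // practical difference between recursion and iteration.)
    static long calls(long n) {
        if (n > 1) return 1 + calls(n - 1);
        return 1;   // base case: n == 1
    }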