Efficient Coding - LEX
This course will guide you through different coding practices that result in efficient code. It covers basic concepts of programming such as time complexity, profiling, recursion, and functional programming. It also explains advanced algorithmic techniques such as the greedy approach, divide and conquer, and dynamic programming.
Learners will study the basics, which include time complexity, space complexity, recursion, and functional programming, as well as the advanced techniques: the greedy approach, divide and conquer, and dynamic programming. Learners will also be able to assess
themselves on the different algorithms.
Languages covered:
Python
JavaScript
Java
Prerequisites
Programming Fundamentals
Time complexity
As of 2018, Gmail has more than 1 billion monthly active users. Yet it can verify whether a
given username (a string) is among those 1 billion entries in a few milliseconds!
It is very important that the code we write takes the least possible time to run. One way to find out
the runtime of a program is to actually run it (even ignoring variables such as system speed). But
what if, after writing a big piece of code of about 1000 lines, you find that it takes an hour to execute?
All the effort behind the code goes to waste. There is also a bigger problem: when you test your code, it
may run fast on the small data set you use. But can you guarantee that your code will run
similarly fast on a bigger data set of, say, a million entries? Therefore we need to be able to
determine the runtime of a program at the algorithmic stage itself.
One of the simplest ways to measure time complexity is to assume each operation takes a unit of
time and add up the total operations in the algorithm. However, the time taken by an algorithm
depends not just on the size of the input but also on the type of input. For example, if a sorting
algorithm is fed already-ordered numbers, it has almost no work to do, even if we
supply 1 billion numbers to it. The same algorithm may have to do far more work for just 5
numbers supplied in unsorted order.
Thus the time complexity of an algorithm may be best case, average case, or worst case, based on the type of data
supplied. For practical purposes, we will go with the worst-case time complexity: in other words,
the maximum time the algorithm can take to execute for a given input size. Let us take
a look at how the worst-case complexity is represented.
Big O Notation
We are not interested in the best-case scenario, nor in how the algorithm behaves for small data sizes.
We want to know how the algorithm behaves, in terms of time complexity, in the worst-case
scenario as the problem size grows. This is represented in Big O notation.
Sometimes an algorithm takes the same amount of time no matter what the input size is. For
example, consider the operation of fetching the element at a given index position of an array. This
takes constant time no matter what the size of the array is: a[100] takes the same time
whether the array a has 1000 elements or 1 million elements.
In such cases, we say the worst case asymptotic time complexity of the algorithm or Big O is a
constant. This is represented as O(1).
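For instance, in JavaScript (a sketch; the array sizes and the function name are illustrative):

```javascript
// Accessing an element by index is O(1): the lookup cost does not
// grow with the array's length.
const small = new Array(1000).fill(7);
const large = new Array(1000000).fill(7);

function elementAt(arr, i) {
  return arr[i]; // a single indexed read, regardless of arr.length
}

console.log(elementAt(small, 100)); // same cost for either array
console.log(elementAt(large, 100));
```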
Consider an example where we have a text with 1000 words and we want to find whether a particular word
exists in it. We may have to look through each and every word before we can conclude whether the given
word occurs in the text or not. Thus, as the number of words in the text increases, the time taken to
find the word also increases.
For example,
if we have three words in a given text and, going by the worst-case scenario, the word we want
is the last word, then we need to examine all three words.
Thus the time taken increases linearly with the size of the input. Such a time complexity is
represented as O(n)
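This word-by-word scan can be sketched as a linear search (the function name is illustrative):

```javascript
// Linear search: in the worst case (word absent, or last in the
// text), we examine all n entries, so time grows linearly -- O(n).
function findWord(words, target) {
  for (let i = 0; i < words.length; i++) {
    if (words[i] === target) {
      return i; // found at index i
    }
  }
  return -1; // not present: we had to look at every word
}

console.log(findWord(["the", "quick", "fox"], "fox")); // worst case: last word
```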
Calculating Big O
There are some simple steps in calculating Big O. Let us consider the below abstract pseudo code,
which has a few operations and loops:
function check(n):
operation 1..
operation 2..
loop .. 1 to n..
loop .. 1 to n..
loop .. 1 to n..
Assuming each operation takes time T, to calculate the Big O of the above abstract pseudo code, we
need to:
add up the cost of every operation: 2T for the two operations plus 3nT for the three loops, giving a total of 2T + 3nT
drop the constant terms and coefficients, keeping only the fastest-growing term, which is n
Thus the time complexity of this pseudo code is O(n).
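The pseudo code above can be sketched concretely in JavaScript, with the operation counts noted in comments (the loop bodies are illustrative, and the loops are taken as sequential rather than nested):

```javascript
function check(n) {
  let a = 0;                            // operation 1: one unit of time T
  let b = 1;                            // operation 2: one unit of time T
  for (let i = 0; i < n; i++) a += i;   // loop 1: n operations
  for (let i = 0; i < n; i++) a -= i;   // loop 2: n operations
  for (let i = 0; i < n; i++) b += i;   // loop 3: n operations
  // Total ~ 2T + 3nT; dropping constants and coefficients gives O(n).
  return a + b;
}
```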
Let us take a look at what these Big O values mean and what they actually indicate.
Big O values indicate how the given algorithm scales with the size of its input. The common
values and their implications are:
Big O      Scalability
1          Excellent
log n      Good
n          Average
n log n    Bad
n^2        Very Bad
2^n        Very Bad
n!         Very Bad
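To make the "Good" row concrete, binary search on a sorted array halves the remaining search range on each comparison, which is an O(log n) algorithm (a sketch):

```javascript
// Binary search: each comparison discards half of the remaining
// range, so at most about log2(n) steps are needed -- O(log n).
function binarySearch(sorted, target) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid; // found
    if (sorted[mid] < target) lo = mid + 1; // keep right half
    else hi = mid - 1;                      // keep left half
  }
  return -1; // not present
}
```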
Recursion
Recursion is what happens when a function calls itself to solve a given
problem. Just as with loops, we also need to specify when the function should stop calling itself.
Such a condition is called the base condition. For example, consider the below code,
which prints "Hello World" repeatedly.
function greeting(n) {
  console.log("Hello World");
  greeting(n - 1);
}
greeting(10);
In the above code, the greeting function calls itself repeatedly. But we have not defined a
base condition, so the recursion never stops and the program eventually crashes with a stack overflow.
function greeting(n) {
  if (n == 0) {
    return; // base condition: stop recursing
  }
  console.log("Hello World");
  greeting(n - 1);
}
greeting(10);
In the above code, when n becomes 0, we return from the function. Let us see how to use
recursion to perform some calculations.
Recursion examples
Let us look at some more examples of recursion to perform some calculations. The below code
calculates the sum of first n numbers:
function sum(n) {
  if (n == 1) {
    return 1; // base condition
  }
  else {
    return n + sum(n - 1);
  }
}
var d = sum(5);
console.log(d);
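Another classic calculation along the same lines (an additional illustration, not from the original text) is factorial, where n! = n × (n-1)! and 0! = 1:

```javascript
// factorial(n) = n * factorial(n - 1), with factorial(0) = 1 as
// the base condition that stops the recursion.
function factorial(n) {
  if (n === 0) {
    return 1; // base condition
  }
  return n * factorial(n - 1);
}

console.log(factorial(5)); // 5 * 4 * 3 * 2 * 1
```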
Every recursion can be replaced with an equivalent iterative version. So, if every recursion
can be replaced with an iteration, why do we need recursion? And if recursion is good, why
do we need iteration?
Recursion issues
Interestingly, the below iterative algorithm to calculate the sum of n numbers has the same
complexity, O(n), but avoids the recursive calls (and their stack usage) entirely:
function sum(n) {
  var total = 0;
  while (n != 0) {
    total += n;
    n--;
  }
  return total;
}
var r = sum(500000);
console.log(r);
In fact, the same sum can even be computed in constant time, O(1), using the closed-form formula n(n+1)/2:
function sum(n) {
  return n * (n + 1) / 2;
}
So, do the pros of recursion outweigh the cons? Not always. However, there are ways to optimize recursion,
which we will discuss later.
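As a brief preview of one such optimization: memoization caches the result of each call so repeated sub-problems are not recomputed (a sketch; the cache parameter is illustrative):

```javascript
// Naive recursive Fibonacci recomputes the same sub-problems many
// times; caching each result reduces the work to O(n) calls.
function fib(n, cache = {}) {
  if (n <= 1) return n; // base conditions: fib(0) = 0, fib(1) = 1
  if (cache[n] !== undefined) return cache[n]; // reuse earlier result
  cache[n] = fib(n - 1, cache) + fib(n - 2, cache);
  return cache[n];
}
```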
Functional Programming
Functional Programming is a very interesting paradigm (like OOP) which looks at solving all the
requirements of the program using functions alone. Such functions do not mutate data
outside their own scope, and hence they are also called "Pure Functions".
There are many languages which support functional paradigm. Some of them are:
Scala
Python
JavaScript
Java 8+
Functional programming depends a lot on a concept called "Higher Order Functions". A higher
order function is simply a function which can either accept another function as a parameter or
return another function as its result.
function hello() {
  console.log("Hello");
}
function greet(callback) {
  callback();
}
greet(hello);
Here, greet is a higher order function because it takes another function as a parameter, in this
case the hello() function. A function passed as an argument in this way is called a
"callback" function.
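The other half of the definition, a function that returns another function, can be sketched like this (the names are illustrative):

```javascript
// makeGreeter is a higher order function because its result is
// another function, specialized with the given greeting word.
function makeGreeter(word) {
  return function (name) {
    return word + ", " + name;
  };
}

const sayHello = makeGreeter("Hello"); // sayHello is itself a function
console.log(sayHello("World"));
```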
Lambda expressions
A lambda expression is a short, anonymous function, typically written inline where a function is needed only once.
For example, let us take a look at how to write a function which accepts two
parameters and returns their product in different languages:
In Javascript:
1. (x,y)=>x*y
In Java:
1. (x,y)->x*y
In Python:
lambda x, y: x * y
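Such an expression can be stored in a variable and called like any other function; in JavaScript, for example:

```javascript
// An arrow function (lambda) assigned to a variable and invoked.
const product = (x, y) => x * y;
console.log(product(3, 4));
```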
Let us take a look at where these lambda functions are used extensively
Lambda expressions are primarily used in the below common higher order functions (HOF):
Foreach – a HOF which invokes a callback for every value in the collection, passing the
currently iterated value to the callback
Map – a HOF which invokes the callback for every value in the collection, passing the
currently iterated value to the callback. It collects each value returned by the callback
into a new collection of the same size
Filter – a HOF which invokes the callback (a predicate) for every value in the collection, passing the
currently iterated value to the callback. It keeps only the values for which the callback returns
true, producing a new collection that may be smaller
In JavaScript:
let num_list = [10, 20, 30, 40, 50];
num_list.forEach(num => console.log(num));
console.log(num_list.map(num => num * 2));
console.log(num_list.filter(num => num > 30));
In Python:
num_list = [10, 20, 30, 40, 50]
print("Map example")
print(list(map(lambda x: x * 2, num_list)))
print("Filter Example")
print(list(filter(lambda x: x > 30, num_list)))
print("Chain Example")
print(list(filter(lambda x: x > 30, map(lambda x: x * 2, num_list))))