
Efficient Coding

This course will guide you through coding practices that result in efficient code. It introduces basic programming concepts such as time complexity, profiling, recursion, and functional programming, and then explains advanced algorithmic techniques such as the greedy approach, divide and conquer, and dynamic programming.

What you will learn

Learners will cover the basics of algorithm analysis, including time complexity, space complexity, recursion, and functional programming, as well as advanced algorithms including the greedy approach, divide and conquer, and dynamic programming. Learners will also be able to assess themselves on the different algorithms.

Skills you will gain

Python

JavaScript

Java - ALL

Prerequisites

Programming Fundamentals

Time complexity

As of 2018, Gmail has more than 1 billion monthly active users. Yet it is able to verify whether a given username (or string) is among those 1 billion entries or not in a few milliseconds!

(Screenshots: the sign-in response without a valid Gmail account and with a valid Gmail account.)

It is very important that the code we write takes the least possible time to run. One way to find out the runtime of a program is to actually run it (even ignoring variables such as system speed). But what if, after writing a big piece of code of about 1000 lines, you find that it takes an hour to execute? All the effort behind the code goes to waste. There is also a bigger problem: when you test your code, it may run fast on the small data set you use, but can you guarantee that it will run similarly fast on a bigger data set of perhaps a million records? Therefore, we need to be able to determine the runtime of a program at the algorithmic stage itself.

Time Complexity is the measure of time taken by an algorithm to run.


Measuring Time complexity

One of the simplest ways to measure time complexity is to assume each operation takes a unit of time and add up the total operations in the algorithm. However, the time taken by an algorithm depends not just on the size of the input but also on the type of input. For example, if a sorting algorithm is fed already-ordered numbers, it has hardly any work to do, even if we supply 1 billion numbers to it. However, the same algorithm may have to do more work for even 5 numbers supplied in unsorted order.

Thus the time complexity of an algorithm may be a best, average, or worst case, based on the type of data supplied. For practical purposes, we will go with the worst-case time complexity: in other words, the maximum time this algorithm can take to execute for a given size of input. Let us take a look at how the worst-case complexity is represented.

Big O Notation

We are not interested in the best-case scenario, nor in how the algorithm behaves for small data sizes. We are interested in how the algorithm behaves, in terms of time complexity, in the worst-case scenario as the problem size increases. This is represented in Big O notation.

Sometimes an algorithm takes the same amount of time no matter what the input size is. For example, consider the operation of accessing an array element at a given index position. This takes constant time no matter what the size of the array is: a[100] is going to take the same time whether the array a has 1,000 elements or 1 million elements.

In such cases, we say the worst-case asymptotic time complexity of the algorithm, or Big O, is constant. This is represented as O(1).
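As a minimal sketch (the function name getAt is illustrative, not from the course), the lookup below performs exactly one array access, so it is O(1) regardless of the array's length:

function getAt(a, i) {
    // a single index lookup, independent of a.length
    return a[i];
}

var big = new Array(1000000).fill(7);
console.log(getAt(big, 100));   // takes the same time as for a 10-element array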

Consider an example where we have a text with 1000 words and we want to find whether a particular word exists in it or not. We have to look through each and every word before we can conclude whether the given word occurs in the text. Thus, as the number of words in the text increases, the time taken to find the word also increases.

For example,

• if we have three words in a given text and, going by the worst-case scenario, the word we want is the last word, then we need to check all three words.

• if we have 10 words in a text, we need to check all 10 words.

• if we have n words, we need to check all n words.

Thus the time taken increases linearly with the size of the input. Such a time complexity is represented as O(n).

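As an illustrative sketch (the function containsWord and the word list are my own, not from the course), a linear word search in JavaScript looks like this; in the worst case it compares every word once:

function containsWord(words, target) {
    // checks the words one by one: worst case is n comparisons, so O(n)
    for (var i = 0; i < words.length; i++) {
        if (words[i] === target) {
            return true;
        }
    }
    return false;
}

console.log(containsWord(["the", "quick", "brown", "fox"], "fox"));   // true
console.log(containsWord(["the", "quick", "brown", "fox"], "cat"));   // false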

Calculating Big O
There are some simple steps in calculating Big O. Let us consider the abstract pseudocode below, which has a few operations and loops:

function check(n):
    operation 1
    operation 2
    loop 1 to n
    loop 1 to n
    loop 1 to n
        loop 1 to n

Assuming each operation takes time T, to calculate the Big O of the above pseudocode, we need to:

• Sum up the total time for all operations: T + T + n·T + n·T + n²·T = 2T + 2nT + n²T

• Drop the constants: n + n²

• Drop the non-dominant terms: O(n²)
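As a concrete, illustrative JavaScript sketch of the same structure (two constant-time operations, two simple loops, and one pair of nested loops):

function check(n) {
    var a = 0;                                   // operation 1: T
    var b = 1;                                   // operation 2: T
    for (var i = 0; i < n; i++) { a += i; }      // first loop: n * T
    for (var j = 0; j < n; j++) { b += j; }      // second loop: n * T
    for (var k = 0; k < n; k++) {                // nested loops: n * n * T
        for (var l = 0; l < n; l++) {
            a += k * l;
        }
    }
    return a + b;
}
// Total work: 2T + 2nT + n²T, which simplifies to O(n²)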

Let us take a look at what these Big O values mean and what they actually indicate.


Analysing Big O values

Big O values indicate how the given algorithm scales with the size of the input. The common values and their implications are:

Big O       Scalability
1           Excellent
log n       Good
n           Average
n log n     Bad
n²          Very Bad
2ⁿ          Very Bad
n!          Very Bad

Read further here: http://bigocheatsheet.com/

A good way to visualize various Big O values: https://www.desmos.com/calculator/xpfyjl1lbn
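To get a rough feel for these growth rates, the small illustrative sketch below prints the values of log n, n log n, and n² for a few input sizes:

[10, 100, 1000].forEach(n => {
    console.log(
        "n = " + n,
        "| log n:", Math.round(Math.log2(n)),
        "| n log n:", Math.round(n * Math.log2(n)),
        "| n²:", n * n
    );
});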


Recursion introduction

Recursion is basically what happens when a function calls itself multiple times to solve a given problem. Just like loops, we also need to specify when that function should stop calling itself. Such a condition is what we call a base condition. For example, consider the code below, which prints "Hello world" repeatedly.

function greeting(n) {
    console.log("Hello world ", n)
    greeting(n - 1)
}

greeting(10);

In the above code, the greeting function calls itself multiple times. But we have not defined a base condition for it, and hence this leads to infinite recursion (the program eventually crashes with a stack overflow).

Let us add a base condition and observe the code:

function greeting(n) {
    if (n == 0) {
        return;
    }
    console.log("Hello world ", n)
    greeting(n - 1)
}

greeting(10);

In the above code, when n becomes 0, we return from the function. Let us see how to use
recursion to perform some calculations.

Recursion examples

Let us look at some more examples of using recursion to perform calculations. The code below calculates the sum of the first n numbers:

function sum(n) {
    if (n == 1) {
        return 1
    }
    else {
        return n + sum(n - 1);
    }
}
var d = sum(5);
console.log(d);

Every recursion can be replaced with an equivalent iterative version. So, if every recursion can be replaced with an iteration, then why do we need recursion? Or, if recursion is good, why do we need iteration?

Recursion issues

Interestingly, the iterative algorithm below to calculate the sum of n numbers has the same complexity, O(n):

function sum(n) {
    var sum = 0;
    while (n != 0) {
        sum += n;
        n--;
    }
    return sum;
}
var r = sum(500000);
console.log(r);

Note: the equivalent closed-form code below has a time complexity of O(1):

function sum(n) {
    return n * (n + 1) / 2;
}

However, recursive algorithms have two specific advantages:

• They make the code look simple, neat, and intuitive.

• Some algorithms are very hard to express in an iterative way, for example, creating or traversing a tree data structure.

So, do the pros outweigh the cons? Not really. However, there are ways to optimize recursion, which we will discuss later.

Learn the memoization approach to solving a problem.
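As a preview, here is a minimal illustrative sketch of memoization (the fib function and memo cache are my own example, not from the course): results of recursive calls are cached so that each value is computed only once.

function fib(n, memo) {
    memo = memo || {};
    if (n <= 1) {
        return n;
    }
    if (memo[n] !== undefined) {
        return memo[n];              // answer already cached
    }
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
    return memo[n];
}

console.log(fib(40));   // runs in O(n) calls instead of exponential time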

Intro to Functional Programming

Functional Programming is a very interesting paradigm ( like OOP ) which looks at solving all the
requirements of the program using functions alone. Also, such functions do not mutate data
outside their functions and hence they are also called as “Pure Functions”.
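As a small illustrative sketch (not from the course material), the difference between an impure function and a pure one in JavaScript looks like this:

var total = 0;

// Impure: mutates a variable outside the function
function addToTotal(x) {
    total += x;
    return total;
}

// Pure: depends only on its arguments and mutates nothing outside itself
function add(a, b) {
    return a + b;
}

console.log(add(2, 3));      // always 5 for the same inputs
console.log(addToTotal(3));  // result depends on external state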

There are many languages which support the functional paradigm. Some of them are:

• Scala
• Python
• JavaScript
• Java 8+

Let us take a look at some of the important concepts behind this.

Higher order functions

Functional programming depends a lot on a concept called “higher order functions”. A higher order function is simply a function which can either accept another function as a parameter or return another function as its result.

Consider the below JavaScript code for example:

function hello() {
    console.log("Hello");
}
function greet(callback) {
    callback();
}

greet(hello);
Here, greet is a higher order function because it takes another function as a parameter, in this case the hello() function. The functions which are passed as arguments are also called “callback” functions.
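A higher order function can also return a function. As an illustrative sketch (the multiplier example is my own, not from the course):

// Higher order function that returns a new function
function multiplier(factor) {
    return function (x) {
        return x * factor;
    };
}

var double = multiplier(2);
var triple = multiplier(3);
console.log(double(5));   // 10
console.log(triple(5));   // 15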

Lambda expressions

Lambda expressions are essentially a shorter way to write functions.

For example, let us look at how to write a function which accepts two parameters and returns their product in different languages:

In Javascript:

(x, y) => x * y

In Java:

(x, y) -> x * y

In Python:

lambda x, y: x * y

The common observations are:

• These functions don’t have a name.

• These functions don’t need indentation or { } to indicate the function body for single-line functions.

• These functions don’t need a return statement for single-line functions; the expression is automatically returned.

Let us take a look at where these lambda functions are used extensively. Lambda expressions are primarily used with the following common higher order functions (HOFs):

Foreach – a HOF which invokes a callback for every value in the collection, passing the currently iterated value to the callback.

Map – a HOF which invokes the callback for every value in the collection, passing the currently iterated value to the callback. It collects the values returned by the callback into a new collection of the same size.

Filter – a HOF which invokes the callback (a predicate) for every value in the collection, passing the currently iterated value to the callback. It keeps only the values for which the callback returns true, producing a new collection that may be of a different size.

Let us look at examples of each of these in different languages

Functional Programming in Java


Let us have a look at code which uses functional programming in Java:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FunctionalExamples {

    public static void main(String[] args) {
        List<Integer> numList = Arrays.asList(10, 20, 30, 40, 50);

        System.out.println("===For Each===");
        numList.stream().forEach(num -> System.out.println(num));

        System.out.println("===For Each Alternative===");
        numList.stream().forEach(System.out::println);

        System.out.println("==Map===");
        List<Integer> newList = numList.stream().map(n -> n * 2).collect(Collectors.toList());
        System.out.println(newList);

        System.out.println("==Filter===");
        List<Integer> filteredList = numList.stream().filter(n -> n > 30).collect(Collectors.toList());
        System.out.println(filteredList);

        System.out.println("==Chaining===");
        List<Integer> chainedList = numList.stream().map(n -> n * 2).filter(n -> n > 30).collect(Collectors.toList());
        System.out.println(chainedList);
    }
}

Functional Programming in JavaScript

The code below is an example of functional programming in JavaScript:

let num_list = [10, 20, 30, 40, 50]
num_list.forEach(num => console.log(num));
console.log(num_list.map(num => num * 2));
console.log(num_list.filter(num => num > 30))

Functional Programming in Python

The example below shows functional programming in Python:

num_list = [10, 20, 30, 40, 50]
print("Map example")
print(list(map(lambda x: x * 2, num_list)))
print("Filter Example")
print(list(filter(lambda x: x > 30, num_list)))
print("Chain Example")
print(list(filter(lambda x: x > 30, map(lambda x: x * 2, num_list))))
