An algorithm is a set of instructions that produces the expected output for a given input. To judge the efficiency of an algorithm, we need an analysis tool.

There are two techniques to measure the efficiency of an algorithm.

**Time Complexity:** Time complexity is the time taken by an algorithm to execute.

**Space Complexity:** Space complexity is the amount of memory used by an algorithm while executing.

The analysis of algorithms aims to find the complexity of algorithms in terms of time and storage. In theoretical analysis, the analysis is performed based on the description of the algorithm, or the operation of the program or function.

The theoretical analysis takes all the possible input into consideration and assumes that the time taken for executing the basic operation is constant.

The primitive or basic operations that take constant time are given as follows.

- Declarations
- Assignments
- Arithmetic Operations
- Comparison statements
- Calling functions
- Return statement

Let us consider the following code.

```
i = 0           #declaration statement
count = 0       #declaration statement
while i < N:    #comparison statement
    count += 1  #arithmetic operation
    i += 1      #arithmetic operation
```

| Operations | Frequency |
| --- | --- |
| Declaration statements | 2 |
| Comparison statements | N |
| Arithmetic operations | 2N |

Since the arithmetic operations are within the while loop, each of them will be executed N times. Now the total number of operations performed is 2 + N + 2N = 3N + 2. When the value of N is large, the constant values don’t make any difference and are insignificant. Hence, we ignore the constant values and keep only the dominant term.


### Best, Average, and Worst cases

An algorithm can perform differently based on the given input. There are three cases to analyze an algorithm. They are **worst case, average case, and the best case.**

Let us consider the linear search algorithm. In linear search, we search for an element sequentially. In the following example, we perform a linear search on the list **list_1 = [4, 6, 7, 1, 5, 2].**

```
def linear_search(list_1, key):
    for i in range(0, len(list_1)):
        if key == list_1[i]:
            print(key, "is at index", i)

list_1 = [4, 6, 7, 1, 5, 2]
linear_search(list_1, 6)
#Output: 6 is at index 1
```

**Best case:** The **best case** is the minimum time required by an algorithm for the execution of a program. If we are searching for an element in a list and it is present at the **0th index**, then only 1 comparison is performed. The time complexity will be O(1), i.e. constant, which is the best case time.

**Worst case:** The **worst case** is the maximum time required by an algorithm for the execution of a program. If we are searching for an element in a list of length **“n”**, and it is not present in the list or is present at the last index, then **“n”** comparisons will be performed.

Therefore, the time complexity will be O(n), which is the worst case time.

**Average case:** In average case analysis, we take the average time of all the possible inputs. The average case time is given by,

`Average case time = All possible cases time / Number of cases`

If we are searching for an element and it is present at the first location, then it takes 1 unit of time. If it is present at the second location, then it takes 2 units of time. Similarly, if it is present at the nth location, then it takes **“n”** units of time.

The total number of possible cases will be **“n”**. Mathematically, we can write

```
Average time = (1 + 2 + 3 + ... + n)/n
             = (n(n + 1))/(2n)
             = (n + 1)/2
Assuming "n" is very large, we ignore the constant term and the
constant factor 1/2. Thus, the average case time grows as
Average time = O(n)
```
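As a sanity check, we can measure the average number of comparisons empirically (a minimal sketch; the helper `comparisons_to_find` is our own):

```python
def comparisons_to_find(list_1, key):
    # Count the comparisons linear search makes before finding the key.
    count = 0
    for item in list_1:
        count += 1
        if item == key:
            break
    return count

n = 100
data = list(range(n))
# Average over all n possible keys, each position equally likely.
avg = sum(comparisons_to_find(data, k) for k in data) / n
print(avg)   # (1 + 2 + ... + n)/n = (n + 1)/2 = 50.5
```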

The best, worst, and average case times for the linear search algorithm are

```
Best case B(n) = O(1)
Worst case W(n) = O(n)
Average case A(n) = O(n)
```

### Big O, Omega, and Theta Notations

**Big O, Omega, and theta notations** are also called **asymptotic notations.** The asymptotic analysis uses mathematical tools to measure the efficiency of an algorithm.

For example, consider an algorithm whose time complexity is **T(n) = n^2 + 2n + 5**. For large values of **n**, the part **2n + 5** is insignificant compared to the **n^2** part.

In asymptotic notation, we are concerned only with how the function grows as the input **n** grows, and the growth of **T(n)** is determined entirely by **n^2**. Therefore, as per asymptotic analysis, we ignore the insignificant part and the constants of an expression.

**Big O**

The big O notation measures the upper bound on the running time of an algorithm. Big O time complexity describes the worst case. Consider a function **f(n)**. We choose another function **g(n)** such that **f(n) <= c.g(n)**, for **n > n0** and **c > 0**. Here, **c** and **n0** represent constant values. If the inequality is satisfied, then **f(n) = O(g(n))**.

**Big Omega**

The big Omega notation measures the lower bound on the running time of an algorithm. Big Omega time complexity describes the best case. Consider a function **f(n)**. We choose another function **g(n)** such that **c.g(n) <= f(n)**, for **n > n0** and **c > 0**. Here, **c** and **n0** represent constant values. If the inequality is satisfied, then **f(n) = Omega(g(n))**.

**Big Theta**

The big Theta measures the time between the upper and the lower bound of an algorithm. Big Theta describes the time complexity within the bounds of the best and worst case.

Consider a function **f(n)**. We choose another function **g(n)** such that **c1.g(n) <= f(n) <= c2.g(n)**, for **n > n0** and **c1, c2 > 0**. Here, **c1**, **c2**, and **n0** represent constant values. If the inequalities are satisfied, then **f(n) = Theta(g(n))**.
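To make these definitions concrete, here is a small numeric check (our own illustration) that f(n) = n^2 + 2n + 5 satisfies the Theta definition with g(n) = n^2, c1 = 1, c2 = 2, and n0 = 3:

```python
def f(n):
    return n**2 + 2*n + 5   # the running time from the example above

def g(n):
    return n**2             # the candidate growth function

c1, c2, n0 = 1, 2, 3
# Verify c1*g(n) <= f(n) <= c2*g(n) for all n > n0,
# which means f(n) = Theta(n^2).
ok = all(c1*g(n) <= f(n) <= c2*g(n) for n in range(n0 + 1, 10**4))
print(ok)   # True
```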

We can analyze the efficiency of an algorithm based on its performance as the size of input grows. The time complexity of an algorithm is commonly expressed in Big O notation.

We use the worst-case time complexity because it ensures that the running time of an algorithm will not exceed the worst-case time. The performance classification is given as follows.

| Class | Time Complexity |
| --- | --- |
| Constant | O(1) |
| Logarithm | O(log N) |
| Linear | O(N) |
| Quasilinear | O(N log N) |
| Quadratic | O(N^2) |
| Cubic | O(N^3) |
| Exponential | O(2^N) |
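To get a feel for how these classes diverge, we can print each growth function for a few input sizes (a quick illustrative sketch; O(2^N) is left out because it becomes astronomically large even for modest N):

```python
import math

for N in [10, 100, 1000]:
    # One row per input size, one column per complexity class.
    row = (1, math.log2(N), N, N * math.log2(N), N**2, N**3)
    print(f"N={N}: O(1)={row[0]}, O(log N)={row[1]:.1f}, O(N)={row[2]}, "
          f"O(N log N)={row[3]:.0f}, O(N^2)={row[4]}, O(N^3)={row[5]}")
```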

### Graphical representation of different Time Complexities

### Time Complexity Examples

### O(1) – Constant Time Complexity

For the **constant time complexity,** the running time of an algorithm doesn’t change and remains constant irrespective of the size of the input data. Consider a list **list_1 = [4, 6, 7, 1, 5, 2]**. Now, accessing a specific element using indexing takes a constant amount of time.

```
list_1 = [4, 6, 7, 1, 5, 2]
print(list_1[4]) #accessing element at the 4th index.
#Output: 5
```

### O(log N) – Logarithm Time Complexity

In **logarithmic time complexity,** the running time of an algorithm is proportional to the logarithm of input size. For example, the binary search algorithm takes log(N) time.

Consider a sorted list **list_1 = [1, 2, 3, 6, 7].** We use binary search to find the key element 6. The binary search algorithm halves the search range in each iteration. Therefore, the time complexity of the algorithm is O(log N).

```
def binary_search(list_1, low, high, key):
    while low <= high:
        mid = low + (high - low)//2
        if list_1[mid] == key:
            return mid
        elif list_1[mid] < key:
            low = mid + 1
        else:
            high = mid - 1
    return "Not Found"

list_1 = [1, 2, 3, 6, 7]
low = 0
high = len(list_1) - 1
key = 6
ind = binary_search(list_1, low, high, key)
print("Element 6 is present at index: ", ind)
#Output: Element 6 is present at index: 3
```
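To see the logarithmic behavior directly, we can count the loop iterations (an instrumented variant of the search above; the helper name is ours):

```python
import math

def binary_search_steps(list_1, key):
    # Same loop as binary_search, but counting iterations instead.
    low, high, steps = 0, len(list_1) - 1, 0
    while low <= high:
        steps += 1
        mid = low + (high - low) // 2
        if list_1[mid] == key:
            return steps
        elif list_1[mid] < key:
            low = mid + 1
        else:
            high = mid - 1
    return steps

data = list(range(1024))
# A worst-case search needs about log2(1024) = 10 iterations.
print(binary_search_steps(data, 0))   # 10
```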

### O(N) – Linear Time Complexity

In **linear time complexity,** the running time of an algorithm increases linearly with the size of the input data. For example, adding all the elements of a list takes time proportional to the length of the list.

Consider a list **list_1 = [4, 6, 7, 1, 5, 2]**. In the following code, we initialize a variable **add** with 0. A for loop iterates over **list_1** and adds all the elements to the variable **add.**

```
add = 0
list_1 = [4, 6, 7, 1, 5, 2]
for i in range(0, len(list_1)):
    add += list_1[i]
print(add)
#Output: 25
```

### O(N^2) – Quadratic Time Complexity

In **quadratic time complexity,** the running time of an algorithm is proportional to the square of the input size. For example, consider a matrix of size N x N. Summing all the elements of the matrix will take **O(N^2)** time.

Consider a matrix **mat_1 = [[1, 2, 3], [1, 1, 1], [5, 7, 8]]**. We initialize a variable **add** with 0 and use a nested for loop to traverse each element of the matrix. In each iteration, we add the element to the variable **add.**

```
mat_1 = [[1, 2, 3], [1, 1, 1], [5, 7, 8]]
add = 0
for i in range(len(mat_1)):
    for j in range(len(mat_1[0])):
        add += mat_1[i][j]
print(add)
#Output: 29
```

### O(2^N) – Exponential Time Complexity

In **exponential time complexity,** the running time of an algorithm doubles with each addition to the input data. For example, computing a Fibonacci number using naive recursion takes exponential time.

```
def fib(n):
    if n < 0 or int(n) != n:
        return "Not defined"
    elif n == 0 or n == 1:
        return n
    else:
        return fib(n-1) + fib(n-2)

print(fib(4)) #prints the 4th Fibonacci number
#Output: 3
```

### Space Complexity

Space complexity is the memory consumed by an algorithm while executing. We can express the space complexity in big O notation, such as O(1), O(N), O(N log N), O(N^2), etc.

For example, the space complexity of a list of length N is O(N), and the space complexity of a matrix of size N x N is O(N^2).

In recursive algorithms, the stack space is also taken into account. For example, consider the following program. The function **my_func()** calls itself recursively, and each recursive call adds a frame to the stack memory. Hence, the space complexity is **O(N)**.

```
def my_func(num):
    if num > 0:
        my_func(num - 1)
        print(num, end = " ")

my_func(5)
```

However, N calls don’t always mean that the space complexity will be O(N). Let us consider two functions, **func_1** and **func_2.** In the function **func_1,** we call the function **func_2** N times. But all these calls are not added to the stack memory simultaneously. Thus, we need only **O(1)** space.

```
def func_1(n):
    for i in range(n):
        add = func_2(i)
        print(add)

def func_2(x):
    return x + 1
```

### Adding vs Multiplying Time Complexities

Let us look at the following example. There are two **for loops** (loop A and loop B), executing over ranges of M and N respectively.

In Example 1, loop B starts executing after the execution of loop A is completed. Hence, the time complexity will be O(M + N). In Example 2, for each iteration of loop A, loop B is executed. Hence, the time complexity will be O(M * N).

```
#Example 1
for i in range(M): #loop A
    print(i)
for j in range(N): #loop B
    print(j)

#Example 2
for i in range(M): #loop A
    for j in range(N): #loop B
        print(i, j)
```
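We can confirm both results by counting iterations directly (a small sketch with example sizes M = 3 and N = 4):

```python
M, N = 3, 4

# Example 1: loop B runs after loop A finishes -> M + N iterations.
count_add = 0
for i in range(M):   # loop A
    count_add += 1
for j in range(N):   # loop B
    count_add += 1

# Example 2: loop B runs for every iteration of loop A -> M * N iterations.
count_mul = 0
for i in range(M):       # loop A
    for j in range(N):   # loop B
        count_mul += 1

print(count_add, count_mul)   # 7 = M + N, 12 = M * N
```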

### Time Complexity of Recursive Calls

Consider the following example. We declare a user-defined function **my_func()** that takes a number **“n”** as a parameter and calls itself recursively.

```
def my_func(n): ----------->T(n)
    if n > 0: ----------->O(1)
        print(n) ----------->O(1)
        my_func(n-1) ----------->T(n-1)

my_func(5)
```

Let us assume that the total time taken by **my_func()** is **T(n).** Hence, the recursive call **my_func(n-1)** will take **T(n-1)** time. The other basic operations and statements take 1 unit of time each.

```
Now, we can write
T(n) = O(1) + O(1) + T(n-1)
T(n) = T(n-1) + O(1)
#Taking constant time as 1
T(n) = T(n-1) + 1
```

The statements inside the if block are executed if n > 0, and the time is T(n-1) + 1. If n = 0, only the conditional test is performed, which takes 1 unit of time. Thus, the recurrence relation for T(n) is given as

```
T(n) = T(n-1) + 1, if n > 0
= 1 , if n = 0
Now, T(n) = T(n-1) +1 -----------------(1)
Similarly, we can write
T(n-1) = T(n-2) + 1 -----------------(2)
T(n-2) = T(n-3) + 1 -----------------(3)
Substituting (2) in (1), we get
T(n) = T(n-2) + 2 ------------------(4)
Substituting (3) in (4), we get
T(n) = T(n-3) + 3 ------------------(5)
If we continue this for k times, then
T(n) = T(n-k) + k -----------------(6)
We know that T(0) = 1,
so, we assume n - k = 0
Thus, k = n
By substituting the value of k in (6) we get
T(n) = T(n-n) + n
T(n) = T(0) + n
T(n) = 1 + n
Hence, we can say T(n) will be taking O(n) time
```
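We can confirm the result T(n) = n + 1 by counting the calls directly (an instrumented variant of my_func, for illustration only):

```python
calls = 0

def my_func(n):
    global calls
    calls += 1        # each invocation costs one unit here
    if n > 0:
        print(n)
        my_func(n - 1)

my_func(5)
print(calls)   # T(5) = 5 + 1 = 6 calls, i.e. O(n)
```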

### Measuring the Recursive Algorithm that takes Multiple Calls

Consider the example given below. We define a function **f(n)** that takes a number **n** as input and makes 2 recursive calls.

```
def f(n):
    if n <= 1:
        return 1
    return f(n-1) + f(n-2)

f(4)
```

Consider each recursive call as a node of a binary tree.

- At level 0 we have 1 node, which can be expressed as 2^0.
- At level 1 we have 2 nodes, which can be expressed as 2^1.
- At level 2 we have 4 nodes, which can be expressed as 2^2.
- At level 3 we have 8 nodes, which can be expressed as 2^3.

We know that **2^0 + 2^1 + 2^2 + … + 2^n = 2^(n+1) - 1**. Hence, ignoring the insignificant and constant terms, the time complexity will be **O(2^n)**.

When multiple recursive calls are made, we can represent the time complexity as **O(branches^depth)**. Here, branches represents the number of children of each node, i.e. the number of recursive calls made in each invocation, and depth represents the depth of the recursion tree, i.e. the parameter of the recursive function.
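Counting the actual calls of f(n) shows that the call count indeed stays within the 2^(n+1) - 1 bound (an instrumented variant, for illustration):

```python
def f(n, counter):
    counter[0] += 1   # count this invocation
    if n <= 1:
        return 1
    return f(n - 1, counter) + f(n - 2, counter)

counter = [0]
f(9, counter)
# The call count stays below the 2^(n+1) - 1 bound from the sum above.
print(counter[0], 2**10 - 1)   # 109 1023
```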

### Big O Interview Questions

### Sum and Product

What is the runtime of the following code?

```
def func(array):
    sum = 0
    product = 1
    for i in array:
        sum += i
    for i in array:
        product *= i
    print("Sum = " + str(sum), "Product = " + str(product))
```

Solution:

```
def func(array):
    sum = 0 ------------------------------------------------>1
    product = 1 --------------------------------------------->1
    for i in array: ----------------------------------------->N
        sum += i -------------------------------------------->1 * N = N
    for i in array: ----------------------------------------->N
        product *= i ---------------------------------------->1 * N = N
    print("Sum = " + str(sum), "Product = " + str(product)) ->1
```

The statements **sum += i** and **product *= i** are within the for loops, therefore each is executed **N** times. Hence, the time complexity = 1 + 1 + N + N + N + N + 1 = 4N + 3 = O(N).

```
Time complexity = 1 + 1 + N + N + N + N + 1
= 4N +3
= O(N).
```

### Print Pairs

What is the runtime of the below code?

```
def print_pairs(array):
    for i in array:
        for j in array:
            print(str(i) + " " + str(j))
```

Solution:

```
def print_pairs(array):
    for i in array: -------------------------N
        for j in array: ---------------------N * N = N^2
            print(str(i) + " " + str(j)) ----1 * N * N = N^2
```

Time Complexity = N + 2N^2 = O(N^2)

### Print Unordered Pairs

What is the runtime of the below code?

```
def printUnordered(array):
    for i in range(0, len(array)):
        for j in range(i+1, len(array)):
            print(array[i], array[j])
```

**Solution:**

The outer loop executes for N times.

The inner loop executes:

- in the 1st iteration -> (N-1) times
- in the 2nd iteration -> (N-2) times
- and so on, until it executes just 1 time in the last iteration

Hence, the total time for inner loop is

```
T(n) = (N-1) + (N-2) + ... + 3 + 2 + 1
T(n) = 1 + 2 + 3 + ... + (N-2) + (N-1)
T(n) = (N*(N-1))/2
Ignoring the constant factor and the lower-order term, we get
T(n) = O(N^2)
```

The total time complexity = N + N^2 = O(N^2)
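We can verify the N(N-1)/2 count by counting the loop iterations (a small helper of our own):

```python
def count_pair_prints(array):
    # Count how many times printUnordered's print statement would run.
    count = 0
    for i in range(0, len(array)):
        for j in range(i + 1, len(array)):
            count += 1
    return count

N = 10
print(count_pair_prints(list(range(N))))   # N*(N-1)/2 = 45
```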

### Print Unordered Pairs of two different Arrays

What is the runtime of the below code?

Code 1:

```
def printUnordered(arr_A, arr_B):
    for i in range(len(arr_A)):
        for j in range(len(arr_B)):
            if arr_A[i] < arr_B[j]:
                print(arr_A[i], arr_B[j])
```

Solution: Assume the **length of arr_A = M** and the **length of arr_B = N**. Then, the outer **for loop** executes **M times** and the inner **for loop** executes **M*N times.** Hence, the **time complexity = M + (M*N) = O(M*N).**

Code 2:

```
def printUnordered(arr_A, arr_B):
    for i in range(len(arr_A)):
        for j in range(len(arr_B)):
            for k in range(0, 100000):
                print(arr_A[i], arr_B[j])
```

Solution: Assume the **length of arr_A = M** and the **length of arr_B = N**. Then, the first **for loop** executes **M times** and the second **for loop** executes **M*N times.** The third for loop takes **100000 * M * N** units of time.

**Time complexity = M + (M*N) + (100000 * M * N) = O(M*N).**

### Reversing an array

What is the runtime of the below code?

```
def reverse(array):
    for i in range(len(array)//2):
        other = len(array) - i - 1
        temp = array[i]
        array[i] = array[other]
        array[other] = temp
    print(array)
```

Solution:

```
def reverse(array):
    for i in range(len(array)//2): ----- N/2
        other = len(array) - i - 1 ----- N/2
        temp = array[i] ---------------- N/2
        array[i] = array[other] -------- N/2
        array[other] = temp ------------ N/2
    print(array) ----------------------- 1
```

The for loop executes for (N/2) times. Hence, all the statements inside the for loop will also execute for N/2 times. In the end, the print statement takes 1 unit of time.

**Time complexity = N/2 + N/2 + N/2 + N/2 + N/2 + 1 = O(N).**

### Equivalents of O(N)

Which of the following are equivalents of O(N) and why?

- O(N + P), where P < N/2
- O(2N)
- O(N + logN)
- O(N + NlogN)
- O(N + M)

Solution:

- In the time complexity O(N + P), P is less than N/2. Here, N is the dominant term. So, we can write time complexity O(N + P) as O(N). Thus, O(N + P) is an equivalent of O(N).
- As per asymptotic notations, we ignore the constant values and insignificant terms. So, we can write O(2N) as O(N). Hence, O(2N) is an equivalent of O(N).
- We know that logN < N. Here, N is the dominant term. So, we can write the time complexity O(N + logN) as O(N). Thus, O(N + logN) is an equivalent of O(N).
- In the time complexity O(N + NlogN), the NlogN > N. Here, NlogN is the dominant term. Hence, we cannot represent the time O(N + NlogN) as an equivalent of O(N).
- In the time complexity O(N + M), there is no relationship between M and N. Therefore, we need to keep both variables. Thus, we cannot represent O(N + M) as an equivalent of O(N).
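A quick numeric check (our own illustration) shows which terms dominate: a ratio against N that stays bounded by a constant means the expression is still O(N), while a ratio that keeps growing means it is not.

```python
import math

N = 10**6
P = N // 3   # an example value satisfying P < N/2

print((N + P) / N)                 # bounded by a constant -> O(N)
print((N + math.log2(N)) / N)      # approaches 1 -> O(N)
print((N + N * math.log2(N)) / N)  # grows with N -> not O(N)
```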

### Factorial Complexity

What is the runtime of the below code?

```
def factorial(n):
    if n < 0:
        return -1
    elif n == 0:
        return 1
    else:
        return n * factorial(n-1)
```

Solution:

```
def factorial(n): ------------------->T(n)
    if n < 0:
        return -1 ------------------>O(1)
    elif n == 0:
        return 1 ------------------->O(1)
    else:
        return n * factorial(n-1) -->T(n-1)
```

We can write the recurrence relation for the above code as,

```
T(n) = T(n-1) + 1, if n > 0
= 1 , if n = 0
Now, T(n) = T(n-1) +1 -----------------(1)
Similarly, we can write
T(n-1) = T(n-2) + 1 -----------------(2)
T(n-2) = T(n-3) + 1 -----------------(3)
Substituting (2) in (1), we get
T(n) = T(n-2) + 2 ------------------(4)
Substituting (3) in (4), we get
T(n) = T(n-3) + 3 ------------------(5)
If we continue this for k times, then
T(n) = T(n-k) + k ----------------(6)
We know that T(0) = 1
Assume n - k = 0
Hence, k = n
Substituting k = n, we can write T(n) as
T(n) = 1 + n
T(n) = O(n)
```

The time complexity of calculating factorial using recursion is O(n).

### Fibonacci Complexity

What is the runtime of the below code?

```
def allFib(n):
    for i in range(n):
        print(str(i) + ":" + str(fib(i)))

def fib(n):
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    return fib(n-1) + fib(n-2)
```

**Solution:** In the above code, the first function allFib() prints the Fibonacci number for each value, and the second function fib() computes the Fibonacci number for a given input.

We know that we can measure the time complexity of multiple recursive calls by O(branches^{depth}). Here, branches represent the number of children for each node i.e. number of recursive calls in each iteration, and depth represents the parameter in the recursive function.

**The time taken by fib(n) is O(2^n), as the function fib(n) makes 2 recursive calls and the depth of the recursion is n.**

Now, the function allFib(n) calls the function fib(n) in each iteration.

- In the 1st iteration, the time complexity of calling fib(0) is 2^0.
- In the 2nd iteration, the time complexity of calling fib(1) is 2^1.
- Similarly, in the last iteration, the time complexity of calling fib(n-1) is 2^(n-1).

The total time complexity will be 2^0 + 2^1 + 2^2 + … + 2^(n-1) = 2^n - 1 = O(2^n).

### Powers of 2

What is the runtime of the below code?

```
def powersof2(n):
    if n < 1:
        return 0
    elif n == 1:
        return 1
    else:
        prev = powersof2(int(n/2))
        curr = prev * 2
        print(curr)
        return curr
```

**Solution:**

```
def powersof2(n):
    if n < 1:
        return 0 ------------------------O(1)
    elif n == 1:
        return 1 ------------------------O(1)
    else:
        prev = powersof2(int(n/2))
        curr = prev * 2 -----------------O(1)
        print(curr) ---------------------O(1)
        return curr ---------------------O(1)
```

In each recursive call, we reduce the input by half. Hence, the time complexity is O(log N).
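We can confirm the O(log N) behavior by counting the recursive calls (an instrumented variant of powersof2 with the print removed; the counter is ours):

```python
calls = 0

def powersof2_counted(n):
    # Same recursion as powersof2, instrumented to count calls.
    global calls
    calls += 1
    if n < 1:
        return 0
    elif n == 1:
        return 1
    prev = powersof2_counted(int(n / 2))
    return prev * 2

powersof2_counted(1024)
print(calls)   # log2(1024) + 1 = 11 calls, i.e. O(log N)
```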