
Big O Notation in C

by Online Tutorials Library

In Data Structures and Algorithms in C programming, we have studied many algorithms and examined their different aspects and purposes. We have also studied the complexity of an algorithm: how to analyze and calculate it, how to find its time and space complexity, and we have concluded that the algorithm with lower time and space complexity is considered the better one. We have also seen how to find the best case, worst case, and average case of an algorithm. To analyze and represent all such complexities, the concept of Asymptotic Notation is used, and it comes in several forms. One such form is Big O Notation.

In this section, we will discuss Big O notation and briefly introduce asymptotic notations and their types.

What are Asymptotic Notations?

These are the mathematical notations used for the asymptotic analysis of algorithms. The term 'asymptotic' describes an expression in which a variable's value tends to infinity; in short, it is a way of describing the limiting behavior of an expression. Thus, using asymptotic notations, we analyze the complexities of an algorithm and express its performance once the analysis is done. There are three asymptotic notations through which we can analyze the complexities of algorithms:


  • Big O Notation (O): It represents the upper bound of the runtime of an algorithm. Big O notation's role is to bound the longest time an algorithm can take to execute, i.e., it is used for describing the worst-case time complexity of an algorithm.
  • Omega Notation (Ω): It represents the lower bound of the runtime of an algorithm. It is used for describing the shortest time an algorithm can take to complete its execution, i.e., it is used for measuring the best-case time complexity of an algorithm.
  • Theta Notation (Θ): It combines the characteristics of both Big O and Omega notations, as it represents both a lower and an upper bound on an algorithm's runtime, i.e., a tight bound.

So, these three asymptotic notations are the most commonly used, but besides these, there are other commonly encountered orders of growth, such as linear, logarithmic, cubic, and many more.
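As a small illustrative sketch (the function name and data are made up for this example), consider a linear search in C: if the key sits at the first position, the routine finishes after one comparison (its best case is Ω(1)), and if the key is absent, it must examine all n elements (its worst case is O(n)), so the different notations describe different bounds on the same routine.

#include <stdio.h>

/* Illustrative sketch: linear search over an int array.
 * Best case  (key found at index 0): bounded below by Omega(1).
 * Worst case (key absent):           bounded above by O(n).     */
int linear_search(const int arr[], int n, int key)
{
    for (int i = 0; i < n; i++) {
        if (arr[i] == key)
            return i;   /* found after i + 1 comparisons */
    }
    return -1;          /* not found: all n elements were compared */
}

int main(void)
{
    int data[] = {4, 8, 15, 16, 23, 42};
    int n = (int)(sizeof data / sizeof data[0]);
    printf("index of 23: %d\n", linear_search(data, n, 23));
    printf("index of 7 : %d\n", linear_search(data, n, 7));
    return 0;
}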

Big O Notation

The Big O notation is used to express the upper bound of the runtime of an algorithm and thus to measure the worst-case time complexity of the algorithm. It bounds, as a function of the input size, the time (or the amount of memory) required for the execution of an algorithm.

Mathematically,

Consider a function f(n) and another function g(n), both defined on some unbounded set of positive real numbers, where g(n) is strictly positive for all large values of n. We write:

f(n) = O(g(n)) as n tends to infinity (n → ∞)

if there exist a positive constant c and a value n₀ such that f(n) ≤ c · g(n) for all n ≥ n₀. Since the assumption that n tends to infinity is usually left unstated, we can simply write the above expression as:

f(n) = O(g(n))

Here, f and g are functions from the positive integers to the non-negative real numbers.

Thus, the Big O bound refers to the behavior of f(n) for large values of n.
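For example, take f(n) = 3n² + 5n + 2 and g(n) = n². Since 3n² + 5n + 2 ≤ 4n² for all n ≥ 6, the condition holds with c = 4 and n₀ = 6, so 3n² + 5n + 2 = O(n²).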

Properties of Big O Notation

Certain essential properties of Big O Notation are discussed below:

  • Constant Multiplication:
    If f(n) = c · g(n), then O(f(n)) = O(g(n)), where c is a nonzero constant.
  • Summation Function:
    If f(n) = f₁(n) + f₂(n) + … + fₘ(n) and fᵢ(n) ≤ fᵢ₊₁(n) for all i = 1, 2, …, m,
    then O(f(n)) = O(max(f₁(n), f₂(n), …, fₘ(n))).
  • Polynomial Function:
    If f(n) = a₀ + a₁·n + a₂·n² + … + aₘ·nᵐ,
    then O(f(n)) = O(nᵐ).
  • Logarithmic Function:
    If f(n) = log_a(n) and g(n) = log_b(n),
    then O(f(n)) = O(g(n)).

Here, in terms of Big O, all logarithmic functions grow in the same manner, since logarithms to different bases differ only by a constant factor.
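For example, by the polynomial rule, if f(n) = 7n³ + 3n² + 100n + 5, then O(f(n)) = O(n³): the leading constant and the lower-order terms do not affect the order. Likewise, log₁₀(n) and log₂(n) differ only by the constant factor 1/log₂(10) ≈ 0.301, so both are O(log n).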

How does Big O Notation perform the runtime analysis of an algorithm?

For analyzing an algorithm's performance, we calculate and compare the worst-case running time complexities of the algorithm. The order O(1), known as Constant Running Time, is the fastest running time for an algorithm: the time taken by the algorithm is the same for every input size. Constant running time is the ideal runtime for an algorithm, but it is achieved only rarely, because the runtime of most algorithms depends on the input size n.
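For instance, a routine that only returns the first element of an array does the same amount of work no matter how large the array is; a minimal illustrative sketch:

/* Illustrative sketch: constant running time, O(1).
 * The cost of this function does not depend on the array length n. */
int first_element(const int arr[], int n)
{
    (void)n;    /* n is not used: the work done is independent of the input size */
    return arr[0];
}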

For example:

As we know, the runtime performance of an algorithm depends on the input size n. Let's look at some numerical examples of how common growth functions behave for different sizes of n (using base-2 logarithms, as is conventional in algorithm analysis; a small C sketch that reproduces these values follows the list):

  • n = 10
    log₂(10) ≈ 3.32;
    10 = 10;
    10 log₂(10) ≈ 33.2;
    10² = 100;
    2¹⁰ = 1024;
    10! = 3628800;
  • n = 20
    log₂(20) ≈ 4.32;
    20 = 20;
    20 log₂(20) ≈ 86.4;
    20² = 400;
    2²⁰ = 1048576;
    20! ≈ 2.432902 × 10¹⁸;
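The values above can also be reproduced in C; here is a minimal sketch (it assumes a C99 compiler and linking with -lm for the math library):

#include <stdio.h>
#include <math.h>

/* Compute n! in double precision (sufficient for small n such as 10 or 20). */
static double factorial(int n)
{
    double f = 1.0;
    for (int i = 2; i <= n; i++)
        f *= i;
    return f;
}

int main(void)
{
    int sizes[] = {10, 20};
    for (int k = 0; k < 2; k++) {
        int n = sizes[k];
        printf("n = %d\n", n);
        printf("  log2(n)   = %.2f\n", log2((double)n));
        printf("  n         = %d\n", n);
        printf("  n*log2(n) = %.2f\n", n * log2((double)n));
        printf("  n^2       = %.0f\n", (double)n * n);
        printf("  2^n       = %.0f\n", pow(2.0, n));
        printf("  n!        = %g\n", factorial(n));
    }
    return 0;
}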

Thus, in a similar way, we estimate the runtime performance of an algorithm. Let's look at some algorithmic examples and the runtime analysis of those algorithms:

  • For Linear Search, the runtime complexity is O(n).
  • For Binary Search, the runtime complexity is O(log n) (a sketch appears after this list).
  • For Bubble Sort, Selection Sort, Insertion Sort, and Bucket Sort, the worst-case runtime complexity is O(n^2), i.e., polynomial (O(n^c)).
  • For exponential algorithms such as Tower of Hanoi, the runtime complexity is O(c^n).
  • For Heap Sort and Merge Sort, the runtime complexity is O(n log n).
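To illustrate the O(log n) entry, here is a minimal iterative binary search sketch in C (it assumes the array is already sorted in ascending order):

/* Minimal sketch: iterative binary search on a sorted int array.
 * Each iteration halves the remaining interval, so the worst case
 * performs about log2(n) iterations, i.e. the runtime is O(log n). */
int binary_search(const int arr[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  /* written this way to avoid overflow */
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] < key)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return -1;  /* key not present */
}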

How does Big O Notation analyze the space complexity?

It is essential to determine both the runtime and the space complexity of an algorithm. By analyzing the runtime performance of the algorithm, we learn how much execution time it takes, and by analyzing its space complexity, we learn how much memory it occupies. Thus, for measuring the space complexity of an algorithm, we compare its worst-case space requirements.

In order to determine the space complexity of an algorithm, the following two tasks need to be done:

Task 1: Implement the program for the particular algorithm.

Task 2: Know the size of the input n and the memory each item will occupy.

Both tasks must be accomplished first; only then can we calculate the space complexity of the algorithm.
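As an illustrative sketch of the difference (both function names are made up for this example), compare a routine that uses only a fixed number of variables with one that allocates an extra array proportional to the input size:

#include <stdlib.h>

/* O(1) auxiliary space: only a fixed number of scalar variables,
 * regardless of the input size n. */
long sum_of_array(const int arr[], int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++)
        sum += arr[i];
    return sum;
}

/* O(n) auxiliary space: allocates a buffer whose size grows with n.
 * The caller is responsible for freeing the returned pointer. */
int *reversed_copy(const int arr[], int n)
{
    int *copy = malloc((size_t)n * sizeof *copy);
    if (copy == NULL)
        return NULL;
    for (int i = 0; i < n; i++)
        copy[i] = arr[n - 1 - i];
    return copy;
}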

Examples of Algorithms

Below we have mentioned some algorithmic examples with their space complexities:

  • For Linear Search, Bubble Sort, Selection Sort, Heap Sort, Insertion Sort, and (iterative) Binary Search, the space complexity is O(1).
  • For Radix Sort, the space complexity is O(n + k).
  • For Quick Sort, the space complexity is O(log n) on average (O(n) in the worst case, due to the recursion stack).
  • For Merge Sort, the space complexity is O(n).

Example of Big O Notation in C

Below we have implemented the selection sort algorithm in C and calculated the worst-case complexity (Big O notation) of the algorithm:
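A selection sort along the lines described here (a minimal sketch; the original listing may differ in details such as variable names):

#include <stdio.h>

/* Selection sort: on each pass, find the smallest element of the
 * unsorted part and swap it into position i.                      */
void selection_sort(int arr[], int n)
{
    for (int i = 0; i < n; i++) {            /* outer loop: runs while i < n */
        int min = i;
        for (int j = i + 1; j < n; j++) {    /* inner loop: runs while j < n */
            if (arr[j] < arr[min])
                min = j;
        }
        if (min != i) {                      /* swap the minimum into place */
            int tmp = arr[i];
            arr[i] = arr[min];
            arr[min] = tmp;
        }
    }
}

int main(void)
{
    int data[] = {29, 10, 14, 37, 13};
    int n = (int)(sizeof data / sizeof data[0]);
    selection_sort(data, n);
    for (int i = 0; i < n; i++)
        printf("%d ", data[i]);
    printf("\n");
    return 0;
}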

In order to analyze the algorithm:

  • We can see that the outer for loop runs while i < n, so its order is O(n).
  • The inner for loop is also O(n), since it runs while j < n.
  • On average the inner loop executes about n/2 times, but we ignore the constant factor, so its order remains O(n).
  • Multiplying the orders of the inner and outer loops, we get the runtime complexity O(n^2).

You can implement other algorithms in C, analyze them, and determine their complexities in a similar way.

