Big O Notation in Data Structures
Asymptotic analysis is the study of how an algorithm’s performance changes as the size of its input grows. Asymptotic notations bound the growth of a running time to within constant factors (big-O bounds it from above). An algorithm’s efficiency is determined by the amount of time, storage, and other resources it requires, and asymptotic notations are used to express that efficiency. An algorithm’s performance may also vary for different types of inputs, and it will change as the input size grows larger.
Asymptotic notations describe how long an algorithm takes to execute as the input approaches a certain value or a limiting value. For example, when the input array is already sorted, a sorting algorithm such as insertion sort takes linear time, which is the best case. When the input array is in reverse order, it takes the longest (quadratic) time to sort the items, which is the worst case. When the input array is neither sorted nor reversed, it takes an average amount of time. Asymptotic notations are used to represent all of these running times.
Big O notation classifies functions based on their growth rates: several functions with the same growth rate can be written using the same O notation. The symbol O is used because a function’s growth rate is also known as the order of the function. A big O description of a function generally gives only an upper bound on the function’s growth rate.
It would be convenient to have a form of asymptotic notation that means “the running time grows at most this much, but it could grow more slowly.” We use “big-O” notation for just such occasions.
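This upper-bound idea can be stated precisely with the standard formal definition of big-O:

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ge 0 :
\quad 0 \le f(n) \le c \cdot g(n) \ \text{ for all } n \ge n_0
```

In words: beyond some input size n_0, f(n) never exceeds g(n) scaled by a fixed constant c, which is exactly the "grows at most this much" guarantee described above.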
Advantages of Big O Notation
- Asymptotic analysis is very useful for examining an algorithm’s efficiency across a range of input sizes. If we instead measured performance manually by running test cases on various inputs, the results would vary as the algorithm’s inputs change.
- An algorithm’s measured performance also varies when it is executed on different computers. We therefore prefer an algorithm whose performance does not change much as the number of inputs increases, and a mathematical representation gives a clear understanding of the upper and lower bounds of an algorithm’s run-time.
Now let us take a deeper look at the Big O notation of a few examples:
A function that does a fixed amount of work runs in O(1) time (or “constant time”) regardless of its input. The input array could have 1 item or 1,000 items, but the function would still require just one step.
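Such a constant-time function might look like this (a minimal Python sketch; the function name is hypothetical):

```python
def get_first(items):
    # Indexing a list is a single step, no matter how long the
    # list is, so this function runs in O(1) time.
    return items[0]
```

Whether `items` holds one element or a million, only one operation is performed.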
A function that loops over its input once runs in O(n) time (or “linear time”), where n is the number of items in the array. If the array has 10 items, we print 10 times; if it has 1,000 items, we print 1,000 times.
Nesting two loops over the same array gives quadratic behavior. If the array has n items, the outer loop runs n times, and the inner loop runs n times for each iteration of the outer loop, giving n^2 total prints. If the array has 10 items, we print 100 times; if it has 1,000 items, we print 1,000,000 times. Such a function runs in O(n^2) time (or “quadratic time”).
A classic example of an O(2^n) function is the naive recursive calculation of Fibonacci numbers. O(2^n) denotes an algorithm whose work roughly doubles with each addition to the input data set. The growth curve of an O(2^n) function is exponential: starting off very shallow, then rising meteorically.
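The naive recursive Fibonacci calculation looks like this in Python:

```python
def fib(n):
    # Each call spawns two further recursive calls, so the number
    # of calls roughly doubles as n grows: exponential time,
    # bounded above by O(2**n).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

This is why even modest inputs (say n around 40) become noticeably slow, while a loop-based version would finish instantly.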
In this article, we covered what Big O notation in data structures is and how we can use it in daily practice to understand the time complexity of the code we routinely write.