Big O Notation in Data Structures: Everything to Know
Updated on Sep 26, 2022 | 7 min read | 7.1k views
Big O notation in data structures is used to describe the efficiency of an algorithm: how much time the algorithm takes to run as the input grows, and how well it scales. This efficiency is measured along two dimensions, namely space complexity and time complexity.
Big O notation is a mathematical notation that describes the limiting behavior of a function as its argument tends towards a particular value or infinity. It belongs to a family of notations invented by Paul Bachmann, Edmund Landau, and others, and is therefore collectively termed Bachmann–Landau notation or asymptotic notation.
Formally, let f(n) and g(n) be two functions defined on some unbounded set of positive real numbers, with g(n) strictly positive for all sufficiently large values of n. We then write:
f(n) = O(g(n)) as n tends to infinity (n → ∞)
In practice, the assumption that n tends to infinity is usually left unstated, so the expression is commonly abbreviated as:
f(n) = O(g(n))
Here, f and g are functions from the positive integers to the non-negative real numbers.
Big O notation thus characterizes the growth of f(n) for large values of n.
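As a concrete illustration (a worked example, not from the original article), consider f(n) = 3n² + 5n + 2. For every n ≥ 1 we have 3n² + 5n + 2 ≤ 3n² + 5n² + 2n² = 10n², so f(n) = O(n²) with g(n) = n²: the lower-order terms and constant factors are absorbed into the bound.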
Big O notation in data structures has a few essential properties. One worth highlighting concerns logarithms:
In Big O analysis, every logarithmic function grows at the same rate, because logarithms of different bases differ only by a constant factor (log_b n = log n / log b). Hence O(log₂ n), O(log₁₀ n), and O(ln n) are all written simply as O(log n).
When analyzing the performance of an algorithm, comparisons are usually drawn using the worst-case running time. The order O(1), known as constant running time, is the fastest possible: the algorithm takes the same amount of time for any input size. Constant running time is the ideal, but it is rarely achieved in practice, because most algorithms' runtimes depend on the input size n.
For example:
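Below is a minimal C sketch of an O(1) operation (the function name get_element is illustrative, not from the original article): indexing into an array takes the same time whether the array holds ten elements or ten million.

/* O(1): accessing an element by index takes constant time,
   independent of the array's size. */
int get_element(const int array[], int index)
{
    return array[index];
}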
As mentioned above, an algorithm's runtime performance depends heavily on the input size n. The table below illustrates this with a few mathematical examples, showing how the number of operations grows for various sizes of n:
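Approximate operation counts for common complexity classes (an illustrative table, not from the original article; logarithms are base 2, values rounded):

n        O(log n)   O(n)     O(n log n)   O(n^2)       O(2^n)
10       ~3         10       ~33          100          ~10^3
100      ~7         100      ~664         10,000       ~10^30
1,000    ~10        1,000    ~10,000      1,000,000    ~10^301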
The runtime performance of any algorithm is estimated in the same way: identify how the number of operations grows with n and keep only the dominant term.
Here are a few other algorithmic examples of runtime analysis:
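A sketch of two classic cases (these specific functions are illustrative, not from the original article): linear search runs in O(n) time because it may scan every element, while binary search runs in O(log n) time because it halves the search range at each step.

/* O(n): linear search may scan every element in the worst case. */
int linear_search(const int array[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (array[i] == key)
            return i;          /* found at index i */
    return -1;                 /* not found */
}

/* O(log n): binary search halves the range on each iteration.
   The array must already be sorted in ascending order. */
int binary_search(const int array[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo + hi) */
        if (array[mid] == key)
            return mid;
        else if (array[mid] < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;
}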
Determining both the space complexity and the runtime complexity of an algorithm is an essential step. Runtime analysis tells us how long the algorithm takes to execute, while space-complexity analysis tells us how much memory it consumes. As with runtime, algorithms are usually compared by their worst-case space complexity.
To determine the space complexity of an algorithm, we must complete these two tasks:
Task 1: Implement the program for the particular algorithm.
Task 2: Determine the size of the input n, so that the memory each item will hold can be measured.
Both tasks must be completed before the space complexity of the algorithm can be calculated.
There are many examples of algorithms with different space complexities; two contrasting cases are sketched below for a better understanding:
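A minimal, illustrative C sketch (these functions are not from the original article): sum_array uses O(1) auxiliary space because it needs only a fixed number of variables, while reversed_copy uses O(n) auxiliary space because it allocates a new array that grows with the input.

#include <stdlib.h>

/* O(1) auxiliary space: only a fixed number of variables are used,
   regardless of the input size n. */
int sum_array(const int array[], int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += array[i];
    return total;
}

/* O(n) auxiliary space: the copy grows linearly with the input. */
int *reversed_copy(const int array[], int n)
{
    int *copy = malloc(n * sizeof *copy);   /* n extra integers */
    if (copy == NULL)
        return NULL;
    for (int i = 0; i < n; i++)
        copy[i] = array[n - 1 - i];
    return copy;                            /* caller must free() */
}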
Big O notation is primarily used in Computer Science to describe the complexity or performance of an algorithm. It lets us classify algorithms by how their execution time or memory requirements grow as the input becomes large. It is not designed to predict actual memory usage or execution time, but to compare algorithms and select the best one for the job. The notation is not language-specific; below, it is illustrated in C.
Below is the selection sort algorithm in C, for which the worst-case complexity (in Big O notation) is then calculated. (The original snippet has been wrapped in a function, here named selection_sort, to make it self-contained.)
/* Selection sort: on each pass, find the minimum of the unsorted
   suffix and swap it into position i. */
void selection_sort(int array[], int n)
{
    for (int i = 0; i < n; i++)
    {
        int min = i;                        /* index of smallest element found */
        for (int j = i + 1; j < n; j++)     /* scan the unsorted suffix */
        {
            if (array[j] < array[min])
                min = j;
        }
        int temp = array[i];                /* swap minimum into place */
        array[i] = array[min];
        array[min] = temp;
    }
}
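A short driver (illustrative, not from the original article; it assumes the selection_sort wrapper shown above) demonstrates the function in use:

#include <stdio.h>

int main(void)
{
    int data[] = { 29, 10, 14, 37, 13 };
    int n = sizeof data / sizeof data[0];

    selection_sort(data, n);

    for (int i = 0; i < n; i++)
        printf("%d ", data[i]);   /* prints: 10 13 14 29 37 */
    printf("\n");
    return 0;
}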
To analyze the algorithm: the outer loop runs n times, and for each value of i the inner loop performs about n − i comparisons. The total number of comparisons is therefore (n − 1) + (n − 2) + … + 1 = n(n − 1)/2, which grows proportionally to n². Hence, the worst-case complexity of selection sort is O(n²).
Other algorithms can be implemented in C in the same manner, and their complexities analyzed and determined similarly.
There are two main areas where Big O notation is applied: mathematical analysis, where it describes how closely a series or formula approximates a function, and computer science, where it is used in the analysis of algorithms.
In both applications, the function g(x) appearing within the O(·) is typically chosen to be as simple as possible, omitting constant factors and lower-order terms.
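As a simple illustration of this simplification (a worked example, not from the original article), a running time of T(n) = 6n⁴ − 2n³ + 5 is written as T(n) = O(n⁴): the constant factor 6, the lower-order term −2n³, and the constant 5 are all dropped, since 6n⁴ − 2n³ + 5 ≤ 13n⁴ for all n ≥ 1.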
There are also two usages of this notation that are formally close but noticeably different: infinite asymptotics (behavior as the argument grows without bound) and infinitesimal asymptotics (behavior as the argument approaches a finite value, typically zero).
This distinction is in application only and not in principle, however: the formal definition of Big O is the same for both cases, with only the limits for the function argument differing.
In conclusion, Big O notation plays an integral role in data structures, and in-depth, comprehensive knowledge of it is an excellent skill set to possess. It is in high demand in the job sector and can be a great foundation for a career path. upGrad’s Advanced Certificate Programme in Big Data will give you the leverage you need to boost your career. It introduces top professional skills such as Data Processing with PySpark, Data Warehousing, MapReduce, Big Data Processing on the AWS Cloud, and Real-time Processing.