
Understanding Binary Search Time Complexity: Explore All Cases

By Pavan Vadapalli

Updated on Apr 03, 2025 | 20 min read | 26.3k views

The binary search algorithm is famous for being fast, but just how fast is it? Binary search in DAA (Design and Analysis of Algorithms) is often the go-to example of a logarithmic time algorithm because it dramatically cuts down the work as the input size grows. 

Unlike a linear scan, which might slog through every element, binary search’s divide-and-conquer approach eliminates half of the remaining elements at each step. This difference in strategy leads to very different time complexities. 

In this blog, you’ll learn the ins and outs of time complexity in binary search. You’ll also discover how to express best-case, average-case, and worst-case time complexities in Big O, Θ (Theta), and Ω (Omega) notation. 

Master algorithms and data analysis to boost your tech career. Enroll in our Online Data Science Course and gain in-demand skills today!

Want to strengthen your knowledge of the binary search algorithm before diving into its complexity? Check out upGrad’s blog post, What is Binary Search Algorithm?

What is Binary Search Time Complexity?

Time complexity measures how the number of operations grows as input size increases. In binary search, you repeatedly split the data in two until the target is found or the data is fully exhausted. That halving process enormously influences performance, which is why time complexity is a central topic in algorithm analysis.

Binary search time complexity frequently appears in textbooks and courses that teach the Design and Analysis of Algorithms (DAA). These discussions center around how quickly the algorithm narrows the field of possible solutions. 

The answer usually depends on a logarithmic relationship with the input size n. However, there are nuances — best case, average case, and worst case.

  • Best Case [O(1)]: The algorithm finishes in constant time because the target happens to be the very first middle element it checks, so a single comparison is enough.
  • Worst Case [O(log n)]: The number of comparisons grows with n, but only logarithmically: roughly log₂(n) halving steps are needed to find an element in an array of n elements (or to conclude it is absent).
  • Average Case [O(log n)]: For a target in a random position, the expected number of loop iterations (or recursive calls) is also on the order of log₂(n), only slightly below the worst case.

Here’s an understanding of binary search time complexity in detail:

Cases | Description | Binary Search Time Complexity Notation
Best Case | The scenario where the algorithm finds the target immediately. | O(1)
Average Case | The typical or expected number of steps for a random target in the array. | O(log n)
Worst Case | The scenario where the algorithm takes the maximum number of steps. | O(log n)

Why Does Dividing the Data by Two Matter?

A halving step reduces the remaining elements dramatically. Every time you slice your input space in half, you chop out 50% of the possibilities. This contrasts with a simple linear approach that checks each element one by one. 

Dividing by two is crucial for the following reasons:

  • It creates a logarithmic growth pattern. 
  • In simplest terms, if there are n items, each halving step transforms n into n/2, then n/4, and so on. 
  • Soon, you’re left with a single element, which is when the search either succeeds or concludes that nothing was found. That process takes about log₂(n) steps for large n, as the short sketch below illustrates.
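
To see the halving in code, here’s a minimal sketch (the helper name halving_steps is just illustrative, not part of binary search itself) that counts how many times n can be cut in half before a single element remains:

def halving_steps(n):
    """Count how many times n can be halved before only one element remains."""
    steps = 0
    while n > 1:
        n //= 2      # discard half of the remaining elements
        steps += 1
    return steps

for n in [8, 1024, 1_000_000]:
    print(n, "->", halving_steps(n))   # 3, 10 and 19 halvings, roughly log2(n)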

Now, let’s explore all the types of time complexity of binary search in detail.

Also Read: Why Is Time Complexity Important: Algorithms, Types & Comparison

What is the Best-case Time Complexity of Binary Search?

When the element you want sits exactly at the midpoint of the array on the first comparison, the algorithm finishes immediately. That scenario requires just one check. Because it only needs that single step, the best-case time complexity is O(1).

Here’s a high-level sequence of what happens in the best case of binary search:

  • You check the middle element
  • It matches the target
  • You stop right away

No matter how big n becomes, you still do that one check if the target is perfectly positioned. Thus, the best case sits at O(1). 

Let’s understand the best-case binary search time complexity with the help of an example.

Say your array has 101 elements, and you're searching for the value at index 50 (the middle). 

  • Binary search will compute mid = 50
  • Check the element at index 50
  • See that it matches the target, and stop

In this best-case scenario, the number of comparisons is 1, which is a constant amount, not dependent on n. Thus, the best-case time complexity of binary search is O(1) (constant time). 

Formally, we say Ω(1) for the best case (using Omega notation for best-case lower bound), but it's understood that the best case is constant time.
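
Here’s a minimal sketch of that best-case scenario in Python (the array contents are just an illustration):

arr = list(range(101))             # 101 sorted elements, indices 0..100
target = arr[50]                   # the value that happens to sit at the middle index
low, high = 0, len(arr) - 1
mid = (low + high) // 2            # (0 + 100) // 2 = 50 on the very first step
print(mid, arr[mid] == target)     # 50 True -> found after a single comparison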

Why is This Important? 

Constant time is the gold standard – you can’t do better than one step and binary search can achieve that in its best case. However, this best-case scenario is not something you can count on for every search; it’s a theoretical limit when circumstances are perfect. 

It’s analogous to winning the lottery on your first try – great if it happens, but you wouldn’t bet on it every time. Therefore, while you note binary search’s best case is O(1) (or Θ(1) to say it tightly)​, you should care more about the typical (average) or worst-case performance when evaluating algorithms.

Want to improve your knowledge of data structures and algorithms? Enroll in upGrad’s free certificate course, Data Structures & Algorithms. Join this course to master key concepts with expert-led training in just 50 hours of learning. 

What is the Worst-Case Time Complexity of Binary Search?

The worst-case for binary search occurs when the element is either not in the array at all or is located at a position that causes the algorithm to eliminate one half each time and only find (or conclude the absence of) the element at the very end of the process. 

Typically, this happens if the target value is at one of the extreme ends of the array (very beginning or very end) or isn't present, and the algorithm has to reduce the search to an empty range.

Consider a sorted array of size n. 

In the worst case, binary search will split the array in half repeatedly until there's only 1 element left to check, and that final check will determine the result. Each comparison cuts the remaining search space roughly in half. How many times can you halve n until you get down to 1 element? This number of halving steps is essentially log₂(n) (the base-2 logarithm of n).

Here’s a clearer breakdown of what happens in worst-case binary search time complexity:

  • If n = 1, you check at most one element (log₂(1) = 0 halvings, plus the one final check).
  • If n = 2, you check at most two elements (log₂(2) = 1, so at most 2 comparisons).
  • If n = 8, you check at most four elements (because log₂(8) = 3, and in the worst case you'd do 3+1 comparisons).

In general, if n is a power of 2, say n = 2^k, binary search will take at most k+1 comparisons (k splits plus one final check). 

If n is not an exact power of 2, it will be ⌊log₂(n)⌋+1 comparisons in the worst case​. It’s usually simplified to O(log n) comparisons.
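
You can sanity-check the ⌊log₂(n)⌋ + 1 formula with a few lines of Python (a quick sketch; worst_case_comparisons is just an illustrative helper name):

import math

def worst_case_comparisons(n):
    """Maximum comparisons binary search needs for a sorted array of size n."""
    return math.floor(math.log2(n)) + 1   # same value as n.bit_length() for n >= 1

for n in [1, 2, 8, 100, 1_000, 1_000_000, 1_000_000_000]:
    print(n, worst_case_comparisons(n))   # 1, 2, 4, 7, 10, 20, 30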

Here’s another way of putting it:

On each step of binary search, you solve a problem of size n/2. So, if you set up a recurrence relation for the time T(n) (number of steps) in the worst case, it looks like this: 

T(n)=T(n/2)+1, with T(1) = 1 (one element takes one check). 

This recurrence solves to T(n) = O(log n). 

Each recursive step or loop iteration does a constant amount of work (one comparison, plus maybe some index arithmetic), and the depth of the recursion (or number of loop iterations) is about log₂(n).

So, the worst-case time complexity of binary search is O(log n) (logarithmic time). This means that even if you have a very large array, the number of steps grows very slowly. 

Let’s understand this through an example:

  • n = 1,000,000 (one million) -> worst-case ~ log₂(1,000,000) ≈ 20 comparisons.
  • n = 1,000,000,000 (one billion) -> worst-case ~ log₂(1,000,000,000) ≈ 30 comparisons. 

Going from a million to a billion elements only adds about 10 extra steps in the worst case! That illustrates how powerful logarithmic time is.

Please note: A comparison here means checking an array element against the target. The actual number of operations might be a small constant multiple of the number of comparisons (due to computing mid index), but Big O ignores those constant factors. So, binary search grows on the order of log₂(n).

It's worth noting that if the target is not present, binary search will still run through the process of narrowing down to an empty range, which is also a worst-case scenario requiring ~log n steps. So, whether the target is at an extreme end or missing entirely, the time complexity is O(log n) in the worst case.

What is the Average-case Time Complexity of Binary Search?

Intuitively, because binary search's behavior is fairly regular for any target position, you might expect the average-case time to also be on the order of log n. Indeed, it is. In fact, for binary search in DAA, the average and worst-case complexity are both O(log n).

However, let's reason it out (or at least give a sense of why that's true).

If you assume the target element is equally likely to be at any position in the array (or even not present at all with some probability), binary search doesn't always examine all log₂(n) levels fully. 

Sometimes, it might find the target a bit earlier than the worst case. But it won't find it in fewer than 1 comparison and won't ever use more than ⌈log₂(n+1)⌉ comparisons (which is worst-case). 

You can actually calculate the exact average number of comparisons by considering all possible target positions and the number of comparisons for each. Without going into too much mathematical detail, the count of comparisons forms a nearly balanced binary decision tree of height ~log₂(n). 

The average number of comparisons turns out to be about log₂(n) - 1 (for large n, roughly one less than the worst-case)​. The dominant term as n grows is still proportional to log n.

For simplicity, you can say the average-case time complexity of binary search is O(log n). In other words, on average, you will still get a logarithmic number of steps.

Let’s understand this through an example:

Suppose you have n = 16 (a small array of 16 sorted numbers). 

Binary search worst-case would take at most 5 comparisons: 4 halvings plus one final check (since 2^4 = 16). 

If you average out the number of comparisons binary search uses for each possible target position (including the scenario where the target isn't found), you'd get an average of roughly 3 to 4 comparisons. That is on the order of log₂(16), which is 4. 

For n = 1,000, worst-case ~10, average might be ~9; both are Θ(log n) essentially.
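
Here’s a minimal sketch that checks the n = 16 case empirically. It only averages over successful searches and counts one comparison per examined element, so treat it as an approximation of the full average:

def comparisons_for(arr, target):
    """Count how many elements binary search examines before it finishes."""
    low, high = 0, len(arr) - 1
    count = 0
    while low <= high:
        count += 1
        mid = (low + high) // 2
        if arr[mid] == target:
            break
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return count

arr = list(range(16))
counts = [comparisons_for(arr, t) for t in arr]
print(max(counts), sum(counts) / len(counts))   # 5 in the worst case, about 3.4 on average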

So, practically speaking, whether you consider random target positions or the worst-case scenario, binary search will run in time proportional to log n. It doesn’t have the big discrepancy some algorithms do between average and worst cases. 

Also Read: Time and Space Complexity in Data Structure

Why is Binary Search O(log n)? (Deriving the Complexity)

Imagine you have n elements. 

Here’s what happens in binary search:

  • After one comparison, you roughly have n/2 elements left to consider (either the left or right half).  
  • After two comparisons, you have about n/4 elements left (half of a half). 
  • After three comparisons, about n/8, and so on. 
  • Essentially, after k comparisons, the search space is about n/(2^k).

Binary search will stop when the search space is down to size 1 (or the element is found earlier). 

So, you ask: for what value of k does n/(2^k) become 1? 

Solve: n / (2^k) = 1

This implies n = 2^k

Now, take log base 2 of both sides: log₂(n) = log₂(2^k) = k

So, k = log₂(n). 

This means if you have k = log₂(n) comparisons, you'll reduce the problem to size 1. 

  • If the element hasn't been found yet, that last element is either the target or it's not in the array at all. 
  • In either case, you would do one final comparison and stop. 

Thus, the number of comparisons is on the order of log₂(n), plus a constant. In Big O terms, that's O(log n).

If n is not an exact power of 2, k = ⌊log₂(n)⌋ or ⌈log₂(n)⌉ – the difference of one step doesn't change the complexity class. 

For example, if n = 100, log₂(100) ≈ 6.64, so binary search might take 6 or 7 comparisons in the worst case.

You can also derive it using a recurrence relation approach, which is common in algorithm analysis:

  • Let T(n) be the worst-case time complexity (number of operations) to binary search in an array of size n.
  • In one step, you do a constant amount of work (the comparison, plus maybe an assignment or two) and reduce the problem to size n/2. So you can write: T(n)=T(n/2)+C, where C is some constant (representing the work done in each step outside the recursive call).
  • The base case: T(1) = D (some constant, e.g., if there's one element, we compare it and either find it or not).
  • Unrolling the recurrence gives T(n) = T(n/2^k) + kC; after k ≈ log₂(n) halvings the subproblem reaches size 1, so T(n) = D + C·log₂(n). Dropping the constant factors and lower-order terms, T(n) = O(log n), as the short numerical check below confirms.
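
Here’s a tiny sketch that evaluates that recurrence directly (the constants C and D below are arbitrary values chosen for illustration) and compares the result against log₂(n):

import math

C, D = 1, 1   # illustrative constants: per-step work and base-case work

def T(n):
    """Evaluate T(n) = T(n // 2) + C with base case T(1) = D."""
    if n <= 1:
        return D
    return T(n // 2) + C

for n in [16, 1_000, 1_000_000]:
    print(n, T(n), round(math.log2(n), 1))   # T(n) stays within a constant of log2(n)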

Also Read: Big O Notation in Data Structure: Everything to Know

What Is Binary Search Time Complexity in Recursive vs Iterative Implementations?

Binary search can be written in a recursive style or an iterative style. Some learners prefer the cleaner recursion look, while others prefer a loop-based approach. But does that choice affect time complexity?

Time-wise, both versions perform the same number of comparisons. Each approach makes a single check per level of recursion or iteration. Since both halve the search space each time, both need about log₂(n) comparisons. The outcome is the same, so both run in O(log n).

Still, there is a subtle difference in space complexity:

  • Iterative: Uses a few index variables and a loop. The extra memory usage doesn’t increase with n, so auxiliary space is O(1).
  • Recursive: Uses a call stack that grows with each recursive call. In the worst case, the recursion goes as deep as log₂(n) calls, so the auxiliary space is O(log n).

Below is a compact example demonstrating a recursive approach and an iterative approach. Note that we count comparisons to illustrate how time complexity remains logarithmic in both cases.

Recursive Version

This function accepts an array, a target, and low/high indexes. It checks the middle, decides which half to explore, and recurses. It terminates if it finds the element or if low exceeds high.

def binary_search_recursive(arr, target, low, high, comp_count=0):
    if low > high:
        return -1, comp_count  # search space exhausted: target not found
    
    comp_count += 1            # count the comparison against arr[mid] below
    mid = (low + high) // 2
    
    if arr[mid] == target:
        return mid, comp_count
    elif arr[mid] < target:
        return binary_search_recursive(arr, target, mid + 1, high, comp_count)
    else:
        return binary_search_recursive(arr, target, low, mid - 1, comp_count)

Code Explanation

  • Each call that examines an element (i.e., reaches the arr[mid] check) increments comp_count by 1.
  • The search ends when arr[mid] == target or when low > high.
  • Space usage can grow as deep as the number of calls, which is about log₂(n).

Iterative version

This version loops until it either finds the target or runs out of valid indices.

def binary_search_iterative(arr, target):
    low, high = 0, len(arr) - 1
    comp_count = 0
    
    while low <= high:
        comp_count += 1
        mid = (low + high) // 2
        
        if arr[mid] == target:
            return mid, comp_count
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    
    return -1, comp_count

Code Explanation:

  • This version also counts comparisons.
  • It uses O(1) additional space beyond the original array since it only relies on low, high, mid, and comp_count. A quick usage example of both versions follows below.
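
Here’s a quick usage sketch (the sample array and target value are just illustrative) confirming that both versions return the same index and the same comparison count:

data = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]   # any sorted list works

idx_r, comps_r = binary_search_recursive(data, 23, 0, len(data) - 1)
idx_i, comps_i = binary_search_iterative(data, 23)

print(idx_r, comps_r)   # 5 3 -> index 5 found after 3 comparisons
print(idx_i, comps_i)   # 5 3 -> the iterative version matches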

How are Binary Search Complexities Expressed in Big O, Θ, and Ω Notations?

Let’s explicitly state the time complexity of binary search using the three common asymptotic notations:

  • Big O (O) Notation: It describes an upper bound – how the runtime grows in the worst case as n increases. For binary search, O(log n) is the upper bound​. 

    The algorithm will not take more than some constant c times log₂(n) steps (for sufficiently large n).

  • Big Ω (Omega) Notation: It describes a lower bound – how the runtime grows in the best case. As discussed, binary search’s best case is one comparison, so you can say Ω(1) for the time complexity​. 

    This means no matter how large n gets, you can’t do better than constant time, and binary search indeed achieves constant time when the target is found in the middle immediately.

  • Big Θ (Theta) Notation: It describes a tight bound when an algorithm’s upper and lower bounds are the same order of growth for large n. In many discussions, it’s said that binary search runs in Θ(log n) time​. This implies that proportional to log n is both the typical growth rate and the asymptotic limit. 

    More precisely, if you consider average-case or just the general behavior for large inputs, binary search’s running time grows on the order of log n, and it neither grows faster nor slower than that by more than constant factors. 

    So, Θ(log n) is often used as a shorthand to summarize binary search’s time complexity. 

How Does Input Size Affect Binary Search Performance?

One of the most significant benefits of binary search is how gently its runtime grows as the input size n increases. 

To put it plainly, binary search handles huge increases in n with only modest increases in the number of steps required. If you plot the number of operations (comparisons) binary search needs against the number of elements, you get a logarithmic curve that rises very slowly. 

In contrast, a linear search algorithm produces a straight-line relationship – double the elements, double the steps.

Here’s a graphical comparison of linear vs binary search operations as the array size grows:

Please Note

  • The orange line (linear search, O(n)) rises steeply. At n = 1,000, it reaches 1,000 comparisons.
  • The red line (binary search, O(log n)) stays near the bottom. At n = 1,000, it’s around 10 comparisons. 
  • The annotated points show that for 100 elements, binary search does ~6.6 checks, for 500 elements ~9 checks, and for 1,000 elements ~10 checks.

In the graph above, notice how the binary search line is almost flat relative to the linear search line. This flatness is the hallmark of logarithmic growth. 

For example, increasing the input size from 100 to 1,000 (a tenfold increase in n) only increased the binary search steps from about 7 to about 10. That’s an increase of only 3 steps, versus an increase of 900 steps for linear search over the same range! 

Input size affects binary search in a logarithmic manner: if you double the number of elements, binary search needs just one extra comparison. More generally, if you multiply n by some factor, the number of steps increases by the log of that factor. This is why binary search is ideal for large datasets – it scales gracefully.

To see this in concrete terms, let’s look at a few sample input sizes and how many comparisons linear vs binary search makes in the worst case:

Number of elements (n) | Worst-case checks in Linear Search | Worst-case checks in Binary Search
10 | 10 | 4
100 | 100 | 7
1,000 | 1,000 | 10
1,000,000 (1e6) | 1,000,000 | ~20
1,000,000,000 (1e9) | 1,000,000,000 | ~30

As you can see, binary search barely breaks a sweat even as n grows into the millions or billions, while linear search has to do an amount of work proportional to n.

How Does Binary Search Compare to Linear Search in Time Complexity?

Linear search checks each element from start to finish until it either finds the target or reaches the end. It’s easy to write but has a worst-case scenario of n checks for an array of n elements. Binary search, on the other hand, only does about log₂(n) checks even in the worst case.

Here’s a tabulated snapshot of the key differences between linear and binary search.

Aspect | Binary Search | Linear Search
Efficiency | Highly efficient for large inputs; ~20 steps for 1,000,000 elements. | Slower for large inputs; up to 1,000,000 steps for 1,000,000 elements.
Number of Comparisons | Worst case: about log₂(n) comparisons. | Worst case: up to n comparisons.
Data Requirement | Requires data to be sorted in advance. | No sorting required; works on any data order.
Sorting Overhead | Sorting adds O(n log n) time if done before searching; worthwhile when searching multiple times. | No sorting overhead; better suited for one-time lookups in unsorted data.
Cache Performance | Accesses memory non-sequentially; may cause cache misses for large arrays. | Sequential access; cache-friendly, especially effective for small arrays.
Best Use Case | Large sorted datasets with frequent search operations. | Small or unsorted datasets, or when only one search is needed.

Linear and Binary Search Worst-case Comparison in Python

Consider the following snippet. It creates a sorted list from 0 to n - 1, then searches for a target not in the list. This ensures the algorithm goes the full length or depth:

def linear_search(arr, target):
    steps = 0
    for x in arr:
        steps += 1
        if x == target:
            return steps
    return steps  # indicates not found in worst-case

def binary_search(arr, target):
    low = 0
    high = len(arr) - 1
    steps = 0
    
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        
        if arr[mid] == target:
            return steps
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
            
    return steps  # worst-case steps if not found

test_sizes = [16, 1000, 1000000]
for n in test_sizes:
    data = list(range(n))  
    # target is outside the range, so the search won't find it
    target = n + 10   
    
    lin_steps = linear_search(data, target)
    bin_steps = binary_search(data, target)
    
    print(f"For n={n}, linear search took {lin_steps} steps, binary search took {bin_steps}.")

Explanation of the code:

  • linear_search: Loops through every element in the list until it finds the target or finishes. You track steps to see how many comparisons occur.
  • binary_search: Splits the list in half each time, incrementing steps per comparison. If the target is not found, it completes the loop after log₂(n) iterations (plus one or so) in the worst case.

Expected output:

  • n=16: linear search takes 16 steps, binary search takes 5.
  • n=1,000: linear takes 1,000 steps, binary takes 10.
  • n=1,000,000: linear takes 1,000,000 steps, binary takes 20.

When is Linear Search Better?

Linear search has one advantage: it doesn’t require the data to be sorted. Sorting can cost O(n log n), which might be a big overhead for a one-time lookup in unsorted data. 

Also, if the data set is small, the difference in actual time might be negligible. For instance, searching 20 elements linearly is so quick that the overhead of setting up a binary search might not be worth it.

However, the moment you handle large volumes or multiple searches on stable, sorted data, binary search is the typical recommendation. Its logarithmic time complexity pays off significantly once n is in the thousands, millions, or more.

Want to strengthen your skills in Python? Enroll in upGrad’s free certificate course, Learn Basic Python Programming. This course requires just 12 hours of learning commitment from your side and teaches Python fundamentals through real-world applications and hands-on exercises. 

Conclusion

Binary search in DAA drastically outperforms linear search for large datasets in terms of time complexity. It has a lower growth rate, meaning it scales much better as data size increases. The trade-offs are that the data must be sorted in advance and that binary search is slightly trickier to implement correctly. 

If you ever face a scenario where your data is sorted and you need fast lookups, binary search should, hands down, be your first consideration. It’s no coincidence that many library functions (like C++ STL’s binary_search or Java’s Arrays.binarySearch) implement this algorithm. 
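
In Python, the standard library exposes the same idea through the bisect module. Here’s a minimal sketch using bisect_left as a membership test on sorted data (the sample list is illustrative):

import bisect

data = [3, 7, 11, 15, 21, 29, 40]   # must already be sorted

def contains(sorted_list, target):
    """Return True if target is in sorted_list, using binary search via bisect."""
    i = bisect.bisect_left(sorted_list, target)
    return i < len(sorted_list) and sorted_list[i] == target

print(contains(data, 21))   # True
print(contains(data, 22))   # False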

And now, if you have any career-related doubts, you can book a free career counseling call with upGrad’s experts or visit your nearest upGrad offline center.  

Frequently Asked Questions

1. Why is it called binary search?

2. Why is binary search O log n?

3. What is the time complexity of BST?

4. Which search algorithm is fastest?

5. Why is a binary tree log n?

6. What is Big O in binary search?

7. What is n log n time complexity?

8. What is the best case of BST?

9. What is the complexity of merge sort?

10. Is binary search fastest?

11. Is binary search log n or n log n?

Pavan Vadapalli

900 articles published
