Understanding Binary Search Time Complexity: Explore All Cases
Updated on Apr 03, 2025 | 20 min read | 26.3k views
The binary search algorithm is famous for being fast, but just how fast is it? Binary search in DAA (Design and Analysis of Algorithms) is often the go-to example of a logarithmic time algorithm because it dramatically cuts down the work as the input size grows.
Unlike a linear scan, which might slog through every element, binary search’s divide-and-conquer approach eliminates half of the remaining elements at each step. This difference in strategy leads to very different time complexities.
In this blog, you’ll learn the ins and outs of time complexity in binary search. You’ll also discover how to express best-case, average-case, and worst-case time complexities in Big O, Θ (Theta), and Ω (Omega) notation.
Master algorithms and data analysis to boost your tech career. Enroll in our Online Data Science Course and gain in-demand skills today!
Time complexity measures how the number of operations grows as input size increases. In binary search, you repeatedly split the data in two until the target is found or the data is fully exhausted. That halving process enormously influences performance, which is why time complexity is a central topic in algorithm analysis.
Binary search time complexity frequently appears in textbooks and courses that teach the Design and Analysis of Algorithms (DAA). These discussions center around how quickly the algorithm narrows the field of possible solutions.
The answer usually depends on a logarithmic relationship with the input size n. However, there are nuances — best case, average case, and worst case.
Here’s a breakdown of binary search time complexity across the three cases:
Cases | Description | Binary Search Time Complexity Notation
Best Case | The scenario where the algorithm finds the target immediately. | O(1)
Average Case | The typical scenario, or expected number of steps for a random target in the array. | O(log n)
Worst Case | The scenario where the algorithm takes the maximum number of steps. | O(log n)
A halving step reduces the remaining elements dramatically. Every time you slice your input space in half, you chop out 50% of the possibilities. This contrasts with a simple linear approach that checks each element one by one.
Dividing by two is crucial because each step discards half of the remaining candidates, which is exactly what keeps the number of comparisons proportional to log₂(n) rather than to n.
Now, let’s explore all the types of time complexity of binary search in detail.
Also Read: Why Is Time Complexity Important: Algorithms, Types & Comparison
When the element you want sits exactly at the midpoint of the array on the first comparison, the algorithm finishes immediately. That scenario requires just one check. Because it only needs that single step, the best-case time complexity is O(1).
Here’s a high-level sequence of what happens in the best case:
1. Compute the middle index: mid = (low + high) // 2.
2. Compare arr[mid] with the target.
3. They match, so the search returns mid after a single comparison.
No matter how big n becomes, you still do that one check if the target is perfectly positioned. Thus, the best case sits at O(1).
Let’s understand the best-case binary search time complexity with the help of an example.
Say your array has 101 elements, and you're searching for the value at index 50 (the middle).
In this best-case scenario, the number of comparisons is 1, which is a constant amount, not dependent on n. Thus, the best-case time complexity of binary search is O(1) (constant time).
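Here’s a minimal sketch of that scenario in Python (the array contents and variable names are just illustrative):

    # Minimal sketch of the best case: the target sits exactly at the first midpoint,
    # so one comparison settles the search no matter how large the array is.
    arr = list(range(101))           # 101 sorted elements: 0, 1, ..., 100
    low, high = 0, len(arr) - 1
    mid = (low + high) // 2          # first midpoint: index 50
    target = arr[mid]                # deliberately pick the middle value

    comparisons = 1                  # the single element comparison below
    found_at = mid if arr[mid] == target else None
    print(found_at, comparisons)     # 50 1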
Formally, we say Ω(1) for the best case (using Omega notation for best-case lower bound), but it's understood that the best case is constant time.
Constant time is the gold standard – you can’t do better than one step and binary search can achieve that in its best case. However, this best-case scenario is not something you can count on for every search; it’s a theoretical limit when circumstances are perfect.
It’s analogous to winning the lottery on your first try – great if it happens, but you wouldn’t bet on it every time. Therefore, while you note binary search’s best case is O(1) (or Θ(1) to say it tightly), you should care more about the typical (average) or worst-case performance when evaluating algorithms.
The worst case for binary search occurs when the element is not in the array at all, or when it sits at a position that forces the algorithm to keep eliminating one half at a time and only find (or conclude the absence of) the element at the very end of the process.
Typically, this happens when the target value lies at one of the extreme ends of the array (the very beginning or the very end) or isn’t present, so the algorithm has to shrink the search range all the way down to empty.
Consider a sorted array of size n.
In the worst case, binary search will split the array in half repeatedly until there's only 1 element left to check, and that final check will determine the result. Each comparison cuts the remaining search space roughly in half. How many times can you halve n until you get down to 1 element? This number of halving steps is essentially log₂(n) (the base-2 logarithm of n).
Here’s a clearer breakdown of what happens in the worst case:
- Start with n candidate elements.
- After 1 comparison, about n/2 elements remain.
- After 2 comparisons, about n/4 remain.
- After k comparisons, about n/2^k remain.
- The process stops when roughly one element is left (or the range becomes empty).
In general, if n is a power of 2, say n = 2^k, binary search will take at most k+1 comparisons (k splits plus one final check).
If n is not an exact power of 2, the worst case is ⌊log₂(n)⌋ + 1 comparisons, which is usually simplified to O(log n).
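As a quick sanity check, here’s a small Python sketch (an illustration, not part of the algorithm itself) that counts how many halvings it takes to shrink one million elements down to a single candidate:

    # How many times can 1,000,000 elements be halved before one element remains?
    n = 1_000_000
    size, halvings = n, 0
    while size > 1:
        size //= 2      # each comparison discards roughly half of the remaining elements
        halvings += 1
    print(halvings)     # 19 halvings, i.e. about 20 comparisons with the final check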
Here’s another way of putting it:
On each step of binary search, you solve a problem of size n/2. So, if you set up a recurrence relation for the time T(n) (number of steps) in the worst case, it looks like this:
T(n)=T(n/2)+1, with T(1) = 1 (one element takes one check).
This recurrence solves to T(n) = O(log n).
Each recursive step or loop iteration does a constant amount of work (one comparison, plus maybe some index arithmetic), and the depth of the recursion (or number of loop iterations) is about log₂(n).
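If you want to convince yourself numerically, here’s a tiny sketch (the helper T is hypothetical, just the recurrence written as a Python function) that unrolls the recurrence and compares it against the ⌊log₂(n)⌋ + 1 figure from earlier:

    import math

    # T(n) = T(n/2) + 1 with T(1) = 1, using integer halving to mirror the shrinking range.
    def T(n):
        return 1 if n <= 1 else T(n // 2) + 1

    for n in [2, 16, 1000, 10**6]:
        print(n, T(n), math.floor(math.log2(n)) + 1)   # the two counts match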
So, the worst-case time complexity of binary search is O(log n) (logarithmic time). This means that even if you have a very large array, the number of steps grows very slowly.
Let’s understand this through an example:
- For n = 1,000,000 (about 2^20), binary search needs roughly 20 comparisons in the worst case.
- For n = 1,000,000,000 (about 2^30), it needs roughly 30 comparisons.
Going from a million to a billion elements only adds about 10 extra steps in the worst case! That illustrates how powerful logarithmic time is.
Please note: A comparison here means checking an array element against the target. The actual number of operations might be a small constant multiple of the number of comparisons (due to computing mid index), but Big O ignores those constant factors. So, binary search grows on the order of log₂(n).
It's worth noting that if the target is not present, binary search will still run through the process of narrowing down to an empty range, which is also a worst-case scenario requiring ~log n steps. So, whether the target is at an extreme end or missing entirely, the time complexity is O(log n) in the worst case.
Intuitively, because binary search's behavior is fairly regular for any target position, you might expect the average-case time to also be on the order of log n. Indeed, it is. In fact, for binary search in DAA, the average and worst-case complexity are both O(log n).
However, let's reason it out (or at least give a sense of why that's true).
If you assume the target element is equally likely to be at any position in the array (or even not present at all with some probability), binary search doesn't always examine all log₂(n) levels fully.
Sometimes, it might find the target a bit earlier than the worst case. But it won't find it in fewer than 1 comparison and won't ever use more than ⌈log₂(n+1)⌉ comparisons (which is worst-case).
You can actually calculate the exact average number of comparisons by considering all possible target positions and the number of comparisons for each. Without going into too much mathematical detail, the count of comparisons forms a nearly balanced binary decision tree of height ~log₂(n).
The average number of comparisons turns out to be about log₂(n) - 1 (for large n, roughly one less than the worst-case). The dominant term as n grows is still proportional to log n.
For simplicity, you can say the average-case time complexity of binary search is O(log n). In other words, on average, you will still get a logarithmic number of steps.
Let’s understand this through an example:
Suppose you have n = 16 (a small array of 16 sorted numbers).
Binary search’s worst case would take about log₂(16) = 4 comparisons (5 if you count the final check, per the k + 1 rule above).
If you average out the number of comparisons binary search uses for each possible target position (including the scenario where the target isn't found), you'd get an average of around 3 comparisons. That is on the order of log₂(16), which is 4.
For n = 1,000, worst-case ~10, average might be ~9; both are Θ(log n) essentially.
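Here’s a small illustrative sketch (successful searches only, using a throwaway helper) that measures the average comparison count over every possible target position:

    # Average number of element comparisons for a successful binary search,
    # taken over all n possible target positions.
    def comparisons_to_find(arr, target):
        low, high, count = 0, len(arr) - 1, 0
        while low <= high:
            count += 1
            mid = (low + high) // 2
            if arr[mid] == target:
                return count
            elif arr[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return count

    for n in [16, 1000]:
        data = list(range(n))
        avg = sum(comparisons_to_find(data, t) for t in data) / n
        print(n, round(avg, 2))   # roughly 3.38 for n=16 and about 9.0 for n=1000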
So, practically speaking, whether you consider random target positions or the worst-case scenario, binary search will run in time proportional to log n. It doesn’t have the big discrepancy some algorithms do between average and worst cases.
Also Read: Time and Space Complexity in Data Structure
Imagine you have n elements.
Here’s what happens in binary search:
- After the 1st comparison, about n/2 elements remain in the search space.
- After the 2nd comparison, about n/4 remain.
- After the k-th comparison, about n/2^k remain.
Binary search will stop when the search space is down to size 1 (or the element is found earlier).
So, you ask: for what value of k does n/(2^k) become 1?
Solve: n / (2^k) = 1
This implies n = 2^k
Now, take log base 2 of both sides: log₂(n) = log₂(2^k) = k
So, k = log₂(n).
This means if you have k = log₂(n) comparisons, you'll reduce the problem to size 1.
Thus, the number of comparisons is on the order of log₂(n), plus a constant. In Big O terms, that's O(log n).
If n is not an exact power of 2, k = ⌊log₂(n)⌋ or ⌈log₂(n)⌉ – the difference of one step doesn’t change the complexity class.
For example, if n = 100, log₂(100) ≈ 6.64, so binary search might take 6 or 7 comparisons in the worst case.
You can also derive it using the recurrence relation approach shown earlier (T(n) = T(n/2) + 1 with T(1) = 1), which is common in algorithm analysis and solves to the same O(log n) bound.
Also Read: Big O Notation in Data Structure: Everything to Know
Binary search can be written in a recursive style or an iterative style. Some learners prefer the cleaner recursion look, while others prefer a loop-based approach. But does that choice affect time complexity?
Time-wise, both versions perform the same number of comparisons. Each approach makes a single check per level of recursion or iteration. Since both halve the search space each time, both need about log₂(n) comparisons. The outcome is the same, so both run in O(log n).
Still, there is a subtle difference in space complexity:
- The recursive version keeps one stack frame per halving step, so it uses O(log n) auxiliary space (unless the interpreter or compiler optimizes the tail call away).
- The iterative version reuses the same few variables, so it needs only O(1) extra space.
Below is a compact example demonstrating a recursive approach and an iterative approach. Note that we count comparisons to illustrate how time complexity remains logarithmic in both cases.
Recursive Version
This function accepts an array, a target, and low/high indexes. It checks the middle, decides which half to explore, and recurses. It terminates if it finds the element or if low exceeds high.
def binary_search_recursive(arr, target, low, high, comp_count=0):
    # Base case: the range is empty, so the target is not in the array.
    if low > high:
        return -1, comp_count  # not found
    mid = (low + high) // 2
    comp_count += 1  # one element comparison at this level
    if arr[mid] == target:
        return mid, comp_count
    elif arr[mid] < target:
        # Target is larger than the midpoint, so search the right half.
        return binary_search_recursive(arr, target, mid + 1, high, comp_count)
    else:
        # Target is smaller than the midpoint, so search the left half.
        return binary_search_recursive(arr, target, low, mid - 1, comp_count)
Code Explanation
Each call performs one element comparison and then recurses on half of the remaining range, so the recursion depth (and therefore comp_count) is at most about ⌊log₂(n)⌋ + 1. The function returns the index of the target, or -1 if it’s absent, along with the number of comparisons made.
Iterative Version
This version loops until it either finds the target or runs out of valid indices.
def binary_search_iterative(arr, target):
    low, high = 0, len(arr) - 1
    comp_count = 0
    while low <= high:
        comp_count += 1  # one element comparison per loop iteration
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid, comp_count
        elif arr[mid] < target:
            low = mid + 1   # discard the left half
        else:
            high = mid - 1  # discard the right half
    return -1, comp_count  # not found
Code Explanation:
The loop keeps shrinking the [low, high] range by discarding half of it on every iteration, so it runs at most about ⌊log₂(n)⌋ + 1 times. comp_count records how many midpoints were examined, and the function returns it together with the index of the target (or -1 if the target is absent).
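As a quick usage check (assuming both functions above are defined in the same file), you can run them side by side and confirm that they report the same comparison count:

    data = list(range(1, 101, 2))   # 50 sorted odd numbers: 1, 3, ..., 99
    target = 73

    idx_r, comps_r = binary_search_recursive(data, target, 0, len(data) - 1)
    idx_i, comps_i = binary_search_iterative(data, target)
    print(idx_r, comps_r)   # index 36, about log2(50) ≈ 6 comparisons
    print(idx_i, comps_i)   # same index, same comparison count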
Let’s explicitly state the time complexity of binary search using the three common asymptotic notations:
Big O (O) Notation: It describes an upper bound – how the runtime grows in the worst case as n increases. For binary search, O(log n) is the upper bound.
The algorithm will not take more than some constant c times log₂(n) steps (for sufficiently large n).
Big Ω (Omega) Notation: It describes a lower bound – how the runtime grows in the best case. As discussed, binary search’s best case is one comparison, so you can say Ω(1) for the time complexity.
This is a lower bound: binary search never takes fewer than a constant number of steps, and it actually achieves that bound when the target is found at the first midpoint.
Big Θ (Theta) Notation: It describes a tight bound, used when an algorithm’s upper and lower bounds have the same order of growth for large n. In many discussions, binary search is said to run in Θ(log n) time, meaning log n captures its typical growth rate exactly, up to constant factors.
More precisely, if you consider average-case or just the general behavior for large inputs, binary search’s running time grows on the order of log n, and it neither grows faster nor slower than that by more than constant factors.
So, Θ(log n) is often used as a shorthand to summarize binary search’s time complexity.
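To make that Θ claim concrete, here is one brief worked bound using the ⌊log₂(n)⌋ + 1 worst-case count from earlier: for f(n) = ⌊log₂(n)⌋ + 1 and any n ≥ 2, we have log₂(n) ≤ f(n) ≤ 2·log₂(n). The lower inequality holds because ⌊log₂(n)⌋ ≥ log₂(n) − 1, and the upper one holds because ⌊log₂(n)⌋ + 1 ≤ log₂(n) + 1 ≤ 2·log₂(n) whenever log₂(n) ≥ 1. Squeezed between two constant multiples of log n, the worst case is therefore Θ(log n).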
One of the most significant benefits of binary search is how gently its runtime grows as the input size n increases.
To put it plainly, binary search handles huge increases in n with only modest increases in the number of steps required. If you plot the number of operations (comparisons) binary search needs against the number of elements, you get a logarithmic curve that rises very slowly.
In contrast, a linear search algorithm produces a straight-line relationship – double the elements, double the steps.
If you plotted linear vs. binary search operations against array size, the linear search line would climb as a straight diagonal, while the binary search line would stay almost flat by comparison. This flatness is the hallmark of logarithmic growth.
For example, increasing the input size from 100 to 1,000 (a tenfold increase in n) only increased the binary search steps from about 7 to about 10. That’s an increase of only 3 steps, versus an increase of 900 steps for linear search over the same range!
Input size affects binary search in a logarithmic manner: if you double the number of elements, binary search needs just one extra comparison. More generally, if you multiply n by some factor, the number of steps increases by the log of that factor. This is why binary search is ideal for large datasets – it scales gracefully.
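Here’s a quick illustrative sketch of that scaling claim (the worst_case_steps helper below is hypothetical, just the ⌊log₂(n)⌋ + 1 formula wrapped in a function):

    import math

    def worst_case_steps(n):
        # Worst-case binary search comparisons: floor(log2(n)) + 1
        return math.floor(math.log2(n)) + 1

    print(worst_case_steps(1_000), worst_case_steps(2_000))       # 10 vs 11: doubling n adds one step
    print(worst_case_steps(1_000), worst_case_steps(1_000_000))   # 10 vs 20: a 1000x increase adds ~10 steps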
To see this in concrete terms, let’s look at a few sample input sizes and how many comparisons linear vs binary search makes in the worst case:
Number of elements (n) | Worst-case checks in Linear Search | Worst-case checks in Binary Search
10 | 10 | 4
100 | 100 | 7
1,000 | 1,000 | 10
1,000,000 (1e6) | 1,000,000 | ~20
1,000,000,000 (1e9) | 1,000,000,000 | ~30
As you can see, binary search barely breaks a sweat even as n grows into the millions or billions, while linear search must do an amount of work proportional to n.
Linear search checks each element from start to finish until it either finds the target or reaches the end. It’s easy to write but has a worst-case scenario of n checks for an array of n elements. Binary search, on the other hand, only does about log₂(n) checks even in the worst case.
Here’s a tabulated snapshot of the key differences between linear and binary search.
Aspect | Binary Search | Linear Search
Efficiency | Highly efficient for large inputs; ~20 steps for 1,000,000 elements. | Slower for large inputs; up to 1,000,000 steps for 1,000,000 elements.
Number of Comparisons | Worst case: about log₂(n) comparisons. | Worst case: up to n comparisons.
Data Requirement | Requires data to be sorted in advance. | No sorting required; works on any data order.
Sorting Overhead | Sorting adds O(n log n) time if done before search. Ideal when searching multiple times. | No sorting overhead; better suited for one-time lookups in unsorted data.
Cache Performance | Jumps around the array, so it is less cache-friendly. | Scans memory sequentially, so it benefits from cache locality.
Best Use Case | Large sorted datasets with frequent search operations. | Small or unsorted datasets, or when only one search is needed.
Consider the following snippet. It creates a sorted list from 0 to n - 1, then searches for a target not in the list. This ensures the algorithm goes the full length or depth:
def linear_search(arr, target):
    steps = 0
    for x in arr:
        steps += 1
        if x == target:
            return steps
    return steps  # target not found: every element was checked

def binary_search(arr, target):
    low = 0
    high = len(arr) - 1
    steps = 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if arr[mid] == target:
            return steps
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return steps  # target not found: worst-case number of steps

test_sizes = [16, 1000, 1000000]
for n in test_sizes:
    data = list(range(n))
    # target is outside the range, so the search won't find it
    target = n + 10
    lin_steps = linear_search(data, target)
    bin_steps = binary_search(data, target)
    print(f"For n={n}, linear search took {lin_steps} steps, binary search took {bin_steps}.")
Explanation of the code:
For each test size, the script builds a sorted list from 0 to n - 1 and searches for n + 10, a value that cannot be present. Linear search therefore scans all n elements, while binary search keeps halving the range until it is empty, so its step counter ends up at about ⌊log₂(n)⌋ + 1.
Likely output:
For n=16, linear search took 16 steps, binary search took 5.
For n=1000, linear search took 1000 steps, binary search took 10.
For n=1000000, linear search took 1000000 steps, binary search took 20.
Linear search has one advantage: it doesn’t require the data to be sorted. Sorting can cost O(n log n), which might be a big overhead for a one-time lookup in unsorted data.
Also, if the data set is small, the difference in actual time might be negligible. For instance, searching 20 elements linearly is so quick that the overhead of setting up a binary search might not be worth it.
However, the moment you handle large volumes or multiple searches on stable, sorted data, binary search is the typical recommendation. Its logarithmic time complexity pays off significantly once n is in the thousands, millions, or more.
Binary search in DAA drastically outperforms linear search for large datasets in terms of time complexity. It has a lower growth rate, meaning it scales much better as data size increases. The trade-offs are that the data must be sorted and that binary search is slightly more complex to implement.
If you ever face a scenario where your data is sorted and you need fast lookups, binary search should, hands down, be your first consideration. It’s no coincidence that many library functions (like C++ STL’s binary_search or Java’s Arrays.binarySearch) implement this algorithm.
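Python ships the same idea in its standard library via the bisect module; here’s a small illustrative lookup (the sample data is arbitrary):

    from bisect import bisect_left

    # bisect_left performs a binary search on an already-sorted sequence in O(log n).
    data = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
    target = 23

    i = bisect_left(data, target)             # index where target would be inserted
    found = i < len(data) and data[i] == target
    print(i, found)                           # 5 True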
And now, if you have any career-related doubts, you can book a free career counseling call with upGrad’s experts or visit your nearest upGrad offline center.