What is Binary Search Algorithm? Its Working and Complexities Explained
Updated on Mar 25, 2025 | 31 min read | 247.8k views
When dealing with huge lists of data, how quickly can you find what you need? Imagine having a sorted list of names or numbers and wanting to locate a specific one. If you scanned each entry one by one from the start, it could take a long time for large lists – this is the linear search approach, with time complexity proportional to the list size, O(n).
But there’s a faster method that feels almost like magic: binary search algorithm. This algorithm dramatically cuts down the search time by halving the search space at each step, giving it a much smaller time complexity of O(log n).
In this comprehensive guide, you’ll explore the working of the binary search algorithm and its complexities: binary search time complexity and space complexity. You'll also learn related concepts like binary trees, advantages, limitations, and real-world applications.
Binary search is an efficient algorithm based on the divide-and-conquer principle. It works by repeatedly dividing a sorted dataset in half to narrow down the possible location of a target value.
Unlike scanning sequentially through every element (as the linear search algorithm does), binary search jumps to the middle of the remaining search range and uses that midpoint to decide which half of the data to keep searching. This way, with each comparison, it discards half of the items from consideration, zooming in on the target very quickly.
Let’s break down the basic idea through an example.
Suppose you have a sorted list of numbers from 1 to 100 and are looking for 73. A linear scan might check 1, 2, 3, … and so on until 73, potentially doing 73 comparisons.
Binary search algorithm takes a smarter route:
- Check the middle value, 50. Since 73 > 50, discard the lower half and keep 51–100.
- Check the middle of that range, 75. Since 73 < 75, keep 51–74.
- Check 62 – too small, keep 63–74.
- Check 68 – too small, keep 69–74.
- Check 71 – too small, keep 72–74.
- Check 73 – found it.
In just a handful of steps (in this case, 6 comparisons) you found 73, whereas a linear search might have taken 73 steps. This simple example highlights how binary search drastically reduces the number of checks by halving the search range each time.
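This example can be reproduced in a few lines of Python. The sketch below searches for 73 in the sorted list 1–100 and records each midpoint it checks (trace_binary_search is an illustrative helper name, not from the article's code):

```python
def trace_binary_search(values, target):
    """Binary search that also records each midpoint value checked."""
    low, high = 0, len(values) - 1
    checked = []
    while low <= high:
        mid = (low + high) // 2
        checked.append(values[mid])
        if values[mid] == target:
            return mid, checked
        elif values[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, checked

index, midpoints = trace_binary_search(list(range(1, 101)), 73)
print(midpoints)  # [50, 75, 62, 68, 71, 73]
print(index)      # 72 (the 0-based index of the value 73)
```

Six midpoint checks instead of 73 sequential ones – the halving is doing the work.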
Please Note: The array (or list) must be sorted for binary search to work – that’s the key requirement. If the data isn’t sorted, you can’t safely discard half the elements because the target could be anywhere in an unsorted list (We’ll talk later about what to do in unsorted cases and other limitations).
Also Read: Effortless Array Sorting in Python: A Step-by-Step Guide
To formalize the process, here are the typical steps of binary search on a sorted array:
Step 1: Initialize the Search Bounds
Set two pointers or indices:
- low, pointing at the first index of the array (index 0).
- high, pointing at the last index (n − 1).
These define the current interval of the array where the target might be.
Step 2: Find the Midpoint
Calculate the index mid, which is roughly the average of low and high (for example, mid = (low + high) // 2 in integer math). This mid index splits the current interval into two halves.
Step 3: Compare the Midpoint Value to the Target
There are three possible outcomes:
- array[mid] equals the target: you've found it, so return mid.
- array[mid] is less than the target: the target can only be in the upper half, so set low = mid + 1.
- array[mid] is greater than the target: the target can only be in the lower half, so set high = mid − 1.
Step 4: Repeat
With the narrowed interval (either the left half or right half from the previous step), go back to step 2. Compute a new midpoint and compare again.
Step 5: Continue Until Found or Interval is Empty
The loop continues until either you find the target or the low pointer crosses the high pointer (meaning the interval is empty and the target isn’t in the array). If the target isn’t found by the time the interval is empty, you conclude the item is not present in the array.
In code or pseudocode form, binary search would look something like this:
function binarySearch(array, target):
    low = 0
    high = n - 1          // n is the length of array
    while low <= high:
        mid = (low + high) // 2
        if array[mid] == target:
            return mid    // found the target at index mid
        else if array[mid] < target:
            low = mid + 1 // target is in upper half
        else:
            high = mid - 1 // target is in lower half
    end while
    return -1             // target not found
This logic applies both to iterative implementations (using a loop) and recursive ones (where the function calls itself on a half of the array).
The essence remains the same:
- Compute the midpoint of the current interval.
- Compare the midpoint value with the target.
- Discard the half that cannot contain the target, and repeat until the target is found or the interval is empty.
It’s helpful to compare binary search with linear search to appreciate the efficiency gain:
Linear Search: Start at one end and check each element until you find the target (or reach the end). In the worst case (target is last or not present), you look at every element.
The time complexity for linear search is O(n) (grows linearly with the number of elements). It doesn’t require sorted data.
Binary Search: Repeatedly split the sorted range and eliminate half of the elements at each step. In the worst case, you’ll narrow down to one element without finding the target, having done about log₂(n) comparisons.
Binary search time complexity is O(log n) (grows logarithmically with number of elements). It requires the data to be sorted in order to eliminate halves correctly.
To visualize the difference, consider a list of 1,000,000 elements (n = 1,000,000). Linear search may need up to 1,000,000 comparisons in the worst case, while binary search needs at most about 20, since log₂(1,000,000) ≈ 20.
Here’s a side-by-side comparison of linear search and binary search:
| Criteria | Linear Search | Binary Search |
| --- | --- | --- |
| Data Requirement | Works on unsorted or sorted data. | Requires sorted data for correctness. |
| Method | Checks each element one by one in order. | Continuously splits the range in half, checking midpoints. |
| Worst-Case Time | O(n) – may scan the entire list. | O(log n) – halves the search space each step. |
| Best-Case Time | O(1) – if the target is the first element. | O(1) – if the target happens to be at the first midpoint. |
| Comparison Count (example) | ~1,000,000 checks (if n = 1e6, target last or absent). | ~20 checks (if n = 1e6, worst case). |
| Ease of Implementation | Very simple to implement. | Simple logic but requires careful handling of indices. |
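The comparison counts in the table can be checked empirically. The sketch below (count_comparisons is an illustrative helper) searches for the last element of a million-element list – the worst case for a linear scan:

```python
def count_comparisons(values, target):
    """Binary search returning (index, number of comparisons made)."""
    low, high, comps = 0, len(values) - 1, 0
    while low <= high:
        mid = (low + high) // 2
        comps += 1
        if values[mid] == target:
            return mid, comps
        elif values[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, comps

data = list(range(1_000_000))
idx, comps = count_comparisons(data, 999_999)  # target at the very end
print(idx, comps)  # finds it in ~20 comparisons; a linear scan would need 1,000,000
```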
One of the main reasons binary search is taught in every Intro to Algorithms course is its impressively efficient time complexity. Time complexity measures how an algorithm's running time increases as the input size (n) grows.
For binary search, the running time grows very slowly compared to linear search. In fact, every time the input size doubles, binary search makes only one additional comparison (because log₂(2n) = log₂(n) + 1).
Let’s break down the time complexity of binary search in three scenarios: the best case, the average case, and the worst case.
To give an intuition of how minor log n growth is:
- n = 1,000 → about 10 comparisons
- n = 1,000,000 → about 20 comparisons
- n = 1,000,000,000 → about 30 comparisons
Compare that to linear growth: A billion elements would require a billion steps linearly, but only ~30 steps with binary search – a proof of its efficiency.
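The one-comparison-per-doubling rule is easy to verify: the worst-case comparison count for size n is ⌊log₂(n)⌋ + 1, so doubling n adds exactly one.

```python
import math

# Worst-case binary search comparisons for a few sizes and their doubles.
sizes = [1_000, 2_000, 1_000_000, 2_000_000]
worst = {n: math.floor(math.log2(n)) + 1 for n in sizes}
print(worst)  # doubling n adds exactly one worst-case comparison
```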
Now, let’s discuss all three time complexity scenarios in detail.
In the best-case scenario for binary search’s time complexity, the target element sits exactly in the middle of the array on the first check.
For example, if your sorted array has 101 elements and your target happens to be the 51st element (the middle one), the binary search will check the middle, find it at once, and return the result.
Only 1 comparison is made. This scenario is independent of how large the array is; it’s just lucky placement. Therefore, the best-case time complexity is O(1) (constant time).
It’s worth noting that best-case doesn’t necessarily mean the element exists in the middle of a full array. If you run binary search on a range and the first mid happens to be the target, that’s a best-case event.
For instance, even if your array had 1,000,000 elements, if by chance the target was at the exact midpoint value, you’d find it in one step.
On average, binary search will still take logarithmic time. To understand why, imagine all possible positions where the target could be (including the possibility that it’s not present).
If the target’s position is uniformly random among these possibilities, you can calculate the expected number of comparisons. The math involves summing up comparisons over all cases and dividing by the number of cases.
Here’s a simpler intuitive explanation: each comparison halves the search space, so after k comparisons about n/(2^k) elements remain.
In the average case, you’ll stop when the search space is down to 1 element (either found or not).
Set n/(2^k) = 1, which gives 2^k = n, so k = log₂(n). This means roughly log₂(n) comparisons in the average case as well.
In plain terms, you can expect binary search to be very fast for large n, usually completing only a handful of steps, even if n is in the millions or billions.
The average-case doesn’t differ from the worst-case by more than a constant factor for binary search.
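You can confirm this by brute force: average the comparison counts over every possible target in a list (comparisons_used is an illustrative helper; 1023 = 2¹⁰ − 1 is chosen so the midpoints form a perfectly balanced implicit tree):

```python
def comparisons_used(values, target):
    """Number of comparisons binary search makes to find target."""
    low, high, comps = 0, len(values) - 1, 0
    while low <= high:
        mid = (low + high) // 2
        comps += 1
        if values[mid] == target:
            return comps
        elif values[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return comps

n = 1023  # 2**10 - 1
data = list(range(n))
average = sum(comparisons_used(data, t) for t in data) / n
print(round(average, 2))  # about 9, within one step of log2(1023) ~ 10
```

The average lands within a constant of log₂(n), matching the analysis above.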
The worst-case for binary search happens when each time you split, the target is in the very last portion you check (or not in the list at all).
For instance, if the target is the first element in the array, binary search will compare to mid (go left), compare to mid of left half (go left again), and so on until it finally narrows to the first element.
Each comparison eliminated half the array, but you went through the maximum number of halving steps. The number of comparisons in the worst case will be the number of times you can halve n until one element remains.
Setting up the equation as earlier: after k comparisons, you have n/(2^k) elements left. In the worst case, you keep going until just 1 element remains in the search interval, and you still haven’t found the target in previous steps.
That gives n/(2^k) = 1 ⇒ 2^k = n ⇒ k = log₂(n).
So about log₂(n) comparisons are done, then one more comparison to either find the element or conclude it’s not there. You can say the comparison count is on the order of log₂(n) (it might be ⌊log₂(n)⌋ + 1 in exact terms, but constants don’t matter for Big-O). Thus, the worst-case time complexity is O(log n).
To put it another way, every step of binary search gives you a yes/no answer that cuts the remaining possibilities in half.
If you visualize this as a decision tree, the longest path you might traverse in that decision tree is proportional to log₂(n). That’s the worst-case path (each decision discards half until none is left). Even in this worst scenario, binary search is dramatically faster than scanning everything.
One more perspective: For n = 1,048,576 (about 2^20), worst-case binary search would do at most 20 comparisons. For n = 1,099,511,627,776 (which is 2^40, over a trillion elements), worst-case would be at most 40 comparisons. That’s the power of logarithmic time.
Comparing worst cases side by side shows that binary search requires dramatically fewer comparisons than linear search, even as input size grows.
For example:
- n = 1,000: up to 1,000 comparisons for linear search vs. about 10 for binary search.
- n = 1,000,000: up to 1,000,000 comparisons vs. about 20.
This highlights just how much more efficient binary search is, especially for large datasets, and why time complexity matters.
After discussing binary search time complexity, it’s also relevant to touch on space complexity – the amount of extra memory an algorithm uses.
For binary search, space complexity depends on the implementation:
- Iterative: uses only a fixed number of variables (low, high, mid), so the extra space is O(1).
- Recursive: each halving step makes one nested recursive call until the interval shrinks to a single element.
This means the depth of recursion is O(log n). The space used by the recursion stack is proportional to the depth of recursion (each level holds some local variables and return addresses). Thus, the space complexity for the recursive version is O(log n) due to the recursion stack.
Aside from the recursion stack in the recursive approach, binary search does not allocate additional data structures proportional to n. We are using the existing array and just moving our pointers around, so binary search is very space-efficient.
In summary:
- Iterative binary search: O(1) extra space.
- Recursive binary search: O(log n) extra space for the call stack.
If space is a constraint, an iterative solution is preferred. If clarity is more important and recursion is acceptable, the slightly higher space overhead might be fine. Either way, the space requirement grows at most logarithmically, which is quite modest.
Also Read: Time and Space Complexity in Data Structure: A Detailed Guide
Talking about binary search in the abstract is useful, but seeing it in action can cement understanding. Let's examine two implementations of binary search: one iterative and one recursive.
1. Iterative Binary Search in Python
In this example, we implement a binary search function in Python using a while loop (iterative approach).
def binary_search_rakesh(rakesh_list, target_value):
    start_index = 0
    end_index = len(rakesh_list) - 1
    while start_index <= end_index:
        mid_index = (start_index + end_index) // 2
        mid_value = rakesh_list[mid_index]
        if mid_value == target_value:
            return mid_index  # target found at mid_index
        elif mid_value < target_value:
            # Target is larger, ignore left half
            start_index = mid_index + 1
        else:
            # Target is smaller, ignore right half
            end_index = mid_index - 1
    return -1  # target_value not found in rakesh_list

# Example usage:
numbers = [3, 8, 15, 16, 23, 42, 108]  # sorted list
result = binary_search_rakesh(numbers, 16)
print(result)  # Output: 3 (since 16 is at index 3 in the list)
Code Explanation:
The function binary_search_rakesh takes a sorted list and a target. It initializes start_index and end_index to the bounds of the list. Inside the loop, it calculates mid_index and retrieves the mid_value.
Then it compares mid_value with target_value:
- If they are equal, the target is found and mid_index is returned.
- If mid_value is smaller, the target must lie to the right, so start_index moves to mid_index + 1.
- If mid_value is larger, the target must lie to the left, so end_index moves to mid_index - 1.
This iterative method uses a constant amount of extra space and runs in O(log n) time as analyzed.
Also Read: Binary Search Algorithm in Python Explained in Detail
2. Recursive Binary Search in C++
Now, let’s look at a recursive implementation, this time in C++ for variety. We’ll implement a binary search that calls itself on subarrays.
#include <iostream>
using namespace std;

int binarySearchRecursive(int arr[], int amit_low, int pooja_high, int target) {
    if (amit_low > pooja_high) {
        return -1;  // target not found
    }
    int mid = amit_low + (pooja_high - amit_low) / 2;
    if (arr[mid] == target) {
        return mid;  // found the target
    } else if (arr[mid] < target) {
        // search in the right half
        return binarySearchRecursive(arr, mid + 1, pooja_high, target);
    } else {
        // search in the left half
        return binarySearchRecursive(arr, amit_low, mid - 1, target);
    }
}

int main() {
    int swati_numbers[] = {2, 5, 8, 12, 16, 23, 38};
    int n = sizeof(swati_numbers) / sizeof(swati_numbers[0]);
    int targetValue = 16;
    int resultIndex = binarySearchRecursive(swati_numbers, 0, n - 1, targetValue);
    if (resultIndex != -1)
        cout << "Found at index " << resultIndex << endl;
    else
        cout << "Not found" << endl;
    return 0;
}
Code Explanation:
Here, binarySearchRecursive is a function that takes an array arr, a lower bound amit_low, an upper bound pooja_high, and the target value.
It checks the base case: if amit_low exceeds pooja_high, the range is empty, and the target isn’t found. Otherwise, it calculates mid as amit_low + (pooja_high - amit_low) / 2, which is equivalent to the midpoint average but avoids integer overflow when the bounds are large.
Then it compares arr[mid] with target:
- If they are equal, it returns mid.
- If arr[mid] is smaller, it recurses on the right half (mid + 1 to pooja_high).
- If arr[mid] is larger, it recurses on the left half (amit_low to mid - 1).
The main function demonstrates how to use this recursive search with an example array swati_numbers. In the array given, 16 sits at index 4 (0-based indexing), so running this code outputs “Found at index 4”.
Both implementations, iterative and recursive, achieve the same result and follow the same divide-and-conquer logic. The difference lies in recursion vs looping and the slight space overhead in recursion.
Also Read: Binary Search in C
So far, you’ve learned binary search in the context of a sorted array or list. There’s a closely related data structure called a Binary Search Tree (BST), a type of binary tree that also maintains sorted-order properties.
A binary search tree is a tree where each node has at most two children (left and right). It satisfies the property that for any node, all values in its left subtree are smaller than the node's value, and all values in its right subtree are greater. This property is essentially what allows binary search to be performed on the tree.
Searching for a value in a BST is analogous to binary search: you compare with the root, then decide to go left or right, then compare, and so on.
The time complexity for searching in a BST depends on the height of the tree (h):
Best/Average case: If the tree is balanced (roughly equal nodes on left and right subtrees at each level), the height h is about log₂(n).
In such cases, searching in the BST takes O(log n) time on average – similar to array binary search complexity.
Worst case: If the tree is completely unbalanced (essentially behaving like a linked list, where each node has only one child), the height becomes proportional to n – for example, when each inserted element is greater than the previous one.
In this degenerate scenario, searching could take O(n) time because you might have to traverse the tree from root down through each node until the last one. So, the worst-case time complexity for BST search is O(n) in the worst shape tree.
In a balanced BST (an AVL tree or Red-Black tree, which are self-balancing variants of BSTs), the height is kept ~log₂(n), so you get the ideal O(log n) search time consistently.
But in a naive BST (unbalanced), you risk the linear time in the worst case.
In contrast, a binary search on a sorted array always guarantees O(log n) in the worst case because the data structure (array) doesn't skew – it's always a flat array, and our algorithm's behavior doesn't change shape based on input ordering, aside from the number of elements.
It might seem that searching in a BST is less reliable than binary search on an array (due to the skew possibility). However, BSTs have their own advantages:
Maintaining a sorted array can be expensive if you need to insert or delete elements frequently. In an array, inserting an element in sorted order or deleting an element requires shifting elements around, which is O(n) operation.
If you have to keep the array sorted at all times with lots of insertions and deletions, using array + binary search becomes costly.
A BST allows insertion and deletion in O(log n) on average (for balanced BSTs) because you essentially perform a search (O(log n)) to find where to insert or delete and then adjust pointers. Thus, BSTs allow dynamic sorted data with relatively cheap updates.
The trade-off is you need to manage the tree structure (or use a self-balancing tree to avoid the skewed worst-case).
So, when should you use binary search? If you have a dataset that doesn’t change (static array) and you just need to search, a sorted array with binary search is perfect.
When should you use a binary search tree? If you have a dataset that changes (you need to insert/delete keys and also search), a binary search tree is often more suitable to maintain sorted order with efficient search.
In both cases, you get that appealing O(log n) search on average.
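To make the comparison concrete, here is a minimal unbalanced BST sketch in Python (Node, bst_insert, and bst_search are illustrative names; a production implementation would use a self-balancing variant such as an AVL or Red-Black tree):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_insert(root, key):
    """Insert key, returning the (possibly new) subtree root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    elif key > root.key:
        root.right = bst_insert(root.right, key)
    return root

def bst_search(root, key):
    """Walk down the tree, going left or right at each comparison."""
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

root = None
for k in [8, 3, 10, 1, 6, 14]:
    root = bst_insert(root, k)
print(bst_search(root, 6), bst_search(root, 7))  # True False
```

Each comparison descends one level, so search cost equals the tree height – log₂(n) when balanced, up to n when skewed.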
Now, in an algorithm interview context, one might ask: “What is the time complexity of searching in a binary search tree?” – the answer would be: O(log n) on average, O(n) in the worst case, comparing to O(log n) worst-case for array binary search.
Here’s how you can avoid the worst-case in trees:
- Use a self-balancing BST such as an AVL tree or Red-Black tree, which rebalances on insertion and deletion to keep the height near log₂(n).
- If you control the input, insert keys in a shuffled (random) order rather than sorted order, which keeps a plain BST close to balanced on average.
So, binary search is primarily for random-access data structures (arrays or implicit arrays like in an indexable structure) or tree structures that inherently model the binary search decision process.
Also Read: Sorting in Data Structure: Categories & Types Explained
The history of the binary search algorithm dates back to ancient times when humans were developing manual methods to search for specific elements in a sorted list. While the formal algorithmic description we know today emerged in the field of computer science, the fundamental concept has roots in various historical practices.
Here’s a timeline of how binary search evolved: a bonus section for anyone curious about how the algorithm developed.
1. Ancient Methods
The basic idea of binary search can be traced back to ancient methods of searching for elements in a sorted list. In ancient manuscripts or books, if someone was looking for a particular passage or information, they might start by opening the book in the middle.
Based on whether the target passage was before or after the midpoint, they would then eliminate half of the remaining pages and repeat the process until they found the desired information.
2. John Mauchly’s Early Use (1946)
The concept of binary search was formalized in the field of electronic computing during the mid-20th century. John Mauchly used a binary search algorithm in 1946. The ENIAC, one of the earliest electronic general-purpose computers, was programmed to perform a binary search on sorted punched cards.
3. Algorithmic Description by Derrick Henry Lehmer (1948)
The algorithmic description of binary search as we recognize it today is credited to Derrick Henry Lehmer, an American mathematician and computer scientist.
Lehmer published a paper in 1948 titled “Teaching an Electronic Computer to Play a Game,” where he described the binary search algorithm as part of a guessing game played on the SWAC (Standards Western Automatic Computer) computer.
4. Inclusion in Sorting and Searching Libraries
As computers evolved, binary search became a fundamental part of sorting and searching libraries. Its efficiency in quickly locating elements in a sorted dataset made it a staple in computer science and programming.
Sorting and searching algorithms, including binary search, played a crucial role in the development of early programming languages and paved the way for more sophisticated algorithms.
5. Algorithmic Analysis and Refinement
Over the years, researchers and computer scientists have analyzed the time and space complexity of the binary search algorithm, leading to a better understanding of its performance characteristics. Algorithmic refinements and adaptations have been proposed to address specific use cases and improve efficiency.
6. Integration into Standard Libraries and Programming Languages
As computing became more widespread, binary search found its way into standard libraries and programming languages. It became a foundational tool for developers working with sorted data structures, arrays, and other collections.
7. Continued Relevance
Despite its ancient roots, the binary search algorithm remains relevant in modern computer science and software development. Its logarithmic time complexity makes it particularly valuable for efficiently searching large datasets, and it continues to be taught in introductory computer science courses.
Binary search is popular for good reason. Let’s outline some of its key advantages and strengths:
Significantly Faster Search: The most obvious advantage is speed. By cutting the search space in half at each step, binary search is much faster than linear search for large datasets.
The binary search time complexity O(log n) grows very slowly; even for millions of elements, the number of comparisons is manageable. If you need to search through a large sorted list, binary search will find your item in a fraction of the time a linear scan would take.
Efficiency Grows with Data Size: The bigger the dataset, the more binary search shines compared to brute-force searching.
That one-more-comparison-per-doubling rule means it scales gracefully. As an example, doubling an array’s size only adds one additional step in the search process – an excellent property for scalability.
Simplicity of Algorithm: Binary search is conceptually simple and elegant. It’s easy to implement (just a few lines of logic) and easy to understand with a little practice.
There are more complex search algorithms out there, but binary search often gives near-best performance with far less complexity in implementation.
Deterministic and Predictable: Binary search’s performance is quite predictable – it will always complete in at most ⌊log₂(n)⌋ + 1 steps for a sorted array of size n.
There’s no significant variability or dependence on data patterns (except the minor best-case scenario). This predictability is useful for real-time systems or cases where consistent performance is important.
Almost Optimal for Comparison-Based Search: It’s proven that if all you can do is compare elements, any algorithm that finds an element must have Ω(log n) complexity in the worst case (because each comparison gives at most 1 bit of information and you need ~log₂(n) bits to identify one element among n).
Binary search actually meets this lower bound – it’s asymptotically optimal for comparison-based searching.
Finally, there’s one more distinguishing benefit: a wide range of applications, as listed below.
Beyond searching for exact values, binary search can be adapted to solve many problems:
- Finding the first or last occurrence of a value (lower/upper bound queries).
- Finding the insertion point for a new element in sorted order.
- Searching in rotated sorted arrays.
- “Binary search on the answer”: finding the smallest (or largest) value satisfying a monotonic condition, such as computing integer square roots.
In short, binary search provides dramatic performance improvements in searching sorted data and is a fundamental tool in any programmer’s toolkit. Whenever you find yourself doing a lot of lookups on sorted data, think about binary search or data structures that utilize it.
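One popular adaptation is “binary search on the answer”: instead of searching an array, you binary-search a numeric range for the boundary of a monotonic condition. A sketch computing the integer square root (int_sqrt is an illustrative name):

```python
def int_sqrt(x):
    """Largest r with r*r <= x, found by binary search on the answer."""
    low, high = 0, x
    while low <= high:
        mid = (low + high) // 2
        if mid * mid <= x:
            low = mid + 1   # mid is feasible; try larger
        else:
            high = mid - 1  # mid is too big
    return high             # last feasible value

print(int_sqrt(10), int_sqrt(144))  # 3 12
```

The same pattern solves many optimization problems where a yes/no test is monotonic in the candidate answer.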
While binary search is powerful, it’s not without its caveats and limitations. It’s important to be aware of these so you know when binary search is applicable and when it might not be the best tool.
Here are some key limitations and edge cases:
Requires Sorted Data: This is the big one – binary search only works on data that is sorted (or otherwise structured in a way that you can eliminate half the search space on a comparison).
If your data is unsorted, you cannot directly apply binary search because the halving logic would break. You’d either need to sort the data first or use a different search algorithm.
Sorting the data has its own cost (typically O(n log n)), which might not be worth it if you’re only going to search once. In other words, binary search is optimal when you have sorted input, or you can afford the sorting overhead upfront (like when you’ll do many searches on the same dataset).
Overhead of Maintaining Sorted Order: In scenarios where the dataset is frequently changing (inserting or deleting elements), using binary search on an array can be problematic.
Every insertion or deletion in a sorted array can cost O(n) time (because you may have to shift elements to keep it sorted). If you have to re-sort or heavily maintain sorted order for dynamic data, the overhead might outweigh binary search’s benefit.
For example, if you have to insert many elements in random order and also perform searches, doing a sort before each search (or maintaining a sorted array by insertion) could degrade performance significantly.
In such cases, a balanced BST or another data structure might be more efficient overall.
Not Cache-Friendly for Linked Structures: Binary search shines on random-access arrays where you can index the middle in O(1).
If you tried to apply the binary search concept on a linked list, you’d find that accessing the middle element is O(n) itself (since you have to walk the list). Therefore, binary search isn’t typically used on linked lists.
In a broader sense, binary search works best on structures where you can quickly jump to any position (arrays or memory where elements are contiguous). On linked structures, other methods (like specialized trees or skip lists) are used to achieve log n search.
Handling of Duplicate Elements: If the array contains duplicate values, standard binary search will find an occurrence of the target, but it might not specifically find the first or last occurrence if that matters.
Depending on how it's implemented, binary search could return any one of the indices of a duplicate value, which might be fine for a membership test. However, if you need the leftmost or rightmost occurrence, you have to modify the algorithm slightly.
This isn’t so much a performance issue as it is a correctness/edge-case issue. One must be careful with post-processing or variant techniques (like binary search for the boundary).
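Python’s standard-library bisect module implements exactly these boundary variants: bisect_left returns the index of the first occurrence (the leftmost insertion point), while bisect_right returns one past the last occurrence.

```python
from bisect import bisect_left, bisect_right

data = [1, 2, 4, 4, 4, 7, 9]
left = bisect_left(data, 4)    # index of the first 4
right = bisect_right(data, 4)  # one past the last 4
print(left, right, right - left)  # 2 5 3
```

The difference right - left even gives the count of duplicates in O(log n) time.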
Potential for Bugs in Implementation: As simple as binary search is conceptually, it’s notoriously tricky to implement perfectly, especially when coding by hand under pressure. Off-by-one errors in managing the low and high indices are common bugs. Getting stuck in an infinite loop by not updating indices correctly or picking mid in a way that biases to one side can happen.
There’s a famous story about a binary search bug that went unnoticed in Java’s library for nearly 9 years! This isn’t a limitation of the algorithm per se, but a caution: one needs to implement it carefully and test edge cases (like very small arrays or when low and high cross over) to ensure correctness.
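That Java library bug was an integer overflow: computing mid = (low + high) / 2 wraps around when low + high exceeds the 32-bit range. Python integers don’t overflow, but the sketch below simulates 32-bit wraparound with a hypothetical to_int32 helper to show both the failure and the standard fix:

```python
def to_int32(x):
    """Simulate 32-bit signed integer wraparound (hypothetical helper)."""
    x &= 0xFFFFFFFF
    return x - 2**32 if x >= 2**31 else x

low, high = 2_000_000_000, 2_100_000_000
buggy_mid = to_int32(low + high) // 2  # overflows: a negative "index"!
safe_mid = low + (high - low) // 2     # stays in range
print(buggy_mid, safe_mid)
```

This is why careful implementations write mid = low + (high - low) / 2 in fixed-width-integer languages.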
Sorted not by Comparable Key: If your data isn’t sorted in a way that corresponds to what you’re searching for, binary search can’t be directly applied.
For example, if you have an array of objects sorted by name and you want to search by age, binary search won’t help unless you sort by age or use a different approach (like separate indexes).
Small Arrays: For very small arrays (say only a few elements), a linear search might be just as fast or even faster in practice due to low constant overhead. Binary search truly shows benefit as n grows large.
That said, this is a minor consideration – the algorithmic complexity matters more as n grows. For tiny n, it usually doesn’t matter what you do.
Binary search isn’t just a theoretical concept – it appears in many real-world computing scenarios, often in places you might not directly realize.
Here are some notable applications and analogies:
- Standard-library search routines, such as Python’s bisect module, C++’s std::lower_bound/std::upper_bound, and Java’s Arrays.binarySearch.
- Database indexes and B-trees, which generalize the halve-the-range idea into disk-friendly structures.
- Debugging by bisection, such as git bisect, which binary-searches commit history for the commit that introduced a bug.
- Everyday analogies like finding a word in a dictionary or playing a number-guessing game.
Binary search is like the strategy you'd use to find a word in a dictionary or guess a number in a guessing game – it's intuitive once you grasp it and scales incredibly well. Because of its efficiency, it finds its way into countless real-world applications, from low-level system algorithms to high-level library functions and problem-solving patterns.
Whenever you face a problem that involves searching or finding an element in sorted data, binary search should be one of the first approaches you consider. Its time complexity advantage can be game-changing.
With a solid understanding of binary search complexity and proper usage, you’re well-equipped to write efficient search functionality and recognize situations where this classic algorithm can be applied for optimal performance.