Binary Search Algorithm: Time and Space Complexity

Binary search is one of the most efficient searching algorithms, known for its speed and low resource usage. But what makes it so fast? In this article, we’ll explore the time and space complexity of binary search, breaking down why it performs so well and how it compares to other algorithms. By the end, you’ll have a clear understanding of the efficiency of binary search and why it’s a favorite among programmers.

Prerequisite: Binary Search Algorithm: Pseudocode and Explanation

Time Complexity of Binary Search

The time complexity of an algorithm describes how the runtime grows as the input size increases. For binary search, the time complexity is O(log n), where n is the number of elements in the sorted list.

Why O(log n)?

Binary search works by repeatedly dividing the search range in half. Here’s how it translates to O(log n):

  1. First Iteration: The search range is the entire list (size = n).
  2. Second Iteration: The search range is halved (size = n/2).
  3. Third Iteration: The search range is halved again (size = n/4).
  4. k-th Iteration: After k halvings, the search range has size n/(2^k); the search stops when this reaches 1.

To find the number of iterations (k), we solve for when the search range becomes 1:

n / (2^k) = 1

n = 2^k

k = log₂(n)

Thus, the number of iterations is proportional to log₂(n), which is why the time complexity is O(log n).
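This derivation can be checked with a short loop: repeatedly halving a range of size n until it holds a single element takes about log₂(n) steps. A minimal sketch in Python (the function name is illustrative):

```python
def halving_steps(n: int) -> int:
    """Count how many times a range of size n can be halved before it reaches 1."""
    steps = 0
    size = n
    while size > 1:
        size //= 2  # each binary search iteration halves the search range
        steps += 1
    return steps

print(halving_steps(8))          # → 3 (8 -> 4 -> 2 -> 1)
print(halving_steps(1_000_000))  # → 19, i.e. floor(log2(1,000,000))
```

With integer halving the count comes out as ⌊log₂(n)⌋, which is why binary search on a million elements needs only about 20 iterations in the worst case.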

Example

For a list of size 1,000,000, binary search takes a maximum of 20 iterations to find the target: log₂(1,000,000) ≈ 20

Compare this to linear search, which would take up to 1,000,000 iterations in the worst case.
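The numbers in this example can be verified directly with Python's standard math module:

```python
import math

n = 1_000_000
print(math.log2(n))             # ≈ 19.93
print(math.ceil(math.log2(n)))  # 20: binary search needs at most ~20 iterations
# Linear search, by contrast, may need up to n = 1,000,000 comparisons.
```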

Best and Worst Case Time Complexities

  • Best Case: The best-case scenario occurs when the target element sits exactly at the middle of the list, so it is found in the first iteration. The algorithm completes in constant time, giving a best-case time complexity of O(1).
  • Worst Case: The worst-case scenario occurs when the target element is either not present in the list or is located at one of its extremes. The algorithm must keep halving the search range until it shrinks to a single element, giving a worst-case time complexity of O(log n).
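Both cases can be observed by counting loop iterations in an instrumented binary search (a sketch; the iteration counter is added here for illustration):

```python
def binary_search_iterations(arr, target):
    """Return (index or None, number of loop iterations)."""
    low, high = 0, len(arr) - 1
    iterations = 0
    while low <= high:
        iterations += 1
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid, iterations
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return None, iterations

data = list(range(1023))  # sorted list, size n = 1023

# Best case: the target sits at the middle index and is found immediately.
print(binary_search_iterations(data, data[511]))  # → (511, 1)

# Worst case: the target is absent; the loop runs ~log2(1023) ≈ 10 times.
print(binary_search_iterations(data, -1))         # → (None, 10)
```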

Space Complexity of Binary Search

The space complexity of an algorithm describes how much additional memory it uses as the input size increases. For binary search, the space complexity depends on whether the implementation is iterative or recursive.

The iterative implementation uses a constant amount of extra space for variables like low, high, and mid. This memory use does not grow with the input size, so the space complexity of the iterative implementation is O(1).
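A standard iterative implementation (a sketch) makes this concrete: the only extra storage is the three index variables, regardless of how large the list is.

```python
def binary_search(arr, target):
    """Iterative binary search on a sorted list. Extra space: O(1)."""
    low, high = 0, len(arr) - 1   # only these indices are stored
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not present

print(binary_search([2, 5, 8, 12, 16, 23], 16))  # → 4
print(binary_search([2, 5, 8, 12, 16, 23], 7))   # → -1
```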

The recursive implementation uses additional space on the call stack for each recursive call. In the worst case, the recursion depth is log₂(n), so the space complexity of the recursive implementation is O(log n).
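The recursive variant (a sketch) makes the call-stack growth visible: each call handles half the remaining range, so the recursion depth is at most about log₂(n).

```python
def binary_search_recursive(arr, target, low=0, high=None):
    """Recursive binary search; call-stack depth is O(log n)."""
    if high is None:
        high = len(arr) - 1
    if low > high:
        return -1  # base case: empty range, target absent
    mid = (low + high) // 2
    if arr[mid] == target:
        return mid
    elif arr[mid] < target:
        return binary_search_recursive(arr, target, mid + 1, high)
    else:
        return binary_search_recursive(arr, target, low, mid - 1)

print(binary_search_recursive([1, 3, 5, 7, 9, 11], 9))  # → 4
```

Note that Python does not perform tail-call optimization, so each recursive call genuinely occupies a stack frame until the search finishes.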

Comparison with Other Search Algorithms

Here’s how binary search compares to other common search algorithms:

Algorithm      | Time Complexity | Space Complexity | Notes
---------------|-----------------|------------------|----------------------------
Binary Search  | O(log n)        | O(1) (iterative) | Requires a sorted list.
Linear Search  | O(n)            | O(1)             | Works on any list.
Hash Table     | O(1) average    | O(n)             | Requires additional memory.

Key Takeaways:

  • Binary Search is much faster than Linear Search for large datasets.
  • Hash Tables are faster than binary search but require more memory.
  • Binary search is ideal for sorted lists where memory usage needs to be minimized.

Practical Implications of O(log n) Time Complexity

The O(log n) time complexity makes binary search highly efficient for large datasets. Here’s why: