Big O in Practice

What it is

Big O describes how an algorithm's runtime or memory usage grows as the input size n increases. It focuses on asymptotic behavior - the growth trend when n gets very large - ignoring small details like constants, and typically considering the worst case.

Example: an algorithm with cost 3n² + 5n + 10 is classified as O(n²).
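A quick sanity check of why the n² term wins (the cost function is the one from the example above; the helper name is ours):

```javascript
// cost(n) = 3n² + 5n + 10, the example cost function above
const cost = n => 3 * n ** 2 + 5 * n + 10;

// As n grows, cost(n) / n² settles near the leading constant 3 —
// exactly the constant Big O discards. The n² term dominates, so O(n²).
for (const n of [10, 100, 10000]) {
  console.log(n, (cost(n) / n ** 2).toFixed(3));
}
```

For n = 10 the ratio is still 3.6 (the lower-order terms matter); by n = 10,000 it is 3.0005.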

Why it matters

  • Predict scalability before shipping to production.
  • Compare solutions without needing benchmarks.
  • Foundation for architecture and design decisions.
The notation family:

  • O (Big O) - upper bound (worst case).
  • Ω (Big Omega) - lower bound (best case).
  • Θ (Big Theta) - tight bound (when the upper and lower bounds coincide).

In practice, "Big O" is used as a generic synonym for "complexity", even when it would technically be Θ.


Time Complexity

Growth order, from fastest to slowest:

O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(n!)

O(1) - Constant

Runtime doesn't depend on input size.

function getFirst(arr) {
  return arr[0];
}

Real-world examples: array access by index, push/pop on a stack, get/put on a HashMap (amortized), reading a variable.


O(log n) - Logarithmic

At each iteration, the problem is reduced by a fixed fraction (usually by half).

function binarySearch(arr, target) {
  let left = 0;
  let right = arr.length - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return -1;
}

Real-world examples: binary search, operations on balanced trees (AVL, Red-Black), B-Tree lookups (database indexes).


O(n) - Linear

Iterates through every element once.

function sum(arr) {
  let total = 0;
  for (const value of arr) total += value;
  return total;
}

Real-world examples: linear search, array copy, linked list traversal, validating every item in a form.


O(n log n) - Linearithmic

Divides the problem into log n levels and processes n elements at each level. It's the theoretical lower bound for comparison-based sorting algorithms.

function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));
  return merge(left, right);
}

function merge(left, right) {
  const result = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) result.push(left[i++]);
    else result.push(right[j++]);
  }
  return [...result, ...left.slice(i), ...right.slice(j)];
}

Real-world examples: Merge Sort, Quick Sort (average case), Heap Sort, Timsort - the default for sorting objects in Java (Arrays.sort on Object[]; primitives use dual-pivot quicksort), list.sort() in Python, and Array.prototype.sort() in modern JS engines.


O(n²) - Quadratic

Two nested loops over the input.

function bubbleSort(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length - i - 1; j++) {
      if (arr[j] > arr[j + 1]) {
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]];
      }
    }
  }
  return arr;
}

Real-world examples: Bubble Sort, Insertion Sort, Selection Sort, comparing every pair in a collection, naive duplicate detection.


O(n³) - Cubic

Three nested loops.

function multiplyMatrices(a, b, n) {
  const result = Array.from({ length: n }, () => new Array(n).fill(0));
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      for (let k = 0; k < n; k++) {
        result[i][j] += a[i][k] * b[k][j];
      }
    }
  }
  return result;
}

Real-world examples: naive matrix multiplication, Floyd-Warshall (all-pairs shortest paths in a graph).


O(2ⁿ) - Exponential

Each call branches into two more, so the work doubles every time n grows by one. Infeasible starting around n ≈ 30.

function fib(n) {
  if (n < 2) return n;
  return fib(n - 1) + fib(n - 2);
}

Real-world examples: recursive Fibonacci without memoization, brute-force Subset Sum, generating all subsets of a set.


O(n!) - Factorial

Explores every possible permutation. Infeasible already at n ≈ 12.

function permutations(arr) {
  if (arr.length <= 1) return [arr];
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    const rest = [...arr.slice(0, i), ...arr.slice(i + 1)];
    for (const p of permutations(rest)) {
      result.push([arr[i], ...p]);
    }
  }
  return result;
}

Real-world examples: brute-force Traveling Salesman, generating all permutations/anagrams.


Space Complexity

Measures the additional space an algorithm consumes as a function of n. It doesn't count the input itself, only auxiliary memory allocated (variables, created structures, call stack).

O(1) - Constant

Uses a fixed amount of memory, independent of n.

function max(arr) {
  let result = -Infinity;
  for (const value of arr) {
    if (value > result) result = value;
  }
  return result;
}

Real-world examples: Bubble Sort (in-place), variable swap, counters, two-pointer without extra allocation.


O(log n) - Logarithmic

Common in recursion that splits the problem in half - the call stack grows proportionally to log n.

function binarySearchRecursive(arr, target, left, right) {
  if (left > right) return -1;
  const mid = Math.floor((left + right) / 2);
  if (arr[mid] === target) return mid;
  if (arr[mid] < target) return binarySearchRecursive(arr, target, mid + 1, right);
  return binarySearchRecursive(arr, target, left, mid - 1);
}

Real-world examples: recursive binary search, Quick Sort (average-case stack space).


O(n) - Linear

Allocates a structure proportional to the input.

function double(arr) {
  return arr.map(x => x * 2);
}

Real-world examples: creating a copy of an array, recursion with depth n (recursive factorial), HashMap with n keys, simple memoized Fibonacci.
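The memoized Fibonacci mentioned above is a compact illustration of O(n) space (a sketch; the function name is ours):

```javascript
// Fibonacci with memoization: O(n) time, O(n) space.
// The Map caches one entry per distinct n, and the recursion
// depth also grows linearly, so auxiliary memory is O(n).
function fibMemo(n, cache = new Map()) {
  if (n < 2) return n;
  if (cache.has(n)) return cache.get(n);
  const result = fibMemo(n - 1, cache) + fibMemo(n - 2, cache);
  cache.set(n, result);
  return result;
}
```

fibMemo(50) returns instantly, whereas the naive O(2ⁿ) version from the time-complexity section would take minutes.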


O(n log n) - Linearithmic

A Merge Sort that is not in-place - like the slice-based version shown earlier - allocates temporary arrays at each level of the division.

Real-world examples: Merge Sort with auxiliary buffer, some divide-and-conquer algorithms.
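To see where the n log n comes from, a slice-based merge sort can be instrumented to count allocated elements (a rough sketch; exact counts depend on the implementation, and the names are ours):

```javascript
// Counts elements copied into temporary arrays by a slice-based
// merge sort. Each of the ~log₂ n levels copies all n elements,
// so the total allocation is on the order of n log n.
let allocated = 0;

function mergeSortCounted(arr) {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  allocated += arr.length; // slice(0, mid) + slice(mid) copy arr.length elements
  const left = mergeSortCounted(arr.slice(0, mid));
  const right = mergeSortCounted(arr.slice(mid));
  allocated += left.length + right.length; // merge output buffer
  const result = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    result.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return [...result, ...left.slice(i), ...right.slice(j)];
}
```

For n = 1,024 (log₂ n = 10) the counter reports 20,480 = 2 · n · log₂ n allocated elements, matching the n log n growth.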


O(n²) - Quadratic

An n × n matrix. Common in dynamic programming over pairs.

function createAdjacencyMatrix(n) {
  return Array.from({ length: n }, () => new Array(n).fill(0));
}

Real-world examples: graph adjacency matrix, DP tables for Longest Common Subsequence and Edit Distance.


Growth table

Approximate number of operations for each class as n grows:

n           O(1)   O(log n)   O(n)       O(n log n)    O(n²)               O(2ⁿ)
10          1      ~3         10         ~33           100                 1,024
100         1      ~7         100        ~664          10,000              ~10³⁰
1,000       1      ~10        1,000      ~9,966        1,000,000           impractical
1,000,000   1      ~20        10⁶        ~2 × 10⁷      10¹² (infeasible)   -
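The approximate values in the table can be reproduced with a few lines (a sketch; the `~` entries are rounded, and the function name is ours):

```javascript
// Approximate operation counts for one row of the growth table.
// Math.log2 produces the O(log n) and O(n log n) columns.
function growth(n) {
  return {
    constant: 1,
    logarithmic: Math.round(Math.log2(n)),
    linear: n,
    linearithmic: Math.round(n * Math.log2(n)),
    quadratic: n * n,
  };
}

console.log(growth(1000));
// logarithmic ≈ 10, linearithmic ≈ 9,966 — the n = 1,000 row above
```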

How to analyze in practice

  1. Sequential loops add up: O(n) + O(n) = O(n).
  2. Nested loops multiply: two loops of size n → O(n²).
  3. Constants and lower-order terms are dropped: O(3n² + 100n + 5) = O(n²).
  4. Worst case is the default, but also consider the amortized case - e.g., ArrayList.add in Java is amortized O(1) despite occasional O(n) resizes.
  5. Different variables stay separate: loops over collections a and b → O(a + b), not O(n). This matters when the sizes differ by orders of magnitude.
  6. Recursion: time = calls × work per call; space = maximum stack depth.
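Applying rules 1-3 to a concrete (hypothetical) function:

```javascript
// Rule walk-through: a sequential O(n) pass followed by nested loops.
// O(n) + O(1) + O(n²) = O(n²) — the lower-order terms are dropped.
function countPairsAboveAverage(arr) {
  let sum = 0;
  for (const v of arr) sum += v;           // rule 1: O(n)
  const avg = sum / arr.length;            // O(1)

  let count = 0;
  for (let i = 0; i < arr.length; i++) {   // rule 2: nested loops
    for (let j = i + 1; j < arr.length; j++) { // multiply → O(n²)
      if (arr[i] + arr[j] > 2 * avg) count++;
    }
  }
  return count; // rule 3: O(n + 1 + n²) = O(n²)
}
```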

Common pitfalls

  • Array.includes(), indexOf(), in on an array → O(n), not O(1). For fast lookup use Set or Map.
  • String concatenation inside a loop in many languages → O(n²) due to immutability. Use StringBuilder (Java) or array.join() (JS).
  • slice()/concat() → O(n) per call. Inside a loop, they turn into a hidden O(n²).
  • Recursion without memoization (e.g., Fibonacci) → explodes to O(2ⁿ). With memoization it drops to O(n) time and O(n) space.
  • I/O and network complexity almost always dominate CPU - optimizing from O(n²) to O(n log n) is irrelevant if the code makes 1,000 sequential HTTP calls.
  • Stream pipelines in Java and chained .map().filter().reduce() in JS hide multiple passes over the collection - each operation is O(n), so the chain becomes O(kn) with k = number of operations.
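The first pitfall in code: membership checks with includes() inside a loop are O(n²), while building a Set first makes the whole pass O(n) (a minimal sketch; the function names are ours):

```javascript
// O(n²): includes() scans all of b on every iteration of the filter.
function intersectSlow(a, b) {
  return a.filter(x => b.includes(x));
}

// O(n): the Set is built once in O(n), and Set.has() is O(1) amortized.
function intersectFast(a, b) {
  const lookup = new Set(b);
  return a.filter(x => lookup.has(x));
}
```

Both return the same result; for arrays with hundreds of thousands of elements, the difference is seconds versus milliseconds.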

Quick reference by data structure

Structure            Access      Search      Insertion    Deletion
Array                O(1)        O(n)        O(n)         O(n)
Linked List          O(n)        O(n)        O(1)*        O(1)*
Hash Table           O(1)†       O(1)†      O(1)†        O(1)†
Binary Search Tree   O(log n)†   O(log n)†   O(log n)†    O(log n)†
Heap                 O(1) top    O(n)        O(log n)     O(log n)
Stack / Queue        -           O(n)        O(1)         O(1)

* Assuming direct access to the node (insertion/removal at a known position).
† Average/amortized case. Worst case can be O(n) for hash tables with heavy collisions and unbalanced BSTs.