Learning Objectives
By the end of this section, you will be able to:
- Identify and describe the major complexity classes from O(1) to O(n!)
- Recognize code patterns that produce each complexity class
- Understand the practical implications of each class for real-world systems
- Make informed decisions about algorithm selection based on input size and complexity
- Calculate feasibility thresholds for different complexity classes
Why This Matters: When you see code, you should instantly recognize its complexity class like a musician reads sheet music. This skill separates engineers who build scalable systems from those who create systems that collapse under load. Every technical interview will test this, and every production system depends on it.
The Complexity Hierarchy
Complexity classes form a hierarchy from fastest to slowest growing. Understanding this hierarchy is fundamental to algorithm analysis and selection.
Let's visualize how dramatically these differ. For n = 1,000,000 (a million elements, quite modest by modern standards):
| Complexity | Operations | Time at 1B ops/sec | Verdict |
|---|---|---|---|
| O(1) | 1 | 1 nanosecond | ✅ Instant |
| O(log n) | 20 | 20 nanoseconds | ✅ Instant |
| O(√n) | 1,000 | 1 microsecond | ✅ Instant |
| O(n) | 1,000,000 | 1 millisecond | ✅ Fast |
| O(n log n) | 20,000,000 | 20 milliseconds | ✅ Acceptable |
| O(n²) | 10¹² | ~17 minutes | ⚠ Problematic |
| O(n³) | 10¹⁸ | ~32 years | ❌ Impossible |
| O(2ⁿ) | ∞ | Heat death of universe | ❌ Absurd |
| O(n!) | ∞ | Beyond comprehension | ❌ Absurd |
The Great Divide
O(1): Constant Time
Constant time operations complete in the same amount of time regardless of input size. The algorithm does a fixed number of operations—whether working with 10 elements or 10 billion.
The Defining Characteristic
The runtime does not depend on n. There are no loops that scale with input, no recursion that deepens with input, and no data structure traversals.
Common O(1) Operations
| Operation | Data Structure | Why It's O(1) |
|---|---|---|
| Access by index | Array | Direct memory calculation: base + index × size |
| Get/Set first/last | Array, Deque | Index 0 or length-1 is known |
| Push/Pop | Stack | Only touch the top element |
| Enqueue/Dequeue | Circular Queue | Head and tail pointers maintained |
| Insert/Delete at ends | Doubly Linked List | Head/tail pointers maintained |
| Get/Put (average) | Hash Table | Hash computes bucket directly |
| Get min/max | Min/Max Heap | Root element is always the extremum |
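These patterns are easy to verify in code. A minimal sketch (the fixed-capacity stack here is a hypothetical illustration, not a production container):

```c
#include <assert.h>

// O(1): access by index is a single address computation, no loop
int get_at(const int *arr, int i) {
    return arr[i];
}

// O(1): push/pop touch only the top element, regardless of stack depth
typedef struct {
    int data[100];  // fixed capacity, for illustration only
    int top;        // index of the next free slot
} Stack;

void push(Stack *s, int x) { s->data[s->top++] = x; }
int  pop(Stack *s)         { return s->data[--s->top]; }
```

Neither function contains a loop or recursion that depends on input size, which is the signature of O(1).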
The Amortized Caveat
Some operations are O(1) only in the amortized sense. Appending to a dynamic array is the classic example: an occasional resize copies all n elements, but averaged over a long sequence of appends, the cost per operation is still constant.
Quick Check
Which of these operations is NOT O(1)?
O(log n): Logarithmic Time
Logarithmic algorithms are extraordinarily efficient. They achieve this by eliminating a constant fraction of possibilities with each step, typically half.
The Core Insight
If you can eliminate half the remaining candidates at each step, you reach an answer in at most ⌈log₂ n⌉ steps. This is because:
Starting with n elements and halving k times leaves n/2ᵏ candidates; reaching a single remaining candidate requires n/2ᵏ = 1, i.e., exactly k = log₂ n halvings.
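The halving argument can be checked directly. This small helper (a sketch) counts how many halvings it takes to reduce n candidates to one:

```c
// Counts halvings from n down to 1.
// For a power of two this is exactly log2(n); otherwise floor(log2(n)).
int count_halvings(unsigned int n) {
    int k = 0;
    while (n > 1) {
        n /= 2;  // eliminate half the remaining candidates
        k++;
    }
    return k;
}
```

A million candidates fall to one in about 20 halvings, matching the table below.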
The Power of Logarithmic Growth
Logarithms grow incredibly slowly. This table shows why O(log n) algorithms can handle massive inputs:
| Input Size n | log₂(n) | n / log₂(n) |
|---|---|---|
| 1,000 | 10 | 100× better than O(n) |
| 1,000,000 | 20 | 50,000× better |
| 1,000,000,000 | 30 | 33,333,333× better |
| 10¹⁵ | 50 | 2 × 10¹³× better |
Common O(log n) Operations
| Algorithm/Operation | Why It's O(log n) |
|---|---|
| Binary search | Halves search space each step |
| BST search/insert/delete | Tree height is O(log n) for balanced trees |
| Heap insert/delete | Bubble up/down the tree height |
| Exponentiation by squaring | Reduces exponent by half each step |
| GCD (Euclidean algorithm) | Arguments shrink by at least half |
| Finding a number's digits | Divide by 10 until 0 |
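Binary search is the archetype. A minimal sketch over a sorted int array:

```c
// O(log n) binary search: each iteration halves the search space.
// Returns the index of target in sorted arr[0..n-1], or -1 if absent.
int binary_search(const int *arr, int n, int target) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  // written to avoid (lo + hi) overflow
        if (arr[mid] == target) return mid;
        if (arr[mid] < target)
            lo = mid + 1;  // target can only be in the right half
        else
            hi = mid - 1;  // target can only be in the left half
    }
    return -1;
}
```

With 33 million elements, the loop body runs at most about 25 times.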
Base of the Logarithm
The base of the logarithm is irrelevant in Big-O notation: log₂ n and log₁₀ n differ only by the constant factor log₂ 10 ≈ 3.32, so O(log₂ n) = O(log₁₀ n) = O(log n).
Quick Check
If a search algorithm can examine at most 25 elements before finding (or not finding) the target in a sorted array of 33 million elements, what is its complexity?
O(n): Linear Time
Linear time algorithms process each element of the input exactly once (or a constant number of times). If you double the input, you double the work.
When Is Linear Time Optimal?
Many problems have an Ω(n) lower bound—you must examine every element at least once:
- Finding the maximum in an unsorted array (must check all elements)
- Computing the sum of elements (must see every element)
- Checking if an element exists in an unsorted collection
- Validating that all elements satisfy a property
For these problems, an O(n) algorithm is optimal—you cannot do better.
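Finding the maximum is a representative sketch: one comparison per element, and no element can be skipped:

```c
// O(n): every element must be inspected once; skipping any element
// could miss the true maximum, so this matches the Ω(n) lower bound.
int find_max(const int *arr, int n) {
    int best = arr[0];  // assumes n >= 1
    for (int i = 1; i < n; i++)
        if (arr[i] > best)
            best = arr[i];
    return best;
}
```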
Linear Time in Practice
O(n) is the sweet spot for many algorithms. It scales well and is often optimal:
| Input Size | Operations | Time at 1B ops/sec |
|---|---|---|
| 1,000 | 1,000 | 1 µs |
| 1,000,000 | 1,000,000 | 1 ms |
| 1,000,000,000 | 1,000,000,000 | 1 second |
| 100,000,000,000 | 10¹¹ | 100 seconds |
Multiple Passes Are Still O(n)
An algorithm that makes two, three, or any constant number of passes over the input remains linear: O(2n) = O(3n) = O(n), since Big-O discards constant factors.
O(n log n): Linearithmic Time
Linearithmic (or "n log n") is the complexity of efficient divide-and-conquer algorithms. It's slightly worse than linear but dramatically better than quadratic.
The Sorting Lower Bound
A fundamental result in computer science: any comparison-based sorting algorithm must make at least ⌈log₂(n!)⌉ ≈ n log₂ n = Ω(n log n) comparisons in the worst case. This means:
- Merge sort at O(n log n) is optimal for comparison sorting
- Heap sort and quick sort (average case) achieve this bound
- You cannot sort faster using only comparisons
The Divide-and-Conquer Pattern
O(n log n) typically arises from this pattern:
- Divide: Split the problem into two subproblems of size n/2
- Conquer: Recursively solve each subproblem
- Combine: Merge the results in O(n) time
This gives the recurrence T(n) = 2T(n/2) + O(n), which solves to T(n) = O(n log n).
```c
// O(n log n) Merge Sort

void merge(int arr[], int left, int mid, int right) {
    // O(n) merge step
    int n1 = mid - left + 1;
    int n2 = right - mid;

    int L[n1], R[n2]; // Temporary arrays

    for (int i = 0; i < n1; i++) L[i] = arr[left + i];
    for (int j = 0; j < n2; j++) R[j] = arr[mid + 1 + j];

    int i = 0, j = 0, k = left;
    while (i < n1 && j < n2) {
        if (L[i] <= R[j])
            arr[k++] = L[i++];
        else
            arr[k++] = R[j++];
    }
    while (i < n1) arr[k++] = L[i++];
    while (j < n2) arr[k++] = R[j++];
}

void merge_sort(int arr[], int left, int right) {
    if (left < right) {
        int mid = left + (right - left) / 2;

        merge_sort(arr, left, mid);      // T(n/2)
        merge_sort(arr, mid + 1, right); // T(n/2)
        merge(arr, left, mid, right);    // O(n)
    }
    // Recurrence: T(n) = 2T(n/2) + O(n)
    // Solution:   T(n) = O(n log n)
}
```

Where O(n log n) Appears
| Algorithm | Task | Why O(n log n) |
|---|---|---|
| Merge Sort | Sorting | Divide by 2, O(n) merge |
| Heap Sort | Sorting | n insertions/deletions, O(log n) each |
| Quick Sort (avg) | Sorting | Expected O(n log n) partitioning |
| FFT | Signal processing | Divide-and-conquer recursion |
| Building a balanced BST | Data structure | n insertions, O(log n) each |
| Closest pair of points | Geometry | Divide-and-conquer approach |
Quick Check
An algorithm makes O(n) passes over the data, and each pass does O(log n) work per element. What is the total complexity?
O(n²), O(n³): Polynomial Time
Polynomial time algorithms (O(n^k) for some constant k) are considered "tractable" in complexity theory, but in practice, high-degree polynomials quickly become impractical.
O(n²): Quadratic Time
Quadratic time typically arises from nested loops where both iterate over the input. Common patterns include:
- Comparing every element with every other element
- Simple sorting algorithms (bubble, selection, insertion)
- Finding all pairs that satisfy some condition
- Dynamic programming with two dimensions of size n
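The all-pairs pattern looks like this in code; here, a sketch that checks whether any two distinct elements sum to a target:

```c
#include <stdbool.h>

// O(n²): every pair (i, j) with i < j is examined,
// which is n(n-1)/2 = O(n²) comparisons in the worst case.
bool any_pair_sums_to(const int *arr, int n, int target) {
    for (int i = 0; i < n; i++)          // n iterations
        for (int j = i + 1; j < n; j++)  // up to n - 1 iterations
            if (arr[i] + arr[j] == target)
                return true;
    return false;
}
```

A hash table can reduce this particular problem to O(n), a point revisited in the exercises.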
The n² Trap
- n = 1,000: 1 million operations (1 ms) — seems fine
- n = 10,000: 100 million operations (100 ms) — getting slow
- n = 100,000: 10 billion operations (10 seconds) — unacceptable
- n = 1,000,000: 1 trillion operations (~17 minutes) — disaster
O(n³): Cubic Time
Cubic time typically comes from three nested loops or matrix operations:
```c
// O(n³) Cubic Time Examples

#define N 100  // maximum matrix/graph dimension (illustrative)

// 1. Naive matrix multiplication
void matrix_multiply(int A[][N], int B[][N], int C[][N], int n) {
    for (int i = 0; i < n; i++) {         // n iterations
        for (int j = 0; j < n; j++) {     // n iterations
            C[i][j] = 0;
            for (int k = 0; k < n; k++) { // n iterations
                C[i][j] += A[i][k] * B[k][j];
            }
        }
    }
    // Total: n × n × n = O(n³)
}

// 2. Floyd-Warshall all-pairs shortest paths
void floyd_warshall(int dist[][N], int n) {
    for (int k = 0; k < n; k++) {
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
            }
        }
    }
}
```

O(n³) becomes impractical quickly. For n = 1,000, you have 1 billion operations (about 1 second). For n = 10,000, that's a trillion operations (about 17 minutes).
Strassen's Algorithm
The naive O(n³) bound for matrix multiplication is not optimal: Strassen's algorithm runs in O(n^log₂ 7) ≈ O(n^2.81) by replacing eight recursive block multiplications with seven, and later algorithms have pushed the exponent below 2.38.
O(2ⁿ), O(n!): Exponential and Factorial
These complexities represent the boundary between solvable and unsolvable problems at scale. They grow so fast that even small increases in n make problems intractable.
O(2ⁿ): Exponential Time
Exponential algorithms typically enumerate all subsets or make binary choices at each step:
| n | 2ⁿ | Time at 1B ops/sec |
|---|---|---|
| 10 | 1,024 | 1 µs |
| 20 | 1,048,576 | 1 ms |
| 30 | 1,073,741,824 | 1 second |
| 40 | 1.1 × 10¹² | 18 minutes |
| 50 | 1.1 × 10¹⁵ | 13 days |
| 60 | 1.2 × 10¹⁸ | 36 years |
| 100 | 1.3 × 10³⁰ | ~3,000 × age of the universe |
```c
// O(2^n) Exponential Time Examples

#include <stdio.h>
#include <stdbool.h>

// 1. Generate all subsets (power set)
void generate_subsets(int arr[], int n) {
    // There are 2^n possible subsets
    for (int mask = 0; mask < (1 << n); mask++) {
        printf("{ ");
        for (int i = 0; i < n; i++) {
            if (mask & (1 << i))
                printf("%d ", arr[i]);
        }
        printf("}\n");
    }
}

// 2. Naive Fibonacci (recursive)
int fib_naive(int n) {
    if (n <= 1) return n;
    return fib_naive(n - 1) + fib_naive(n - 2);
    // Each call makes 2 calls, creating ~2^n total calls
    // (Actually O(φ^n) where φ ≈ 1.618)
}

// 3. Subset sum (brute force)
bool subset_sum_exists(int arr[], int n, int target) {
    for (int mask = 0; mask < (1 << n); mask++) {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            if (mask & (1 << i))
                sum += arr[i];
        }
        if (sum == target) return true;
    }
    return false;
}
```

O(n!): Factorial Time
Factorial is even worse than exponential. It grows so fast that even n = 20 is essentially unsolvable:
| n | n! | Perspective |
|---|---|---|
| 5 | 120 | Easy |
| 10 | 3,628,800 | 3.6 ms |
| 12 | 479,001,600 | 0.5 seconds |
| 15 | 1.3 × 10¹² | 21 minutes |
| 20 | 2.4 × 10¹⁸ | 77 years |
| 25 | 1.6 × 10²⁵ | ~500 million years |
| 100 | 9.3 × 10¹⁵⁷ | More than atoms in observable universe |
```c
// O(n!) Factorial Time Examples

#include <stdio.h>
#include <limits.h>

#define N 20  // maximum number of cities (illustrative)

static void swap(char *a, char *b) { char t = *a; *a = *b; *b = t; }

// Generate all permutations
void permute(char str[], int l, int r) {
    if (l == r) {
        printf("%s\n", str);
    } else {
        for (int i = l; i <= r; i++) {
            swap(&str[l], &str[i]);
            permute(str, l + 1, r);
            swap(&str[l], &str[i]); // Backtrack
        }
    }
    // For a string of length n: prints n! permutations
}

// Traveling Salesman Problem (brute force).
// Assumes a next_permutation(int *, int) helper in the spirit of C++'s
// std::next_permutation; C's standard library does not provide one.
int tsp_brute_force(int graph[][N], int n) {
    int vertices[n - 1];
    for (int i = 0; i < n - 1; i++) vertices[i] = i + 1;

    int min_cost = INT_MAX;

    // Try all permutations of vertices 1 to n-1
    do {
        int cost = graph[0][vertices[0]];
        for (int i = 0; i < n - 2; i++)
            cost += graph[vertices[i]][vertices[i + 1]];
        cost += graph[vertices[n - 2]][0];

        if (cost < min_cost) min_cost = cost;
    } while (next_permutation(vertices, n - 1));

    return min_cost;
    // Complexity: O((n-1)!) ≈ O(n!)
}
```

When You See Exponential/Factorial
- Accept small n: If n will always be small (say, n < 20), it might be acceptable
- Find a better algorithm: Many exponential problems have polynomial solutions (e.g., dynamic programming for Fibonacci)
- Use approximation: Heuristics and approximation algorithms trade optimality for speed
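The Fibonacci case illustrates the second option. The naive recursive version shown earlier is exponential, but a bottom-up rewrite computes each value exactly once:

```c
// O(n) iterative Fibonacci: each value is computed once,
// replacing the exponential recursion tree with a single loop.
long long fib_linear(int n) {
    if (n <= 1) return n;
    long long prev = 0, curr = 1;  // fib(0), fib(1)
    for (int i = 2; i <= n; i++) {
        long long next = prev + curr;
        prev = curr;
        curr = next;
    }
    return curr;
}
```

fib_linear(50) finishes instantly, while the naive recursion would make tens of billions of calls.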
Interactive: Complexity Class Explorer
Use this interactive tool to explore all complexity classes. Adjust the input size, select different classes, and watch how operations scale. Pay attention to when each class becomes impractical.
Linear Time - O(n)
✓ Tractable. Time grows proportionally with input size. If input doubles, time doubles. You must examine each element once.
Algorithm Examples
- • Linear search
- • Finding max/min
- • Counting elements
- • Array sum
- • Single pass through data
Real-World Analogy
Counting people in a room - you need to count each person exactly once. More people = more counting time.
Growth Comparison
All Classes at n = 10 (1 billion ops/sec)
| Class | Notation | Operations | Time | Status |
|---|---|---|---|---|
| Constant | O(1) | 1.00 | 1.00 ns | ✓ |
| Logarithmic | O(log n) | 3.32 | 3.32 ns | ✓ |
| Square Root | O(√n) | 3.16 | 3.16 ns | ✓ |
| Linear | O(n) | 10 | 10.00 ns | ✓ |
| Linearithmic | O(n log n) | 33 | 33.22 ns | ✓ |
| Quadratic | O(n²) | 100 | 100.00 ns | ⚠ |
| Cubic | O(n³) | 1.00K | 1.00 µs | ⚠ |
| Exponential | O(2ⁿ) | 1.02K | 1.02 µs | ✗ |
| Factorial | O(n!) | 3.63M | 3.63 ms | ✗ |
Here's another view comparing growth rates visually. Toggle different complexity classes to see how they diverge:
| Function | Complexity | Operations at n=10 |
|---|---|---|
| Constant | O(1) | 1 |
| Logarithmic | O(log n) | 3 |
| Linear | O(n) | 10 |
| Quadratic | O(n²) | 100 |
Practical Thresholds
Given a time budget (e.g., "must complete in 1 second") and assuming 10⁹ operations per second, here are approximate maximum input sizes for each complexity:
| Complexity | Max n for 1 second | Max n for 1 minute | Max n for 1 hour |
|---|---|---|---|
| O(1) | Any | Any | Any |
| O(log n) | effectively unlimited | effectively unlimited | effectively unlimited |
| O(n) | 10⁹ | 6 × 10¹⁰ | 3.6 × 10¹² |
| O(n log n) | 4 × 10⁷ | 2 × 10⁹ | 10¹¹ |
| O(n²) | 31,623 | 244,949 | 1,897,367 |
| O(n³) | 1,000 | 3,915 | 15,326 |
| O(2ⁿ) | 30 | 36 | 42 |
| O(n!) | 12 | 13 | 15 |
Rule of Thumb for Interviews
When given a time constraint in a coding interview, use these guidelines:
| Input Size Range | Target Complexity | Typical Algorithm |
|---|---|---|
| n ≤ 10 | O(n!) or O(2ⁿ) | Brute force, all permutations |
| n ≤ 20 | O(2ⁿ) | Bitmask DP, subset enumeration |
| n ≤ 500 | O(n³) | Floyd-Warshall, cubic DP |
| n ≤ 5,000 | O(n²) | Nested loops, quadratic DP |
| n ≤ 10⁶ | O(n log n) | Sorting-based, divide and conquer |
| n ≤ 10⁸ | O(n) | Linear scan, hash table |
| n > 10⁸ | O(log n) or O(1) | Binary search, math formula |
Quick Check
A problem has n = 100,000. The naive O(n²) solution times out. What complexity should you target?
Recognizing Complexity in Code
With practice, you should instantly recognize the complexity of code patterns. Here's a cheat sheet:
Pattern Recognition Guide
| Code Pattern | Complexity | Key Indicator |
|---|---|---|
| No loops, no recursion | O(1) | Fixed number of operations |
| Loop where i doubles: i *= 2 | O(log n) | Iteration count: log n |
| Loop where i halves: i /= 2 | O(log n) | Halving reduces to log n |
| Single loop: for i in 0..n | O(n) | Iteration count: n |
| Two sequential loops | O(n) | O(n) + O(n) = O(n) |
| Outer loop, inner loop halving | O(n log n) | n × log n |
| Divide in half + O(n) merge | O(n log n) | Master theorem |
| Two nested loops 0..n | O(n²) | n × n iterations |
| Loop 0..n, inner 0..i | O(n²) | 1 + 2 + ... + n = O(n²) |
| Three nested loops | O(n³) | n × n × n iterations |
| Recursive calls on all subsets | O(2ⁿ) | 2ⁿ subsets |
| Generating permutations | O(n!) | n! permutations |
Quick Test: What's the Complexity?
```c
// What is the complexity of each function?

// Function A
int func_a(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i *= 2) // How many iterations?
        sum += i;
    return sum;
}

// Function B
void func_b(int n) {
    for (int i = 0; i < n; i++)       // n iterations
        for (int j = 0; j < 100; j++) // 100 iterations (constant!)
            printf("*");
}

// Function C
int func_c(int n) {
    if (n <= 1) return n;
    return func_c(n/2) + func_c(n/2); // Two recursive calls
}

// Function D
void func_d(int n) {
    for (int i = 0; i < n; i++)
        for (int j = i; j < n; j++)
            printf("%d %d\n", i, j);
}
```

Answers
- A: O(log n) — i doubles each iteration, so log₂(n) iterations
- B: O(n) — the inner loop runs 100 times (a constant), so 100n = O(n)
- C: O(n) — despite two recursive calls, each call does only O(1) work. Level k of the recursion tree has 2ᵏ calls, and there are log₂ n levels, so the total is 1 + 2 + 4 + ... + n = O(n).
- D: O(n²) — triangular pattern: n + (n-1) + ... + 1 = n(n+1)/2 = O(n²)
Summary
You now have a comprehensive understanding of the complexity class hierarchy:
| Class | Growth | Practical Limit | Examples |
|---|---|---|---|
| O(1) | Constant | Any size | Array access, hash lookup |
| O(log n) | Logarithmic | Any size | Binary search, BST operations |
| O(n) | Linear | ~10⁹ | Linear search, single pass |
| O(n log n) | Linearithmic | ~10⁸ | Merge sort, heap sort |
| O(n²) | Quadratic | ~10⁴ | Bubble sort, all pairs |
| O(n³) | Cubic | ~10³ | Matrix multiply, Floyd-Warshall |
| O(2ⁿ) | Exponential | ~30 | Subset enumeration |
| O(n!) | Factorial | ~12 | Permutation enumeration |
Key Takeaways
- The hierarchy matters: O(n log n) is dramatically better than O(n²) at scale
- Polynomial vs exponential: O(n³) is slow but tractable; O(2ⁿ) is intractable for large n
- Recognize patterns: Nested loops usually mean O(n²), halving means O(log n)
- Match complexity to input size: Know the practical limits for your time budget
- Always ask: Can I do better? If O(n\u00B2) exists, often O(n log n) does too
Exercises
Conceptual Questions
- Why is O(n log n) considered the "gold standard" for sorting? What prevents us from sorting faster using comparisons?
- Explain why O(1000n) = O(n) but O(n²) ≠ O(n), even though 1000 > n for small n.
- A problem has input size n = 50. Your boss says an O(2ⁿ) solution is "fine since n is small." Is this reasonable? Calculate the operations.
- Why does the loop "for (i = n; i > 0; i /= 2)" run O(log n) times, not O(n/2) times?
Complexity Analysis
- Determine the complexity:

```c
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        for (int k = 0; k < n; k++)
            if (arr[i] + arr[j] + arr[k] == target)
                return true;
```

- Determine the complexity:

```c
for (int i = 1; i < n; i *= 2)
    for (int j = 0; j < n; j++)
        sum += arr[j];
```

- Determine the complexity:

```c
int i = n;
while (i > 0) {
    for (int j = 0; j < i; j++)
        printf("*");
    i /= 2;
}
```

- A function has the recurrence T(n) = T(n/2) + n. What is its complexity? (Hint: expand the recurrence)
Design Questions
- You need to find if any two numbers in an unsorted array sum to a target value. The naive approach is O(n\u00B2). Can you achieve O(n)? (Hint: hash table)
- Given a sorted array, finding a single element is O(log n). How would you modify binary search to count all occurrences of an element? What's the complexity?
Solution Hints
- Ex 1: Three nested loops, each O(n): O(n\u00B3)
- Ex 2: Outer loop runs log n times, inner runs n times: O(n log n)
- Ex 3: Work per level: n + n/2 + n/4 + ... = O(n) total (geometric series)
- Ex 4: T(n) = n + n/2 + n/4 + ... = 2n = O(n)
- Design 1: Use a hash set. For each element, check if (target - element) exists. O(n).
- Design 2: Binary search for leftmost and rightmost occurrence. O(log n) total.
In the next section, we'll explore Space Complexity—analyzing not just the time an algorithm takes, but the memory it consumes. You'll learn that sometimes trading time for space (or vice versa) is the key to solving a problem efficiently.