Chapter 2

Common Complexity Classes

Complexity Analysis

Learning Objectives

By the end of this section, you will be able to:

  1. Identify and describe the major complexity classes from O(1) to O(n!)
  2. Recognize code patterns that produce each complexity class
  3. Understand the practical implications of each class for real-world systems
  4. Make informed decisions about algorithm selection based on input size and complexity
  5. Calculate feasibility thresholds for different complexity classes
Why This Matters: When you see code, you should instantly recognize its complexity class like a musician reads sheet music. This skill separates engineers who build scalable systems from those who create systems that collapse under load. Every technical interview will test this, and every production system depends on it.

The Complexity Hierarchy

Complexity classes form a hierarchy from fastest to slowest growing. Understanding this hierarchy is fundamental to algorithm analysis and selection.

O(1) < O(log n) < O(√n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(n!)

Let's visualize how dramatically these differ. For n = 1,000,000 (a million elements, quite modest by modern standards):

| Complexity | Operations | Time at 1B ops/sec | Verdict |
|---|---|---|---|
| O(1) | 1 | 1 nanosecond | ✅ Instant |
| O(log n) | 20 | 20 nanoseconds | ✅ Instant |
| O(√n) | 1,000 | 1 microsecond | ✅ Instant |
| O(n) | 1,000,000 | 1 millisecond | ✅ Fast |
| O(n log n) | 20,000,000 | 20 milliseconds | ✅ Acceptable |
| O(n²) | 10¹² | ~17 minutes | ⚠ Problematic |
| O(n³) | 10¹⁸ | ~32 years | ❌ Impossible |
| O(2ⁿ) | ≈ 10^301,030 | Longer than the heat death of the universe | ❌ Absurd |
| O(n!) | over 10^5,000,000 | Beyond comprehension | ❌ Absurd |

The Great Divide

There's a critical boundary between polynomial time (O(n^k) for some constant k) and exponential/factorial time. Polynomial algorithms are generally considered "tractable" or "efficient," while exponential algorithms are "intractable" for large inputs. This distinction is central to computational complexity theory and the famous P vs NP problem.

O(1): Constant Time

Constant time operations complete in the same amount of time regardless of input size. The algorithm does a fixed number of operations—whether working with 10 elements or 10 billion.

The Defining Characteristic

The runtime does not depend on n. There are no loops that scale with input, no recursion that deepens with input, and no data structure traversals.

O(1) Constant Time Examples

Array Access

Accessing arr[i] directly computes the memory address as base + i * element_size. This is a single arithmetic operation regardless of array size. Example: arr[500] takes the same time as arr[0].

Hash Table Lookup

Computing the hash and accessing the bucket is O(1) on average. The hash function runs in constant time for fixed-size keys. Example: finding a key in a dictionary with 1 or 1 million entries takes the same average time.

Stack Operations

Push and pop only interact with the top element. No traversal is needed, so time is constant.

Arithmetic Operations

Basic math on fixed-size integers/floats completes in a handful of CPU cycles. The cost doesn't depend on the magnitude of n.

📄 constant_time.c
```c
// O(1) Examples in C

// 1. Direct array access
int get_element(int arr[], int i) {
    return arr[i];  // One memory operation
}

// 2. Hash table lookup (average case)
int get_from_hash(HashMap* map, int key) {
    int bucket = hash(key) % map->size;  // One hash + one modulo
    return map->buckets[bucket]->value;  // Direct access (collision handling omitted)
}

// 3. Stack push/pop
void push(Stack* s, int value) {
    s->data[++s->top] = value;  // Increment and assign
}

// 4. Simple arithmetic
int compute(int a, int b, int c) {
    return a * b + c / 2 - 7;  // Fixed number of operations
}
```

Common O(1) Operations

| Operation | Data Structure | Why It's O(1) |
|---|---|---|
| Access by index | Array | Direct memory calculation: base + index × size |
| Get/Set first/last | Array, Deque | Index 0 or length − 1 is known |
| Push/Pop | Stack | Only touch the top element |
| Enqueue/Dequeue | Circular Queue | Head and tail pointers maintained |
| Insert/Delete at ends | Doubly Linked List | Head/tail pointers maintained |
| Get/Put (average) | Hash Table | Hash computes bucket directly |
| Get min/max | Min/Max Heap | Root element is always the extremum |

The Amortized Caveat

Some O(1) claims are "amortized"—meaning occasionally an operation takes longer (like resizing a dynamic array), but averaged over many operations, the cost per operation is constant. We'll explore this in detail in the Amortized Analysis section.



O(log n): Logarithmic Time

Logarithmic algorithms are extraordinarily efficient. They achieve this by eliminating a constant fraction of possibilities with each step, typically half.

The Core Insight

If you can eliminate half the remaining candidates at each step, you reach an answer in at most log₂ n steps. This is because:

n / 2ᵏ = 1  ⟹  k = log₂ n

Starting with n elements and halving k times until 1 remains requires exactly log₂ n halvings.

Binary Search — The Classic O(log n) Algorithm

Binary Search Setup

Initialize left and right to the full range. The magic happens in how we shrink this range.

The Halving Loop

Each iteration eliminates half the remaining elements — the signature of O(log n). After k iterations, only n/2ᵏ elements remain. Example: 1,000 elements → 500 → 250 → 125 → ... → 1 (about 10 steps).

Compute Midpoint

Using (left + right) / 2 could overflow for large values; mid = left + (right - left) / 2 is overflow-safe.

Eliminate Left Half

If the target is larger than the middle element, mid and everything left of it are too small. Half eliminated with one comparison!

Eliminate Right Half

If the target is smaller than the middle element, mid and everything right of it are too large. Another half eliminated!

📄 logarithmic_time.c
```c
// O(log n) Binary Search in C

int binary_search(int arr[], int n, int target) {
    int left = 0, right = n - 1;

    while (left <= right) {
        int mid = left + (right - left) / 2;  // Avoid overflow

        if (arr[mid] == target)
            return mid;       // Found!
        else if (arr[mid] < target)
            left = mid + 1;   // Eliminate left half
        else
            right = mid - 1;  // Eliminate right half
    }
    return -1;  // Not found
}

// Time complexity analysis:
// - Each iteration halves the search space
// - Starting with n, after k iterations: n/2^k elements remain
// - Loop ends when n/2^k = 1, i.e., k = log₂(n)
// - Total: O(log n)
```

The Power of Logarithmic Growth

Logarithms grow incredibly slowly. This table shows why O(log n) algorithms can handle massive inputs:

| Input Size n | log₂(n) | n / log₂(n) |
|---|---|---|
| 1,000 | 10 | 100× better than O(n) |
| 1,000,000 | 20 | 50,000× better |
| 1,000,000,000 | 30 | 33,333,333× better |
| 10¹⁵ | 50 | 2 × 10¹³× better |

Common O(log n) Operations

| Algorithm/Operation | Why It's O(log n) |
|---|---|
| Binary search | Halves search space each step |
| BST search/insert/delete | Tree height is O(log n) for balanced trees |
| Heap insert/delete | Bubble up/down the tree height |
| Exponentiation by squaring | Reduces exponent by half each step |
| GCD (Euclidean algorithm) | Arguments shrink by at least half |
| Finding a number's digits | Divide by 10 until 0 |

Base of the Logarithm

We often write O(log n) without specifying the base because log_a n = log_b n / log_b a: different bases differ only by a constant factor, which Big-O ignores. So O(log₂ n) = O(log₁₀ n) = O(ln n) = O(log n).

Quick Check

If a search algorithm can examine at most 25 elements before finding (or not finding) the target in a sorted array of 33 million elements, what is its complexity?


O(n): Linear Time

Linear time algorithms process each element of the input exactly once (or a constant number of times). If you double the input, you double the work.

When Is Linear Time Optimal?

Many problems have an Ω(n) lower bound—you must examine every element at least once:

  • Finding the maximum in an unsorted array (must check all elements)
  • Computing the sum of elements (must see every element)
  • Checking if an element exists in an unsorted collection
  • Validating that all elements satisfy a property

For these problems, an O(n) algorithm is optimal—you cannot do better.

O(n) Linear Time Examples

Initialize Variables

O(1) setup before the main work. This fixed cost doesn't affect the overall complexity.

Process Each Element

The loop runs exactly n times, and each iteration does O(1) work (comparison, possible update, addition). Total: n × O(1) = O(n). Example: for n = 1,000,000, exactly 1,000,000 iterations.

Constant Work Per Element

A comparison with the current max and a potential update — both O(1) operations. This is why the total is O(n), not O(n²).

Accumulation

Adding to sum is O(1). Even if we do several O(1) operations per iteration (say 5 or 10), the total is still O(n).

📄 linear_time.c
```c
// O(n) Linear Time Examples in C

// Finding maximum and computing sum
void analyze_array(int arr[], int n, int* max_out, int* sum_out) {
    int max_val = arr[0];
    int sum = 0;

    for (int i = 0; i < n; i++) {
        if (arr[i] > max_val)
            max_val = arr[i];
        sum += arr[i];
    }

    *max_out = max_val;
    *sum_out = sum;
}

// Linear search
int linear_search(int arr[], int n, int target) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == target)
            return i;
    }
    return -1;
}

// Counting occurrences
int count_value(int arr[], int n, int value) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        if (arr[i] == value)
            count++;
    }
    return count;
}
```

Linear Time in Practice

O(n) is the sweet spot for many algorithms. It scales well and is often optimal:

| Input Size | Operations | Time at 1B ops/sec |
|---|---|---|
| 1,000 | 1,000 | 1 µs |
| 1,000,000 | 1,000,000 | 1 ms |
| 1,000,000,000 | 10⁹ | 1 second |
| 100,000,000,000 | 10¹¹ | 100 seconds |

Multiple Passes Are Still O(n)

If you iterate through an array twice, three times, or even ten times (a constant number), the complexity remains O(n). We write O(3n) = O(n) because constants are dropped. However, nested loops where each runs n times give O(n²), not O(2n).

O(n log n): Linearithmic Time

Linearithmic (or "n log n") is the complexity of efficient divide-and-conquer algorithms. It's slightly worse than linear but dramatically better than quadratic.

The Sorting Lower Bound

A fundamental result in computer science: any comparison-based sorting algorithm must make at least Ω(n log n) comparisons in the worst case. This means:

  • Merge sort at O(n log n) is optimal for comparison sorting
  • Heap sort and quick sort (average case) achieve this bound
  • You cannot sort faster using only comparisons
Comparison sorting lower bound: Ω(n log n)

The Divide-and-Conquer Pattern

O(n log n) typically arises from this pattern:

  1. Divide: Split the problem into two subproblems of size n/2
  2. Conquer: Recursively solve each subproblem
  3. Combine: Merge the results in O(n) time

This gives the recurrence T(n) = 2T(n/2) + O(n), which solves to T(n) = O(n log n).

📄 merge_sort.c
```c
// O(n log n) Merge Sort

void merge(int arr[], int left, int mid, int right) {
    // O(n) merge step
    int n1 = mid - left + 1;
    int n2 = right - mid;

    int L[n1], R[n2];  // Temporary arrays (C99 VLAs)

    for (int i = 0; i < n1; i++) L[i] = arr[left + i];
    for (int j = 0; j < n2; j++) R[j] = arr[mid + 1 + j];

    int i = 0, j = 0, k = left;
    while (i < n1 && j < n2) {
        if (L[i] <= R[j])
            arr[k++] = L[i++];
        else
            arr[k++] = R[j++];
    }
    while (i < n1) arr[k++] = L[i++];
    while (j < n2) arr[k++] = R[j++];
}

void merge_sort(int arr[], int left, int right) {
    if (left < right) {
        int mid = left + (right - left) / 2;

        merge_sort(arr, left, mid);       // T(n/2)
        merge_sort(arr, mid + 1, right);  // T(n/2)
        merge(arr, left, mid, right);     // O(n)
    }
    // Recurrence: T(n) = 2T(n/2) + O(n)
    // Solution: T(n) = O(n log n)
}
```

Where O(n log n) Appears

| Algorithm | Task | Why O(n log n) |
|---|---|---|
| Merge Sort | Sorting | Divide by 2, O(n) merge |
| Heap Sort | Sorting | n insertions/deletions, O(log n) each |
| Quick Sort (avg) | Sorting | Expected O(n log n) partitioning |
| FFT | Signal processing | Divide-and-conquer recursion |
| Building a balanced BST | Data structure | n insertions, O(log n) each |
| Closest pair of points | Geometry | Divide-and-conquer approach |

Quick Check

An algorithm makes O(n) passes over the data, and each pass does O(log n) work per element. What is the total complexity?


O(n²), O(n³): Polynomial Time

Polynomial time algorithms (O(n^k) for some constant k) are considered "tractable" in complexity theory, but in practice, high-degree polynomials quickly become impractical.

O(n²): Quadratic Time

Quadratic time typically arises from nested loops where both iterate over the input. Common patterns include:

  • Comparing every element with every other element
  • Simple sorting algorithms (bubble, selection, insertion)
  • Finding all pairs that satisfy some condition
  • Dynamic programming with two dimensions of size n
O(n²) Quadratic Time Patterns

Outer Loop

Runs n times. For each iteration of this loop, the entire inner loop executes.

Inner Loop — The Culprit

For each of the n outer iterations, this runs up to n times. Total iterations: on the order of n × n = n². This is why nested loops are dangerous! Example: for n = 1,000, about 1,000,000 iterations; for n = 10,000, about 100,000,000.

O(1) Per Pair

Each comparison is constant time, but it happens ~n² times in total. The quadratic loop count dominates.

Triangular Loop

The inner loop starts at i + 1, so the iteration count is (n−1) + (n−2) + ... + 1 = n(n−1)/2. Still O(n²) after dropping constants!

📄 quadratic_time.c
```c
// O(n²) Quadratic Time Examples
#include <stdbool.h>
#include <stdio.h>

// 1. Checking all pairs - Classic O(n²)
bool has_pair_with_sum(int arr[], int n, int target) {
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            if (arr[i] + arr[j] == target)
                return true;
        }
    }
    return false;
}

// 2. Triangular loop - Still O(n²)!
void print_pairs_once(int arr[], int n) {
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {  // Starts at i+1
            printf("(%d, %d)\n", arr[i], arr[j]);
        }
    }
    // Iterations: (n-1) + (n-2) + ... + 1 = n(n-1)/2 = O(n²)
}

// 3. Bubble Sort - The classic O(n²) sort
void bubble_sort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
            }
        }
    }
}
```

The n² Trap

O(n²) seems acceptable for small n, but growth is deceptive:
  • n = 1,000: 1 million operations (1 ms) — seems fine
  • n = 10,000: 100 million operations (100 ms) — getting slow
  • n = 100,000: 10 billion operations (10 seconds) — unacceptable
  • n = 1,000,000: 1 trillion operations (~17 minutes) — disaster
If your data might grow, avoid O(n²) when better algorithms exist.

O(n³): Cubic Time

Cubic time typically comes from three nested loops or matrix operations:

📄 cubic_time.c
```c
// O(n³) Cubic Time Examples
#define N 100  // maximum matrix dimension

// 1. Naive matrix multiplication
void matrix_multiply(int A[][N], int B[][N], int C[][N], int n) {
    for (int i = 0; i < n; i++) {           // n iterations
        for (int j = 0; j < n; j++) {       // n iterations
            C[i][j] = 0;
            for (int k = 0; k < n; k++) {   // n iterations
                C[i][j] += A[i][k] * B[k][j];
            }
        }
    }
    // Total: n × n × n = O(n³)
}

// 2. Floyd-Warshall all-pairs shortest paths
void floyd_warshall(int dist[][N], int n) {
    for (int k = 0; k < n; k++) {
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
            }
        }
    }
}
```

O(n³) becomes impractical quickly. For n = 1,000, you have 1 billion operations (about 1 second). For n = 10,000, that's a trillion operations (about 16 minutes).

Strassen's Algorithm

Matrix multiplication can be done in O(n^2.807) using Strassen's algorithm, and theoretically even faster. This shows that the "obvious" O(n³) isn't always optimal.

O(2ⁿ), O(n!): Exponential and Factorial

These complexities represent the boundary between solvable and unsolvable problems at scale. They grow so fast that even small increases in n make problems intractable.

O(2ⁿ): Exponential Time

Exponential algorithms typically enumerate all subsets or make binary choices at each step:

2ⁿ = 2 × 2 × ⋯ × 2 (n times)

| n | 2ⁿ | Time at 1B ops/sec |
|---|---|---|
| 10 | 1,024 | 1 µs |
| 20 | 1,048,576 | 1 ms |
| 30 | 1,073,741,824 | 1 second |
| 40 | 1.1 × 10¹² | 18 minutes |
| 50 | 1.1 × 10¹⁵ | 13 days |
| 60 | 1.2 × 10¹⁸ | 36 years |
| 100 | 1.3 × 10³⁰ | ≈ 4 × 10¹³ years (~3,000 × the age of the universe) |
📄 exponential_time.c
```c
// O(2^n) Exponential Time Examples
#include <stdbool.h>
#include <stdio.h>

// 1. Generate all subsets (power set)
void generate_subsets(int arr[], int n) {
    // There are 2^n possible subsets
    for (int mask = 0; mask < (1 << n); mask++) {
        printf("{ ");
        for (int i = 0; i < n; i++) {
            if (mask & (1 << i))
                printf("%d ", arr[i]);
        }
        printf("}\n");
    }
}

// 2. Naive Fibonacci (recursive)
int fib_naive(int n) {
    if (n <= 1) return n;
    return fib_naive(n - 1) + fib_naive(n - 2);
    // Each call makes 2 calls, creating ~2^n total calls
    // (Actually O(φ^n) where φ ≈ 1.618)
}

// 3. Subset sum (brute force)
bool subset_sum_exists(int arr[], int n, int target) {
    for (int mask = 0; mask < (1 << n); mask++) {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            if (mask & (1 << i))
                sum += arr[i];
        }
        if (sum == target) return true;
    }
    return false;
}
```

O(n!): Factorial Time

Factorial is even worse than exponential. It grows so fast that even n = 20 is essentially unsolvable:

n! = n × (n−1) × (n−2) × ⋯ × 2 × 1

| n | n! | Perspective |
|---|---|---|
| 5 | 120 | Easy |
| 10 | 3,628,800 | 3.6 ms |
| 12 | 479,001,600 | 0.5 seconds |
| 15 | 1.3 × 10¹² | 21 minutes |
| 20 | 2.4 × 10¹⁸ | 77 years |
| 25 | 1.6 × 10²⁵ | ≈ 500 million years |
| 100 | 9.3 × 10¹⁵⁷ | More than atoms in the observable universe |
📄 factorial_time.c
```c
// O(n!) Factorial Time Examples
#include <limits.h>
#include <stdio.h>

static void swap(char* a, char* b) {
    char t = *a; *a = *b; *b = t;
}

// Generate all permutations
void permute(char str[], int l, int r) {
    if (l == r) {
        printf("%s\n", str);
    } else {
        for (int i = l; i <= r; i++) {
            swap(&str[l], &str[i]);
            permute(str, l + 1, r);
            swap(&str[l], &str[i]);  // Backtrack
        }
    }
    // For string of length n: prints n! permutations
}

// Traveling Salesman Problem (brute force)
// Assumes a next_permutation(int*, int) helper in the spirit of
// C++'s std::next_permutation, and a fixed graph dimension N.
int tsp_brute_force(int graph[][N], int n) {
    int vertices[n - 1];
    for (int i = 0; i < n - 1; i++) vertices[i] = i + 1;

    int min_cost = INT_MAX;

    // Try all permutations of vertices 1 to n-1
    do {
        int cost = graph[0][vertices[0]];
        for (int i = 0; i < n - 2; i++)
            cost += graph[vertices[i]][vertices[i + 1]];
        cost += graph[vertices[n - 2]][0];

        if (cost < min_cost) min_cost = cost;
    } while (next_permutation(vertices, n - 1));

    return min_cost;
    // Complexity: O((n-1)!) ≈ O(n!)
}
```

When You See Exponential/Factorial

If your algorithm is O(2ⁿ) or O(n!), you have three options:
  1. Accept small n: If n will always be small (say, n < 20), it might be acceptable
  2. Find a better algorithm: Many exponential problems have polynomial solutions (e.g., dynamic programming for Fibonacci)
  3. Use approximation: Heuristics and approximation algorithms trade optimality for speed

Interactive: Complexity Class Explorer

The explorer below compares all complexity classes at a given input size. Pay attention to when each class becomes impractical.

Complexity Class Explorer (snapshot)

Linear Time — O(n): ✓ Tractable. At n = 10: 10 ops ≈ 10.00 ns.

Time grows proportionally with input size. If input doubles, time doubles. You must examine each element once.

Algorithm Examples

  • Linear search
  • Finding max/min
  • Counting elements
  • Array sum
  • Single pass through data

Real-World Analogy

Counting people in a room — you need to count each person exactly once. More people = more counting time.

All Classes at n = 10 (1 billion ops/sec)

| Class | Notation | Operations | Time | Status |
|---|---|---|---|---|
| Constant | O(1) | 1.00 | 1.00 ns | ✓ |
| Logarithmic | O(log n) | 3.32 | 3.32 ns | ✓ |
| Square Root | O(√n) | 3.16 | 3.16 ns | ✓ |
| Linear | O(n) | 10 | 10.00 ns | ✓ |
| Linearithmic | O(n log n) | 33 | 33.22 ns | ✓ |
| Quadratic | O(n²) | 100 | 100.00 ns | ⚠ |
| Cubic | O(n³) | 1.00K | 1.00 µs | ⚠ |
| Exponential | O(2ⁿ) | 1.02K | 1.02 µs | ✗ |
| Factorial | O(n!) | 3.63M | 3.63 ms | ✗ |

Comparing growth rates directly makes the divergence obvious. At n = 10, O(1), O(log n), O(n), and O(n²) need roughly 1, 3, 10, and 100 operations. At n = 50, linear needs ~50 operations but quadratic needs ~2,500. Now imagine n = 1,000,000 (a modest dataset): linear needs 1M operations, but quadratic needs 1 trillion!

Practical Thresholds

Given a time budget (e.g., "must complete in 1 second") and assuming 10⁹ operations per second, here are approximate maximum input sizes for each complexity:

| Complexity | Max n for 1 second | Max n for 1 minute | Max n for 1 hour |
|---|---|---|---|
| O(1) | Any | Any | Any |
| O(log n) | Effectively unlimited (≈ 2^(10⁹)) | Effectively unlimited | Effectively unlimited |
| O(n) | 10⁹ | 6 × 10¹⁰ | 3.6 × 10¹² |
| O(n log n) | 4 × 10⁷ | 2 × 10⁹ | 10¹¹ |
| O(n²) | 31,623 | 244,949 | 1,897,367 |
| O(n³) | 1,000 | 3,915 | 15,326 |
| O(2ⁿ) | 30 | 36 | 42 |
| O(n!) | 12 | 13 | 15 |

Rule of Thumb for Interviews

When given a time constraint in a coding interview, use these guidelines:

| Input Size Range | Target Complexity | Typical Algorithm |
|---|---|---|
| n ≤ 10 | O(n!) or O(2ⁿ) | Brute force, all permutations |
| n ≤ 20 | O(2ⁿ) | Bitmask DP, subset enumeration |
| n ≤ 500 | O(n³) | Floyd-Warshall, cubic DP |
| n ≤ 5,000 | O(n²) | Nested loops, quadratic DP |
| n ≤ 10⁶ | O(n log n) | Sorting-based, divide and conquer |
| n ≤ 10⁸ | O(n) | Linear scan, hash table |
| n > 10⁸ | O(log n) or O(1) | Binary search, math formula |

Quick Check

A problem has n = 100,000. The naive O(n²) solution times out. What complexity should you target?


Recognizing Complexity in Code

With practice, you should instantly recognize the complexity of code patterns. Here's a cheat sheet:

Pattern Recognition Guide

| Code Pattern | Complexity | Key Indicator |
|---|---|---|
| No loops, no recursion | O(1) | Fixed number of operations |
| Loop where i doubles: i *= 2 | O(log n) | Iteration count: log n |
| Loop where i halves: i /= 2 | O(log n) | Halving reduces to log n |
| Single loop: for i in 0..n | O(n) | Iteration count: n |
| Two sequential loops | O(n) | O(n) + O(n) = O(n) |
| Outer loop 0..n, inner loop halving | O(n log n) | n × log n |
| Divide in half + O(n) merge | O(n log n) | Master theorem |
| Two nested loops 0..n | O(n²) | n × n iterations |
| Loop 0..n, inner 0..i | O(n²) | 1 + 2 + ... + n = O(n²) |
| Three nested loops | O(n³) | n × n × n iterations |
| Recursive calls on all subsets | O(2ⁿ) | 2ⁿ subsets |
| Generating permutations | O(n!) | n! permutations |

Quick Test: What's the Complexity?

📄 quiz.c
```c
// What is the complexity of each function?
#include <stdio.h>

// Function A
int func_a(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i *= 2)  // How many iterations?
        sum += i;
    return sum;
}

// Function B
void func_b(int n) {
    for (int i = 0; i < n; i++)        // n iterations
        for (int j = 0; j < 100; j++)  // 100 iterations (constant!)
            printf("*");
}

// Function C
int func_c(int n) {
    if (n <= 1) return n;
    return func_c(n/2) + func_c(n/2);  // Two recursive calls
}

// Function D
void func_d(int n) {
    for (int i = 0; i < n; i++)
        for (int j = i; j < n; j++)
            printf("%d %d\n", i, j);
}
```

Answers

  • A: O(log n) — i doubles each iteration, so log₂(n) iterations
  • B: O(n) — the inner loop runs 100 times (a constant), so 100n = O(n)
  • C: O(n) — despite two recursive calls, each call does only O(1) work; level k of the call tree has 2ᵏ calls, and summing over all log n levels gives about 2n calls total = O(n)
  • D: O(n²) — triangular pattern: n + (n−1) + ... + 1 = n(n+1)/2 = O(n²)

Summary

You now have a comprehensive understanding of the complexity class hierarchy:

| Class | Growth | Practical Limit | Examples |
|---|---|---|---|
| O(1) | Constant | Any size | Array access, hash lookup |
| O(log n) | Logarithmic | Any size | Binary search, BST operations |
| O(n) | Linear | ~10⁹ | Linear search, single pass |
| O(n log n) | Linearithmic | ~10⁸ | Merge sort, heap sort |
| O(n²) | Quadratic | ~10⁴ | Bubble sort, all pairs |
| O(n³) | Cubic | ~10³ | Matrix multiply, Floyd-Warshall |
| O(2ⁿ) | Exponential | ~30 | Subset enumeration |
| O(n!) | Factorial | ~12 | Permutation enumeration |

Key Takeaways

  1. The hierarchy matters: O(n log n) is dramatically better than O(n²) at scale
  2. Polynomial vs exponential: O(n³) is slow but tractable; O(2ⁿ) is intractable for large n
  3. Recognize patterns: Nested loops usually mean O(n²), halving means O(log n)
  4. Match complexity to input size: Know the practical limits for your time budget
  5. Always ask: Can I do better? If O(n²) exists, often O(n log n) does too

Exercises

Conceptual Questions

  1. Why is O(n log n) considered the "gold standard" for sorting? What prevents us from sorting faster using comparisons?
  2. Explain why O(1000n) = O(n) but O(n²) ≠ O(n), even though 1000 > n for small n.
  3. A problem has input size n = 50. Your boss says an O(2ⁿ) solution is "fine since n is small." Is this reasonable? Calculate the operations.
  4. Why does the loop "for (i = n; i > 0; i /= 2)" run O(log n) times, not O(n/2) times?

Complexity Analysis

  1. Determine the complexity:
    📄 exercise1.c
    ```c
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                if (arr[i] + arr[j] + arr[k] == target)
                    return true;
    ```
  2. Determine the complexity:
    📄 exercise2.c
    ```c
    for (int i = 1; i < n; i *= 2)
        for (int j = 0; j < n; j++)
            sum += arr[j];
    ```
  3. Determine the complexity:
    📄 exercise3.c
    ```c
    int i = n;
    while (i > 0) {
        for (int j = 0; j < i; j++)
            printf("*");
        i /= 2;
    }
    ```
  4. A function has the recurrence T(n) = T(n/2) + n. What is its complexity? (Hint: expand the recurrence)

Design Questions

  1. You need to find if any two numbers in an unsorted array sum to a target value. The naive approach is O(n²). Can you achieve O(n)? (Hint: hash table)
  2. Given a sorted array, finding a single element is O(log n). How would you modify binary search to count all occurrences of an element? What's the complexity?

Solution Hints

  • Ex 1: Three nested loops, each O(n): O(n³)
  • Ex 2: Outer loop runs log n times, inner runs n times: O(n log n)
  • Ex 3: Work per level: n + n/2 + n/4 + ... = O(n) total (geometric series)
  • Ex 4: T(n) = n + n/2 + n/4 + ... = 2n = O(n)
  • Design 1: Use a hash set. For each element, check if (target - element) exists. O(n).
  • Design 2: Binary search for leftmost and rightmost occurrence. O(log n) total.

In the next section, we'll explore Space Complexity—analyzing not just the time an algorithm takes, but the memory it consumes. You'll learn that sometimes trading time for space (or vice versa) is the key to solving a problem efficiently.