Chapter 2

Big-Ω and Big-Θ Notation

Complexity Analysis

Learning Objectives

By the end of this section, you will be able to:

  1. Understand Big-Ω notation as a lower bound on algorithm growth rate and apply it to characterize best-case behavior
  2. Master Big-Θ notation as a tight bound that precisely captures asymptotic growth behavior
  3. Distinguish between notations and know when to use O, Ω, or Θ based on what you're analyzing
  4. Prove asymptotic bounds using formal definitions and limit techniques
  5. Recognize the relationships between all asymptotic notations and apply the Sandwich Theorem
  6. Avoid common misconceptions that conflate notation with best/worst case analysis
Why This Matters: Big-O alone tells only part of the story. To truly understand algorithm performance, you need the complete vocabulary: upper bounds (O), lower bounds (Ω), and tight bounds (Θ). This comprehensive understanding is essential for algorithm comparison, optimization decisions, proving optimality, and succeeding in technical interviews.

The Complete Picture: Beyond Big-O

In the previous section, you learned Big-O notation, which provides an upper bound on how fast a function grows. But consider this: if someone tells you their algorithm is O(n!), is that useful? Technically, even a simple O(n) algorithm is also O(n!) since factorial eventually dominates linear growth. The O(n!) bound is true but useless.

To make meaningful statements about algorithm efficiency, we need a complete toolkit of asymptotic notations:

| Notation | Meaning | Intuition | Primary Use |
|----------|---------|-----------|-------------|
| O(g(n)) | At most g(n) | Ceiling / Upper bound | Worst-case guarantees |
| Ω(g(n)) | At least g(n) | Floor / Lower bound | Best-case, problem lower bounds |
| Θ(g(n)) | Exactly like g(n) | Tight fit / Sandwich | Precise characterization |
| o(g(n)) | Strictly less than g(n) | Definitely below ceiling | Proving strict dominance |
| ω(g(n)) | Strictly more than g(n) | Definitely above floor | Proving strict dominance |

The Historical Context

These notations were developed by mathematicians to describe the asymptotic behavior of functions—how they behave as their input approaches infinity. The notation originated with Paul Bachmann in 1894 (Big-O) and was extended by Edmund Landau in 1909. Donald Knuth popularized all five notations in computer science through his seminal work The Art of Computer Programming.

Knuth's Contribution

Donald Knuth advocated strongly for using Θ notation when bounds are tight, arguing that sloppy use of Big-O (saying "O(n²)" when Θ(n²) is meant) leads to imprecise thinking and communication errors in algorithm analysis.

An Everyday Analogy

Think of estimating travel time from New York to Boston:

  • O(6 hours): "I'll arrive in at most 6 hours" (upper bound—might be much faster)
  • Ω(3 hours): "I'll need at least 3 hours" (lower bound—can't be faster)
  • Θ(4 hours): "It takes about 4 hours, within reasonable variation" (tight bound)

The Θ estimate is most informative because it tells you what to actually expect, not just worst or best case extremes.


Big-Ω Notation: Lower Bounds

Big-Ω (Big-Omega) captures the concept of a lower bound on the growth rate. It tells us that a function grows at least as fast as another function, asymptotically.

Formal Definition

Definition: Big-Ω

We say f(n) = Ω(g(n)) if and only if there exist positive constants c and n₀ such that:

f(n) ≥ c · g(n) for all n ≥ n₀

In plain language: from some point onward (n ≥ n₀), the function f(n) is always at least a constant multiple of g(n).
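The definition can be spot-checked numerically. `omega_holds` below is a hypothetical helper, not part of any library; a finite scan can refute a chosen (c, n₀) pair, but it can only support, never prove, an asymptotic claim:

```python
def omega_holds(f, g, c, n0, n_max=10_000):
    """Check f(n) >= c * g(n) for every n in [n0, n_max].

    A finite scan like this can refute a chosen (c, n0) pair,
    but it cannot prove an asymptotic bound on its own.
    """
    return all(f(n) >= c * g(n) for n in range(n0, n_max + 1))

# Claim: 5n^2 + 3n + 7 = Omega(n^2), witnessed by c = 1, n0 = 1
f = lambda n: 5 * n**2 + 3 * n + 7
g = lambda n: n**2
print(omega_holds(f, g, c=1, n0=1))   # True: f(n) >= 1 * n^2 on the whole range
# A constant that is too large fails, showing that the choice of c matters:
print(omega_holds(f, g, c=6, n0=1))   # False, since f(n)/n^2 approaches 5 < 6
```

Different witnesses work too; any c up to 5 succeeds for a suitable n₀.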

Visual Intuition

If f(n) = Ω(g(n)), imagine c · g(n) as a floor. For large enough n, f(n) is always above this floor.

Limit-Based Definition

An equivalent characterization using limits:

f(n) = Ω(g(n))  ⟺  lim inf_{n→∞} f(n)/g(n) > 0

This says the ratio f(n)/g(n) stays bounded away from zero: f never becomes negligible compared to g.

Examples of Big-Ω

| Function f(n) | Ω Class | Proof Sketch |
|---------------|---------|--------------|
| 5n² + 3n + 7 | Ω(n²) | For n ≥ 1: 5n² + 3n + 7 ≥ 5n² ≥ 1·n² (c = 1) |
| 3n + 100 | Ω(n) | For n ≥ 1: 3n + 100 ≥ 3n ≥ 1·n (c = 1) |
| n log n + n | Ω(n log n) | For n ≥ 2: n log n + n ≥ n log n ≥ 1·n log n |
| 2ⁿ | Ω(n³) | Exponentials dominate all polynomials |
| n! | Ω(2ⁿ) | Factorial grows faster than exponential |


Ω vs Best Case: A Critical Distinction

Do not confuse lower bound with best case! Big-Ω describes how a function grows—it says nothing about best, worst, or average case. You can use Ω to describe the worst-case complexity of an algorithm! For example, merge sort's worst-case time is Ω(n log n)—it must do at least n log n work even in the worst case.

Big-Θ Notation: Tight Bounds

Big-Θ (Big-Theta) captures tight bounds—it says a function grows at exactly the same rate as another function, up to constant factors. This is the most precise asymptotic characterization.

Formal Definition

Definition: Big-Θ

We say f(n) = Θ(g(n)) if and only if there exist positive constants c₁, c₂, and n₀ such that:

c₁ · g(n) ≤ f(n) ≤ c₂ · g(n) for all n ≥ n₀

The function f(n) is sandwiched between two constant multiples of g(n).

The Sandwich Theorem

Key Insight: f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) AND f(n) = Ω(g(n)).

This is the sandwich: g(n) provides both the floor (Ω) and the ceiling (O).

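The sandwich can be spot-checked numerically. `theta_holds` is a hypothetical helper, and the constants come straight from the polynomial's terms; a sketch, not a proof:

```python
def theta_holds(f, g, c1, c2, n0, n_max=10_000):
    """Check the sandwich c1*g(n) <= f(n) <= c2*g(n) on [n0, n_max].

    A finite scan can only support an asymptotic claim, never prove it.
    """
    return all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, n_max + 1))

f = lambda n: 5 * n**2 + 3 * n + 7
g = lambda n: n**2
# 5n^2 <= 5n^2 + 3n + 7 <= 5n^2 + 3n^2 + 7n^2 = 15n^2 for n >= 1
print(theta_holds(f, g, c1=5, c2=15, n0=1))                 # True
# n^3 is the wrong yardstick: no constants trap f between multiples of it
print(theta_holds(f, lambda n: n**3, c1=5, c2=15, n0=1))    # False
```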

Limit-Based Definition

Using limits, Big-Θ has an elegant characterization:

f(n) = Θ(g(n))  ⟺  0 < lim_{n→∞} f(n)/g(n) < ∞

The limit is a positive, finite constant, meaning f and g grow at exactly the same rate asymptotically. (Strictly, this test applies when the limit exists; the definition itself does not require the ratio to converge, so Θ can hold even when this limit does not exist.)

Examples of Big-Θ

| Function f(n) | Θ Class | Why It's Tight |
|---------------|---------|----------------|
| 5n² + 3n + 7 | Θ(n²) | lim (5n²+3n+7)/n² = 5 (finite, non-zero) |
| n log n + 1000n | Θ(n log n) | n log n dominates; ratio approaches constant |
| ½n(n-1) | Θ(n²) | Expands to (n²-n)/2; limit is ½ |
| log(n²) | Θ(log n) | log(n²) = 2 log n; constant factor of 2 |
| 2ⁿ⁺¹ | Θ(2ⁿ) | 2ⁿ⁺¹ = 2 × 2ⁿ; constant factor of 2 |
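These limits can be watched numerically; the snippet below simply evaluates the ratios from the table at growing n (an illustration of the limits, not a proof):

```python
import math

f1 = lambda n: 5 * n**2 + 3 * n + 7   # Theta(n^2): ratio tends to 5
f2 = lambda n: math.log(n**2)         # Theta(log n): ratio is exactly 2
f3 = lambda n: 2 ** (n + 1)           # Theta(2^n): ratio is exactly 2

for n in (10, 1_000, 100_000):
    print(n, f1(n) / n**2)            # 5.37, then 5.003007, closing in on 5

print(f3(50) / 2**50)                 # 2.0: the constant factor laid bare
print(f2(10**6) / math.log(10**6))    # ~2.0 for the logarithm example
```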

Why Big-Θ is Preferred

Big-Θ provides the most information—it tells you exactly how fast a function grows, not just a bound that might be loose. Always use Θ when you can establish it. Saying "binary search is O(n)" is technically true but misleading—it's actually Θ(log n) in the worst case.

Quick Check

If f(n) = 7n³ + 2n² - 5n + 100, what is the Big-Θ complexity?


Little-o and Little-ω: Strict Bounds

While Big-O and Big-Ω allow the possibility that f and g grow at the same rate, the little notations require strict dominance.

Little-o: Strictly Slower

Definition: Little-o

f(n) = o(g(n)) if and only if:

lim_{n→∞} f(n)/g(n) = 0

This means f grows strictly slower than g. The function g eventually completely dominates f.

Little-ω: Strictly Faster

Definition: Little-ω

f(n) = ω(g(n)) if and only if:

lim_{n→∞} f(n)/g(n) = ∞

This means f grows strictly faster than g.

Comparison: Big vs Little

Big NotationLittle NotationKey Difference
f = O(g)f = o(g)O allows f ≈ g; little-o requires f << g
f = Ω(g)f = ω(g)Ω allows f ≈ g; little-ω requires f >> g
n² = O(n²) ✓n² = o(n²) ✗A function is not strictly smaller than itself
n = O(n²) ✓n = o(n²) ✓n is strictly dominated by n²
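The rows for n and n² can be checked directly against the limit definition of little-o:

```python
# n = o(n^2): the ratio n / n^2 = 1/n shrinks toward 0.
# n^2 is NOT o(n^2): the ratio is constantly 1 and never approaches 0.
for n in (10, 1_000, 100_000):
    print(n, n / n**2, n**2 / n**2)   # middle column -> 0, last stays 1.0
```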

When to Use Little Notations

Use little-o and little-ω when you need to prove strict separation between complexity classes. For example, proving that an algorithm is fundamentally better (not just by a constant factor) uses these notations.

Relationships Between Notations

Understanding how these notations relate to each other is crucial for mastering complexity analysis.

The Fundamental Equivalence

f(n) = Θ(g(n))  ⟺  f(n) = O(g(n)) and f(n) = Ω(g(n))

This is the most important relationship: Big-Θ is the intersection of Big-O and Big-Ω.

Symmetry Property

f(n) = O(g(n))  ⟺  g(n) = Ω(f(n))

If f is upper-bounded by g, then g is lower-bounded by f.

Transitivity

All asymptotic notations are transitive:

  • If f = O(g) and g = O(h), then f = O(h)
  • If f = Ω(g) and g = Ω(h), then f = Ω(h)
  • If f = Θ(g) and g = Θ(h), then f = Θ(h)

Hierarchy of Common Functions

The following hierarchy shows how common functions compare asymptotically:

1 ≺ log n ≺ √n ≺ n ≺ n log n ≺ n² ≺ n³ ≺ 2ⁿ ≺ n! ≺ nⁿ

Where f ≺ g means f = o(g) (f grows strictly slower than g).
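The chain can be sampled at a concrete n. At n = 64, each function in the prefix of the hierarchy yields a strictly larger value than its predecessor (one sample point, not a proof of the ordering):

```python
import math

# Sample the first part of the hierarchy at n = 64.
funcs = [
    ("log n",   lambda n: math.log2(n)),
    ("sqrt n",  lambda n: math.sqrt(n)),
    ("n",       lambda n: float(n)),
    ("n log n", lambda n: n * math.log2(n)),
    ("n^2",     lambda n: float(n**2)),
    ("2^n",     lambda n: 2.0**n),
]
values = [f(64) for _, f in funcs]
for (name, _), v in zip(funcs, values):
    print(f"{name:>8}: {v:,.0f}")   # 6, 8, 64, 384, 4096, then ~1.8e19
```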

📝notation_relationships.txt
```text
Relationship Diagram:

        Functions that are O(g(n))
    ┌─────────────────────────────────────┐
    │   (grow at most as fast as g)       │
    │                                     │
    │    Functions that are Θ(g(n))       │
    │  ┌───────────────────────────┐      │
    │  │  (grow at same rate as g) │      │
    │  │                           │      │
    │  │   f(n) ≈ c · g(n)         │      │
    │  │                           │      │
    │  └───────────────────────────┘      │
    │                                     │
    └─────────────────────────────────────┘

        Functions that are Ω(g(n))
    ┌─────────────────────────────────────┐
    │   (grow at least as fast as g)      │
    │                                     │
    │    Functions that are Θ(g(n))       │
    │  ┌───────────────────────────┐      │
    │  │  (grow at same rate as g) │      │
    │  │                           │      │
    │  │   f(n) ≈ c · g(n)         │      │
    │  │                           │      │
    │  └───────────────────────────┘      │
    │                                     │
    └─────────────────────────────────────┘

    Key: Θ(g(n)) = O(g(n)) ∩ Ω(g(n))
```


Growth Rate Comparison

Compare how different complexity classes grow as input size increases:

Growth Rate Explorer (interactive chart: Operations vs Input Size n)

| Function | Complexity | Operations at n = 10 |
|----------|------------|----------------------|
| Constant | O(1) | 1 |
| Logarithmic | O(log n) | 3 |
| Linear | O(n) | 10 |
| Quadratic | O(n²) | 100 |

Try this: Set n to 50 and compare O(n) vs O(n²). Linear needs ~50 operations, but quadratic needs ~2,500! Now imagine n = 1,000,000 (a modest dataset) - linear needs 1M operations, but quadratic needs 1 trillion!

Analyzing Real Algorithms

Let's apply all three notations to analyze real algorithms, distinguishing between the notations and cases.

Example 1: Linear Search

| Case | Time Complexity | When It Occurs |
|------|-----------------|----------------|
| Best Case | Θ(1) | Target found at first position |
| Worst Case | Θ(n) | Target at last position or absent |
| Average Case | Θ(n) | Expected n/2 comparisons |

For the algorithm as a whole (considering all cases):

  • O(n): Running time is at most linear (worst case guarantee)
  • Ω(1): Running time is at least constant (best case guarantee)
  • We cannot say Θ(n) for the overall algorithm because best and worst cases differ!
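The table above describes sequential search; a minimal implementation, with an illustrative comparison counter added here, makes each case concrete:

```python
def linear_search(arr, target):
    """Scan left to right; return (index, comparisons), or (-1, comparisons)."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

arr = [7, 3, 9, 1, 5]
print(linear_search(arr, 7))   # (0, 1)  best case: one comparison, Theta(1)
print(linear_search(arr, 5))   # (4, 5)  worst case (present): n comparisons
print(linear_search(arr, 8))   # (-1, 5) worst case (absent): n comparisons
```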

Example 2: Merge Sort

Merge Sort: Θ(n log n) in ALL Cases
🐍merge_sort.py (annotated walkthrough):

  • Function signature: merge sort takes an array and returns a sorted copy, using the divide-and-conquer paradigm.
  • Base case: arrays of size 0 or 1 are already sorted; this is the recursion termination condition.
  • Divide step: finding the midpoint to split the array into two halves takes O(1) time.
  • Recursive calls: each half is sorted recursively, contributing T(n/2) per subproblem; halving the problem size at each level gives log n levels.
  • Merge step: combining the two sorted halves takes Θ(n) time, since each element is examined exactly once.
  • Merge loop: compare elements from both halves, always taking the smaller one, for O(n) total comparisons per merge.
  • Remaining elements: after one half is exhausted, its remaining elements are copied over, keeping the merge at O(n).

```python
def merge_sort(arr):
    if len(arr) <= 1:
        return arr

    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])

    return merge(left, right)


def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result
```

Analysis: Merge sort has a remarkable property—it's Θ(n log n) in all cases:

| Case | Time Complexity | Explanation |
|------|-----------------|-------------|
| Best Case | Θ(n log n) | Always divides and merges fully |
| Worst Case | Θ(n log n) | Same division and merge pattern |
| Average Case | Θ(n log n) | Always identical behavior |

The recurrence relation is:

T(n) = 2T(n/2) + Θ(n)

By the Master Theorem (Case 2): T(n) = Θ(n log n)
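The recurrence can also be unrolled directly. The sketch below takes the Θ(n) merge cost to be exactly n and sets T(1) = 0, assumptions made purely for illustration; for powers of two the unrolled cost comes out to exactly n · log₂ n:

```python
import math

def T(n):
    """Unroll T(n) = 2*T(n/2) + n with T(1) = 0 (powers of two only)."""
    if n <= 1:
        return 0
    return 2 * T(n // 2) + n

for k in (4, 8, 10):
    n = 2**k
    print(n, T(n), int(n * math.log2(n)))  # the last two columns agree
```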

Optimal Sorting

Merge sort is asymptotically optimal for comparison-based sorting. The problem of comparison-based sorting has a lower bound of Ω(n log n)—no comparison-based algorithm can do better than this in the worst case. Merge sort achieves this bound!

Example 3: Binary Search Bounds

Binary Search: Different Bounds for Different Cases
🐍binary_search_bounds.py (annotated walkthrough):

  • Function definition: binary search requires a SORTED array. The complexity varies by case, but we can characterize it with all three notations.
  • Complexity analysis (docstring): Big-O gives the upper bound, Big-Ω the lower bound, and Big-Θ a tight bound when best and worst cases match.
  • Initialize pointers: an O(1) operation; the left and right pointers track the search space.
  • Main loop: runs at most log₂(n) times because the search space halves each iteration (for n = 1000, at most 10 iterations). Worst case: Θ(log n).
  • Found target: best-case termination. If the target is exactly at the midpoint on the first check, we have Θ(1) time.
  • Eliminate half: each elimination halves the remaining elements, which is why the algorithm is Θ(log n) in the worst case.

```python
def binary_search(arr, target):
    """
    Best case: Θ(1) - target at middle
    Worst case: Θ(log n) - target at end or absent
    Average case: Θ(log n)

    For ALL cases:
      - Lower bound: Ω(1) (at least one comparison)
      - Upper bound: O(log n) (at most log n comparisons)
    """
    left, right = 0, len(arr) - 1
    comparisons = 0

    while left <= right:
        mid = (left + right) // 2
        comparisons += 1

        if arr[mid] == target:
            return mid, comparisons  # Best case: found immediately
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1

    return -1, comparisons  # Worst case: log n comparisons
```
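Driving this routine over every element of a small sorted array makes the bounds concrete. The function is repeated below only so the snippet runs on its own:

```python
def binary_search(arr, target):
    # Same routine as above, repeated so this snippet is self-contained.
    left, right = 0, len(arr) - 1
    comparisons = 0
    while left <= right:
        mid = (left + right) // 2
        comparisons += 1
        if arr[mid] == target:
            return mid, comparisons
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1, comparisons

arr = list(range(15))                             # 15 sorted elements
counts = [binary_search(arr, t)[1] for t in arr]
print(min(counts), max(counts))                   # 1 4
# min = 1 comparison (target at the first midpoint): the Omega(1) best case.
# max = 4 = ceil(log2(15 + 1)) comparisons: the Theta(log n) worst case.
```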

Proving Asymptotic Bounds

Let's work through formal proofs for asymptotic bounds using both the definition-based and limit-based approaches.

Proof 1: Big-Ω from Definition

Claim: 3n² + 2n + 1 = Ω(n²)

Proof: We need to find constants c > 0 and n₀ > 0 such that 3n² + 2n + 1 ≥ cn² for all n ≥ n₀.

Choose c = 3 and n₀ = 1. Then for all n ≥ 1:

3n² + 2n + 1 ≥ 3n² = 3 · n² ✓

Since 2n + 1 ≥ 0 for all n ≥ 1, the inequality holds. ∎

Proof 2: Big-Θ from Definition

Claim: 3n² + 2n + 1 = Θ(n²)

Proof: We need both O and Ω bounds.

Upper Bound (O): For n ≥ 1:

3n² + 2n + 1 ≤ 3n² + 2n² + n² = 6n²

So with c₂ = 6 and n₀ = 1, we have f(n) = O(n²).

Lower Bound (Ω): Already proven above with c₁ = 3.

Since 3n² ≤ 3n² + 2n + 1 ≤ 6n² for all n ≥ 1, we have f(n) = Θ(n²). ∎

Proof 3: Using Limits

The limit approach is often faster:

lim_{n→∞} (3n² + 2n + 1)/n² = lim_{n→∞} (3 + 2/n + 1/n²) = 3

Since the limit equals 3 (finite and positive), f(n) = Θ(n²). ∎
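The same limit can be watched numerically: the ratio equals 3 + 2/n + 1/n², which falls toward 3 as n grows.

```python
# (3n^2 + 2n + 1) / n^2 = 3 + 2/n + 1/n^2, so the ratio drops toward 3:
for n in (10, 1_000, 1_000_000):
    print(n, (3 * n**2 + 2 * n + 1) / n**2)   # 3.21, then 3.002001, then ~3
```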

When to Use Each Proof Method

  • Definition-based: Best for simple polynomials; shows explicit constants
  • Limit-based: Often easier for complex expressions; immediately gives tight bounds
  • L'Hôpital's Rule: Useful when limits are indeterminate (0/0 or ∞/∞)

Common Misconceptions

These notations are frequently misunderstood. Let's address the most common mistakes.

Misconception 1: "Big-O Means Worst Case"

Incorrect!

Big-O describes how a function grows. It says nothing about best, worst, or average case. You can use Big-O to describe best-case time!

Correct understanding:

  • The worst-case running time of quicksort is Θ(n²)
  • The best-case running time of quicksort is Θ(n log n)
  • Both statements use Big-Θ (or Big-O) to describe the growth rate of each case

Misconception 2: "Big-Ω Means Best Case"

Also Incorrect!

Big-Ω is a lower bound on growth rate, not a best-case analysis. You can use Big-Ω to describe worst-case behavior!

Example: The worst-case time of comparison-based sorting is Ω(n log n). This is a lower bound on the worst case—even in the worst case, any comparison sort must do at least n log n work.

Misconception 3: "O(n²) Means Exactly n² Operations"

No!

Big-O gives only an upper bound. Saying "bubble sort is O(n!)" is technically true but useless. Always aim for the tightest bound—use Big-Θ when possible.

Misconception 4: "Constants Don't Matter"

While 2n² and 1000n² are both Θ(n²), the constant matters for practical performance. Big-Θ ignores constants by definition, but in real systems:

  • A 500× constant factor difference is significant!
  • For small inputs, constants may dominate
  • Cache effects and memory access patterns create real-world constants
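The point can be made concrete with invented cost models. The constants 2 and 1000 below are hypothetical, chosen only to show a large constant factor delaying the asymptotic winner:

```python
import math

# Hypothetical operation counts (constants invented for illustration):
cost_quadratic = lambda n: 2 * n * n              # Theta(n^2), small constant
cost_nlogn = lambda n: 1000 * n * math.log2(n)    # Theta(n log n), huge constant

# For small inputs the asymptotically worse algorithm is actually cheaper:
print(cost_quadratic(100) < cost_nlogn(100))      # True: 20,000 vs ~664,386

# Scan for the crossover point where the asymptotics finally take over:
n = 2
while cost_quadratic(n) <= cost_nlogn(n):
    n += 1
print(n)   # a few thousand; beyond this, Theta(n log n) wins forever
```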

Misconception 5: Mixing Up Cases and Bounds

| Statement | Correct? | Explanation |
|-----------|----------|-------------|
| Quick sort is O(n²) | ✓ | Worst-case upper bound |
| Quick sort is Θ(n²) | ✗ | Only worst case is Θ(n²); average is Θ(n log n) |
| Merge sort is Θ(n log n) | ✓ | All cases are Θ(n log n) |
| Binary search worst case is Ω(log n) | ✓ | Must do at least log n work in worst case |

Real-World Applications

Understanding Ω and Θ isn't just academic—it has profound practical implications.

Problem Lower Bounds

Big-Ω is essential for proving that no algorithm can solve a problem faster than a certain rate:

| Problem | Lower Bound | Implication |
|---------|-------------|-------------|
| Comparison sorting | Ω(n log n) | Merge sort, heap sort are optimal |
| Finding minimum | Ω(n) | Must examine every element |
| Matrix multiplication | Ω(n²) | Must at least read all entries |
| Searching unsorted array | Ω(n) | Linear scan is optimal |
| Element uniqueness | Ω(n log n) | Sorting-based approach is optimal |

Optimality Proofs

When an algorithm matches the problem's lower bound, we know it's asymptotically optimal:

  • Merge sort: Θ(n log n) matches the Ω(n log n) lower bound for comparison sorting
  • Binary search: Θ(log n) matches the Ω(log n) lower bound for searching sorted arrays
  • Hash table lookup: Θ(1) average case matches the Ω(1) lower bound for key lookup

Optimization Decisions

Knowing tight bounds helps decide where to focus optimization effort:

  • If your algorithm is Θ(n²) and the problem has Ω(n²) lower bound: Focus on constant factors—you can't improve asymptotically.
  • If your algorithm is Θ(n²) but only Ω(n log n) is required: There's room for algorithmic improvement!
  • If you're at the lower bound: Micro-optimizations (cache efficiency, SIMD, parallelization) become worthwhile.

Algorithm Selection

📝algorithm_comparison.txt
```text
Comparison: Θ(n²) vs Θ(n log n) algorithms

Input Size    Θ(n²) ops       Θ(n log n) ops     Ratio
──────────────────────────────────────────────────────
n = 1,000     1,000,000       ~10,000            100×
n = 10,000    100,000,000     ~133,000           750×
n = 100,000   10,000,000,000  ~1,660,000         6,000×
n = 1,000,000 10¹²            ~20,000,000        50,000×

At 1 million operations per second:
- Θ(n²) for n=1,000,000: 11.5 days
- Θ(n log n) for n=1,000,000: 20 seconds
```

Summary

You now have the complete toolkit for asymptotic analysis:

| Notation | Definition | Use When |
|----------|------------|----------|
| O(g(n)) | f(n) ≤ c·g(n) for large n | You want an upper bound guarantee |
| Ω(g(n)) | f(n) ≥ c·g(n) for large n | You want a lower bound guarantee |
| Θ(g(n)) | c₁·g(n) ≤ f(n) ≤ c₂·g(n) | You want to characterize exact growth |
| o(g(n)) | f(n)/g(n) → 0 | You want to prove strict dominance |
| ω(g(n)) | f(n)/g(n) → ∞ | You want to prove strict dominance |

Key Takeaways

  1. Big-Ω provides lower bounds—the function grows at least this fast
  2. Big-Θ provides tight bounds—the function grows at exactly this rate
  3. Θ = O ∩ Ω—both upper and lower bounds must match
  4. Notation ≠ Case Analysis—O, Ω, Θ describe growth rates, not best/worst/average cases
  5. Always prefer Θ when you can establish it—it's the most informative
  6. Lower bounds apply to problems, telling us the best possible algorithm
  7. Little-o and little-ω prove strict separation between complexity classes

Exercises

Conceptual Questions

  1. Prove that n³ + 100n² = Θ(n³) using the formal definition (find c₁, c₂, and n₀).
  2. Is log₂ n = Θ(log₁₀ n)? Prove or disprove.
  3. Explain why 2ⁿ = O(3ⁿ) but 3ⁿ ≠ O(2ⁿ).
  4. True or False: If f(n) = O(g(n)) and f(n) = Ω(g(n)), then f(n) = Θ(g(n)). Justify your answer.
  5. Give an example of two functions f and g where neither f = O(g) nor g = O(f).

Algorithm Analysis

  1. Determine the tight bound (Θ) for this code:
     🐍exercise1.py
     ```python
     for i in range(1, n + 1):
         for j in range(1, i * i + 1):
             pass  # O(1) work
     ```
  2. Determine the tight bound for this code:
     🐍exercise2.py
     ```python
     i = n
     while i >= 1:
         for j in range(i):
             pass  # O(1) work
         i = i // 2
     ```
  3. An algorithm has best-case time Θ(n) and worst-case time Θ(n³). What can you say about its overall complexity using Big-O and Big-Ω?

Proofs

  1. Prove that n! = ω(2ⁿ) (strict lower bound) using Stirling's approximation or a direct argument.
  2. Prove that √n = o(n) (strict upper bound) using limits.
  3. Prove or disprove: (log n)^(log n) = O(nⁿ)

Solution Hints

  • Q1: For n ≥ 1: n³ ≤ n³ + 100n² ≤ 101n³. So c₁ = 1, c₂ = 101, n₀ = 1.
  • Q2: Yes! log₂ n = log₁₀ n / log₁₀ 2 = (1/log₁₀ 2) × log₁₀ n. Constant factor!
  • Nested loop Q1: Sum is 1² + 2² + ... + n² = n(n+1)(2n+1)/6 = Θ(n³)
  • Nested loop Q2: Sum is n + n/2 + n/4 + ... ≈ 2n = Θ(n) (geometric series)

In the next section, we'll explore Common Complexity Classes—from O(1) to O(n!) and beyond—and learn to recognize them instantly in real code.