Welcome to the first post in a new series I’m starting, “From The Vault”. In this series, I’ll be sharing some of the notes, thoughts, and insights I’ve recorded over the course of my studies in Software Engineering at Iowa State and beyond. These notes come from my personal vault (my Obsidian vault).
Today, we’re delving into the world of computational complexity, specifically focusing on Big O, Big Theta, and Big Omega. They are fundamental concepts in the study of algorithm efficiency and performance, and they were covered during my COM S 311 class at Iowa State University.
Note: While these notes are based on content directly from my vault, they have been parsed and reorganized before sharing so that they make more sense within a blog post.
Understanding Asymptotic Efficiency
Asymptotic efficiency examines how the running time of an algorithm grows with the input size, particularly as the input size approaches infinity. It is vital for evaluating algorithms beyond raw execution times, which vary across hardware.
The Big Three of Computational Complexity
Big-O Notation (O)
Big-O gives an upper bound on the asymptotic behavior of a function, essentially defining the maximum rate at which the function can grow.
Mathematical Definition: If positive constants c and n_0 exist such that ∀n ≥ n_0, f(n) ≤ c·g(n), then f(n) ∈ O(g(n)).
It provides an asymptotic upper bound, analogous to ≤
When we say an algorithm has O(n) complexity, we mean it takes at most linear time relative to the input size
Size of n based on input type:
integer → # of bits or digits
array → # of elements
string → length of the string
graph → # of vertices or edges
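To tie this to the input sizes above, here's a small sketch of my own (not from the class notes): a linear search over an array does a constant amount of work per element, so in the worst case its running time grows linearly with n, the number of elements, i.e. O(n).

static int linearSearch(int[] arr, int target) {
    // Worst case: the loop body runs once per element, so the total work
    // grows linearly with n = arr.length -> O(n)
    for (int i = 0; i < arr.length; i++) {
        if (arr[i] == target) {
            return i; // constant work per iteration
        }
    }
    return -1; // target not present
}

Searching an array of a million elements therefore takes at most a million constant-cost iterations, regardless of the hardware it runs on.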
Big-Omega Notation (Ω)
Big-Omega is about the lower bound, indicating the minimum growth rate of a function.
It provides an asymptotic lower bound, analogous to ≥. Mirroring the Big-O definition: if positive constants c and n_0 exist such that ∀n ≥ n_0, f(n) ≥ c·g(n), then f(n) ∈ Ω(g(n)).
Big-Theta Notation (Θ)
Big-Theta provides a tight bound, showing that a function grows precisely at a certain rate.
Definition: Θ(g(n)) = { f(n) | there exist positive constants c_1, c_2, and n_0 such that ∀n ≥ n_0, 0 ≤ c_1·g(n) ≤ f(n) ≤ c_2·g(n) }
It “sandwiches” the function f(n) between c_1·g(n) and c_2·g(n), denoting functions with the same growth rate
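A handy way to connect the three notations (my own framing, not from the lecture notes): f(n) ∈ Θ(g(n)) exactly when f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n)), i.e. g(n) is simultaneously an asymptotic upper and lower bound for f(n).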
Simplifying with Examples
Big-O: If f(n) = 3n + 2, we simplify this to O(n), disregarding constants and lower-order terms.
Big-Omega: For f(n) = n^2 + 3n, we can say it's Ω(n^2), since it grows at least as fast as n^2.
Big-Theta: If we have f(n) = 2n^2 + 3n + 1, it simplifies to Θ(n^2), indicating it grows like n^2 (up to constant factors) for large values of n.
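To connect that shorthand back to the formal definition, here is one possible choice of constants for the Big-Theta example (my own worked step, not from the original notes): for f(n) = 2n^2 + 3n + 1 and n ≥ 1,
2n^2 ≤ 2n^2 + 3n + 1 ≤ 2n^2 + 3n^2 + n^2 = 6n^2
so c_1 = 2, c_2 = 6, and n_0 = 1 satisfy the definition, and f(n) ∈ Θ(n^2).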
Examples from Class
Example 1
f(n) = 5n^2 + 2n + 1 and g(n) = n^2
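My notes stop at the setup for this one; presumably the intended claim is f(n) ∈ O(g(n)), and one way to show it is: for all n ≥ 1, 5n^2 + 2n + 1 ≤ 5n^2 + 2n^2 + n^2 = 8n^2, so choosing c = 8 and n_0 = 1 satisfies the definition.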
Example 2
f(n) = 6n^3 + 18n − 38 and g(n) = n^3
Claim: f(n)∈O(g(n))
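The proof isn't in my notes; a sketch along the same lines as the other examples: for all n ≥ 1, 6n^3 + 18n − 38 ≤ 6n^3 + 18n^3 = 24n^3, so choosing c = 24 and n_0 = 1 gives f(n) ≤ c·g(n) for all n ≥ n_0, hence f(n) ∈ O(g(n)).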
Example 3
Is n^3 ∈ O(n^2)?
Intuition tells us that n^3 grows faster than n^2
Claim: n^3 ∉ O(n^2)
Proof by contradiction:
Assume n^3 ∈ O(n^2)
By definition, ∃c, n_0 such that ∀n ≥ n_0, n^3 ≤ c·n^2, which implies n ≤ c for all n ≥ n_0
But c is a fixed constant, while n can grow arbitrarily large, so n ≤ c cannot hold for all n ≥ n_0
This is a contradiction, so n^3 ∉ O(n^2) must be true
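As a concrete instance of the contradiction (my own addition): whatever constant c the definition supplies, choosing n = max(n_0, ⌈c⌉ + 1) gives n^3 = n·n^2 > c·n^2, so the assumed bound fails.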
Example 4
f(n) = log(n^5) and g(n) = log(n) + 5
Claim: f(n)∈O(g(n))
Mathematical Rule: log(n^k) = k·log(n)
Proof:
f(n) = log(n^5) = 5·log(n) ≤ 5·log(n) + 25 = 5·(log(n) + 5) = 5·g(n)
Choose c = 5, n_0 = 1; then ∀n ≥ n_0, f(n) ≤ c·g(n)
By definition, f(n)∈O(g(n))
Example 5
f(n) = log(n) and g(n) = √n
Claim: f(n)∈O(g(n))
Proof:
Tangent: log_10(10^9) = 9·log_10(10) = 9·1 = 9
f(n) = log(n) = log((√n)^2) = 2·log(√n)
∀n ≥ 1, log(n) ≤ n; applying this to √n gives log(√n) ≤ √n
So 2·log(√n) ≤ 2·√n = 2·g(n)
Choose c = 2, n_0 = 1; then ∀n ≥ n_0, f(n) ≤ c·g(n)
By definition, f(n)∈O(g(n))
Extra: Review of log
If n = a^x, then x = log_a(n)
log(a^b) = b·log(a)
log(a·b) = log(a) + log(b)
log(a/b) = log(a·b^(−1)) = log(a) + log(b^(−1)) = log(a) − log(b)
2^(log_2(n)) = n
a^(log_b(n)) = n^(log_b(a))
log_b(x) = log_a(x) / log_a(b)
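These identities are easy to sanity-check numerically. Here's a quick sketch of my own (not from the notes) using Java's Math.log, which is the natural logarithm, together with the change-of-base rule:

public class LogIdentities {
    // log base b of x via change of base: log_b(x) = ln(x) / ln(b)
    static double logBase(double b, double x) {
        return Math.log(x) / Math.log(b);
    }

    public static void main(String[] args) {
        double a = 3, b = 2, n = 50;
        // log(a^b) = b * log(a)
        System.out.println(Math.log(Math.pow(a, b)) + " vs " + b * Math.log(a));
        // a^(log_b(n)) = n^(log_b(a))
        System.out.println(Math.pow(a, logBase(b, n)) + " vs " + Math.pow(n, logBase(b, a)));
        // 2^(log_2(n)) = n
        System.out.println(Math.pow(2, logBase(2, n)) + " vs " + n);
    }
}

Each pair of printed values should agree up to floating-point rounding.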
Formal Derivations of Runtime
Expectation: Express runtime in Big-O notation
Example 1
for (i = 1; i <= n; i++) {
    print(i)
}
Goal: Derive runtime of this loop as a function of n
Analysis:
For each iteration: how many primitive operations are done?
There's a comparison (i <= n), a print, and an increment → all of these take constant time
Let c be the number of primitive operations being done in one iteration
During each iteration, c primitive operations are done
i → # of primitive operations
1 → c
2 → c
3 → c
… → …
This sums to c + c + ⋯ + c, with n terms in total
Therefore, the total number of primitive operations is ∑_{i=1}^{n} c = c·n = O(n)
Thus, the runtime of this algorithm is O(n)
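To see the linear growth empirically, here's a small sketch of my own (not from the notes) that counts how many times the loop body executes for a few values of n; the count is exactly n, matching the O(n) analysis:

public class LoopCount {
    // Counts the iterations of the loop from Example 1 for a given n
    static long countOps(int n) {
        long count = 0;
        for (int i = 1; i <= n; i++) {
            count++; // stands in for the constant work done each iteration
        }
        return count;
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1000}) {
            System.out.println("n = " + n + " -> " + countOps(n) + " iterations");
        }
    }
}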
Example 2
for (i = 1; i <= n; i++) {
    for (j = 1; j <= n; j++) {
        // constant # of primitive operations
    }
}
Analysis:
Start with the innermost loop → each of its iterations consists of a constant number of primitive operations, say c_1; from the previous example, we know one full pass of the inner loop performs ∑_{j=1}^{n} c_1 = c_1·n = O(n) primitive operations
For a single iteration of the outer loop, the number of primitive operations is the inner-loop total plus the outer loop's own comparison and increment operations
Call the cost of the outer loop's comparison and increment operations c_2
Total runtime is ∑_{i=1}^{n} ((∑_{j=1}^{n} c_1) + c_2) = ∑_{i=1}^{n} (c_1·n + c_2) = ∑_{i=1}^{n} c_1·n + ∑_{i=1}^{n} c_2 = c_1·n^2 + c_2·n
Since this is a polynomial, we drop lower-order terms and constant factors, therefore the runtime is O(n^2)
Example 3
for (i = 1; i <= n; i++) {
    for (j = 1; j <= i; j++) {
        // constant # of primitive operations
    }
}
Analysis:
Start with the innermost loop → each iteration consists of a constant number of primitive operations, say c_1
However, in this case, the inner loop doesn’t run n times every iteration of the outer loop
Instead, it runs i times. Therefore, for iteration i of the outer loop, the number of primitive operations the inner loop does is ∑_{j=1}^{i} c_1 = c_1·i
Now, we can sum this over all n iterations of the outer loop to get the total number of primitive operations: ∑_{i=1}^{n} (∑_{j=1}^{i} c_1 + c_2), where c_2 is the constant cost of the outer loop's comparison and increment operations
From this, we can rewrite the sum as c_1·(∑_{i=1}^{n} i) + n·c_2, since ∑_{i=1}^{n} i = n(n+1)/2
Simplifying, we find this to be (c_1/2)·n(n+1) + c_2·n
In Big-O notation, we ignore lower-order terms and constants, therefore the runtime is O(n^2)
This highlights an important point: a loop nested within another loop that runs n times does not always mean the inner body executes n^2 times. In this case the inner loop runs only i times on the ith iteration of the outer loop. The analysis still results in O(n^2), but the actual number of operations (about n^2/2) is less than in the previous example, where both loops ran n times.
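To make that difference tangible, here's a quick sketch of my own (not from the notes) that counts inner-loop iterations for the full n-by-n nesting from Example 2 versus the triangular, i-bounded nesting from Example 3; both grow quadratically, but the triangular version does roughly half the work:

public class NestedLoopCounts {
    // Inner-loop iterations when the inner bound is n (Example 2): n * n
    static long fullCount(int n) {
        long count = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++)
                count++;
        return count;
    }

    // Inner-loop iterations when the inner bound is i (Example 3): n(n+1)/2
    static long triangularCount(int n) {
        long count = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= i; j++)
                count++;
        return count;
    }

    public static void main(String[] args) {
        int n = 1000;
        System.out.println("full:       " + fullCount(n));       // 1000000
        System.out.println("triangular: " + triangularCount(n)); // 500500
    }
}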
I must admit, these notes, while relatively comprehensive, have room for improvement. So far, I've found Big O, Big Theta, and Big Omega challenging to fully grasp through formal proofs.
My plan is to revisit and add to these notes as my understanding of the subject continues to grow. This is a journey, and I believe in the power of constant learning and improvement. Sharing these notes publicly isn’t just about providing a resource to others; it’s also about motivating myself.
By putting my notes out there, I’m compelled to take better, more deliberate notes during class. It’s a commitment to myself to strive for clarity in understanding and precision in note-taking. I hope that my journey will inspire you in some way and lead to shared growth.