As input sizes grow, the exact running time of an algorithm becomes less important than its asymptotic behavior. Asymptotic analysis focuses on the order of growth of running time, allowing us to compare algorithms and choose the most efficient one for large inputs. This approach emphasizes how performance scales, rather than precise execution times.
Big-O notation describes an upper bound on how fast a function grows as the input size $n$ becomes large. When analyzing algorithms, we focus on the term of the function that grows fastest, because lower-order terms and constant factors become negligible for large inputs.
For example, consider:
$$ f(n) = 7n^3 + 100n^2 - 20n + 6 $$
The dominant term is $7n^3$, so we say $f(n) = O(n^3)$. This means that for large $n$, the function grows roughly in proportion to $n^3$. Technically, we could also write $f(n) = O(n^4)$ or $f(n) = O(n^5)$, because the function grows no faster than those rates, but the tightest bound is usually the most useful.
Big-O helps us compare algorithms by focusing on how their running times increase as input sizes grow, ignoring insignificant details like smaller terms or constants.
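A quick numerical check (not a proof) makes the dominance of the leading term concrete: the ratio $f(n)/n^3$ approaches the leading coefficient $7$ as $n$ grows, so the lower-order terms really do become negligible.

```python
def f(n):
    # The running example: f(n) = 7n^3 + 100n^2 - 20n + 6
    return 7 * n**3 + 100 * n**2 - 20 * n + 6

# The ratio f(n) / n^3 tends toward the leading coefficient 7.
for n in (10, 100, 1_000, 10_000):
    print(n, f(n) / n**3)
```

At $n = 10$ the lower-order terms still dominate the ratio, but by $n = 10{,}000$ it is within about $0.01$ of $7$.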
Ω-notation gives a lower bound on how fast a function grows. It tells us that the function grows at least as fast as a certain rate.
For example, in
$$ f(n) = 7n^3 + 100n^2 - 20n + 6, $$
the dominant term is $7n^3$, so $f(n) = Ω(n^3)$ — it won’t grow slower than $n^3$. It’s also $Ω(n^2)$ and $Ω(n)$, since those grow more slowly. In general, $f(n) = Ω(n^c)$ for any constant $c \le 3$.
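The lower bound can also be spot-checked numerically. The constants below ($c = 1$ and $n_0 = 1$) are one possible choice of witnesses, not the only one: since $100n^2 - 20n + 6 > 0$ for all $n$, we in fact have $f(n) \ge 7n^3 \ge n^3$ for every $n \ge 1$.

```python
def f(n):
    # The running example: f(n) = 7n^3 + 100n^2 - 20n + 6
    return 7 * n**3 + 100 * n**2 - 20 * n + 6

# Witnesses for f(n) = Ω(n^3): with c = 1 and n0 = 1,
# f(n) >= c * n^3 holds for every n >= n0 in the tested range.
assert all(f(n) >= 1 * n**3 for n in range(1, 10_001))
```

A finite check like this cannot replace the algebraic argument, but it is a cheap way to catch a wrongly chosen constant.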
Θ-notation tells us exactly how fast a function grows, both from above and below, up to constant factors. A function is $Θ(g(n))$ precisely when it is both $O(g(n))$ and $Ω(g(n))$.

For example, since
$$ f(n) = 7n^3 + 100n^2 - 20n + 6 $$
is both $O(n^3)$ and $Ω(n^3)$, we conclude $f(n) = Θ(n^3)$. Unlike the looser bounds above, $n^3$ is the only power of $n$ for which this holds.