About two years into my career, I shipped a feature that caused our product API to crawl. Response times went from a typical 80ms to over 4 seconds. The culprit was not a missing database index or a slow network call. It was a nested loop I had written without thinking about what it would do with 50,000 records instead of the 200 I had tested with locally.
That incident cost me a stressful afternoon and a lot of explaining. It also permanently changed how I read code. Now, before I write a loop, I think about what it does as the input grows. That habit is what Big O notation is really about — not the Greek letter, not the formal definition, but the question: what happens to this code when the data gets bigger?
What Big O Actually Measures
Big O notation describes how the runtime (or memory usage) of an algorithm grows as the input size grows. It does not measure absolute speed — it measures the shape of the growth curve. A constant factor does not change the shape: whether each step costs two operations or a million, the Big O class is the same. What matters is whether the work doubles when the input doubles, or quadruples, or stays flat.
Formally: f(n) = O(g(n)) means that beyond some input size n₀, f(n) is always ≤ c · g(n) for some constant c. In practice, it means: drop the constants, keep only the dominant term. O(3n² + 2n + 100) is just O(n²). The 3, the 2, and the 100 are invisible at scale.
The Complexities You Will Actually Encounter
O(1) — Constant Time
The operation takes the same time regardless of how much data there is. Hash table lookups and array index access are the canonical examples. When you call Map.get() in JavaScript or do a dictionary lookup in Python, you are getting O(1) on average — that is the whole point of those data structures.
// O(1) — same cost whether the map has 10 or 10 million entries
const userCache = new Map<string, User>();
const user = userCache.get(userId);
O(log n) — Logarithmic Time
Each step eliminates half the remaining work. Binary search is the textbook example: with one million sorted records, you find any record in at most 20 comparisons. I use this intuition every time I choose a data structure. If something offers O(log n) lookup (like a balanced BST), I know it degrades gracefully at scale even if O(1) is not available.
function binarySearch(arr: number[], target: number): number {
  let lo = 0;
  let hi = arr.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1; // floor of the midpoint
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) {
      lo = mid + 1; // target is in the upper half
    } else {
      hi = mid - 1; // target is in the lower half
    }
  }
  return -1; // not found
}
O(n) — Linear Time
Work grows proportionally with input size. A single pass over an array is O(n). This is where most of my day-to-day code lives, and I find it completely comfortable up to tens of millions of items — at that point, you start caring about cache efficiency and streaming rather than Big O class.
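A minimal sketch of what linear time looks like — `findMax` is an illustrative name, not anything from the incident above:

```typescript
// O(n) — exactly one look at each element, constant work per element
function findMax(values: number[]): number {
  let max = -Infinity;
  for (const v of values) {
    if (v > max) max = v;
  }
  return max;
}
```

Double the array and you double the work — the curve is a straight line, which is why linear passes stay comfortable for so long.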
O(n log n) — Linearithmic Time
The floor for comparison-based sorting. Merge sort, heapsort, and most production sort implementations run here. If you need sorted output and you cannot get it from the database, O(n log n) is the best you can expect without additional constraints on the data.
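In JavaScript, Array.prototype.sort is the O(n log n) tool you reach for (modern V8 uses a Timsort variant). One sketch-worthy gotcha: without a comparator, it compares elements as strings, even for numbers.

```typescript
const ids = [10, 9, 2];

// Default sort converts elements to strings: "10" < "2" < "9"
const lexicographic = [...ids].sort();          // [10, 2, 9]

// Numeric comparator gives the order you actually wanted
const numeric = [...ids].sort((a, b) => a - b); // [2, 9, 10]
```

The complexity class is the same either way — it is the comparator, not the cost, that trips people up here.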
O(n²) — Quadratic Time
Two nested loops over the same data. This is where I got into trouble. My feature was checking, for every item in a list, whether it existed in another list using .includes(). Array.includes is O(n). Calling it inside a loop makes the whole thing O(n²). With 200 records in testing: imperceptible. With 50,000 records in production: disaster.
// What I wrote — O(n²), works fine in tests, fails in production
function getMatchingItems(source: string[], filter: string[]) {
  return source.filter(item => filter.includes(item)); // includes is O(n)!
}

// What I should have written — O(n), scales cleanly
function getMatchingItemsFast(source: string[], filter: string[]) {
  const filterSet = new Set(filter); // O(n) to build, once
  return source.filter(item => filterSet.has(item)); // O(1) lookup per item
}
The fix was one extra line. At production scale, the performance difference was a factor of 250. This is why I now reflexively check what happens inside every loop.
O(2ⁿ) and O(n!) — Exponential and Factorial
Brute-force combinatorics land here. These are effectively unusable beyond very small inputs (n > 25 or so for exponential, n > 12 for factorial). If your algorithm falls into this category, you almost certainly need dynamic programming, memoisation, or a heuristic approximation. These are real engineering problems — route optimisation, scheduling, constraint satisfaction — but the solution is never "just run the brute force."
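The classic illustration of taming exponential growth with memoisation is naive Fibonacci — a sketch, not a pattern from the article's incident:

```typescript
// Naive recursion recomputes the same subproblems over and over — O(2ⁿ)
function fibSlow(n: number): number {
  return n < 2 ? n : fibSlow(n - 1) + fibSlow(n - 2);
}

// Memoised: each subproblem is computed once — O(n) time, O(n) space
function fibFast(n: number, memo = new Map<number, number>()): number {
  if (n < 2) return n;
  if (memo.has(n)) return memo.get(n)!;
  const result = fibFast(n - 1, memo) + fibFast(n - 2, memo);
  memo.set(n, result);
  return result;
}
```

fibSlow(50) would take minutes; fibFast(50) is instant. The answer is identical — only the shape of the work changed.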
Space Complexity Is Just as Real
Big O applies to memory too, and this catches people out differently. An in-place sort is O(1) space. Merge sort is O(n) space — it builds auxiliary arrays. Recursive algorithms have O(n) stack space if they recurse n levels deep, which is why deep recursion on large inputs causes stack overflow errors. When I optimise for speed, I always ask what the trade-off is in memory. Sometimes the fastest algorithm is not the one you can afford to run.
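A sketch of the stack-space point — both functions below do O(n) work, but only one allocates O(n) call frames (the names are illustrative):

```typescript
// O(n) stack space: one call frame per element.
// On a large enough array this throws RangeError: Maximum call stack size exceeded
// (the limit is engine-dependent, typically in the low tens of thousands of frames).
function sumRecursive(values: number[], i = 0): number {
  return i >= values.length ? 0 : values[i] + sumRecursive(values, i + 1);
}

// Same O(n) time, but O(1) extra space — no depth limit to hit
function sumIterative(values: number[]): number {
  let total = 0;
  for (const v of values) total += v;
  return total;
}
```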
A Practical Decision Framework
I apply these rough thresholds in my day-to-day work:
- n < 1,000: Write the simplest, most readable code. Complexity does not matter here. Premature optimisation is a genuine cost.
- 1,000 < n < 100,000: O(n²) starts to sting. Profile before you optimise, but be aware of nested loops.
- 100,000 < n < 10M: O(n) or O(n log n) only. Know which data structures give you O(1) or O(log n) lookups and use them when you have repeated searches.
- n > 10M: Now you are thinking about streaming, batching, indexing, and hardware. Big O is necessary but not sufficient — constant factors and memory access patterns start to matter enormously.
The Pattern That Trips People Up Most
The subtlest O(n²) bugs I have seen are in code that calls a library method inside a loop without knowing what that method does internally. Array.indexOf, Array.includes, and String.indexOf are all O(n). ORM methods that issue a new database query per loop iteration create the classic N+1 problem — which is just O(n²) with network latency added. Any time I see a loop and am not sure whether the body is O(1) or O(n), I stop and check.
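The same shape shows up in-memory when joining two lists. A sketch with hypothetical Order and User types — build the index once instead of scanning per iteration:

```typescript
// Hypothetical shapes for illustration
interface User { id: number; name: string }
interface Order { id: number; userId: number }

// O(n · m): .find scans the users array for every order
function attachNamesSlow(orders: Order[], users: User[]) {
  return orders.map(o => ({ ...o, name: users.find(u => u.id === o.userId)?.name }));
}

// O(n + m): index the users once, then O(1) lookups per order
function attachNamesFast(orders: Order[], users: User[]) {
  const byId = new Map(users.map(u => [u.id, u] as const));
  return orders.map(o => ({ ...o, name: byId.get(o.userId)?.name }));
}
```

Swap the Map for a batched database query and the same restructuring is the standard fix for N+1.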
Understanding Big O is not about memorising a table of complexities. It is about building the habit of asking, every time you write a loop or call a function, what happens to this code when the data is a thousand times larger. That question, asked consistently, is what separates code that works in development from code that works in production.
Further Reading
- Big-O Cheat Sheet — a compact reference for common algorithm and data structure complexities
- MDN: Map — the O(1) lookup data structure you should be using more
- Introduction to Algorithms (Cormen et al.) — the definitive reference if you want the formal mathematical treatment