What is Big O Notation?
Big O notation is used in complexity theory, computer science, and mathematics to describe the asymptotic behavior of functions.
Basically, it tells you how fast a function grows or declines.
We can express complexity using Big O Notation.
For a problem of size n:
- a constant-time algorithm is “order 1”: O(1)
- a linear-time algorithm is “order n”: O(n)
- a quadratic-time algorithm is “order n squared”: O(n^2)
O(1)
Rating: This is the best.
Description: The algorithm always takes the same amount of time, regardless of how much data there is.
Example: Looking up an element of an array by its index
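A minimal Python sketch of constant-time lookup (the function name is illustrative, not from the original):

```python
def get_item(items, index):
    # List indexing is constant time: one step,
    # no matter how many elements the list holds.
    return items[index]

print(get_item([10, 20, 30, 40], 2))  # → 30
```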
O(log n)
Rating: Pretty great.
Description: These kinds of algorithms halve the amount of data with each iteration. If you have 100 items, it takes about 7 steps to find the answer. With 1,000 items, it takes 10 steps. And 1,000,000 items only take 20 steps. This is super fast even for large amounts of data.
Example: Binary Search
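The halving described above can be sketched as a standard binary search over a sorted list (a common textbook formulation, not code from the original):

```python
def binary_search(sorted_items, target):
    # Each iteration halves the remaining search range,
    # so the loop runs at most about log2(n) times.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # → 3
```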
O(n)
Rating: Good performance.
Description: If you have 100 items, this does 100 units of work. Doubling the number of items makes the algorithm take exactly twice as long (200 units of work).
Example: Linear Search
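A minimal linear search sketch in Python, checking each element in turn (worst case: n comparisons for n items):

```python
def linear_search(items, target):
    # Worst case examines every element once:
    # n steps for n items, hence O(n).
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1  # not found
```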
O(n log n)
Rating: Decent performance.
Description: This grows only slightly faster than linear, and it is the typical cost of efficient comparison-based sorting.
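The original names no example for this class, but merge sort is a classic O(n log n) algorithm; a minimal sketch:

```python
def merge_sort(items):
    # The list is split in half about log2(n) times, and each
    # level of splitting does O(n) merge work → O(n log n) total.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```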
O(n^2)
Rating: Kinda slow.
Description: If you have 100 items, this does 100^2 = 10,000 units of work. Doubling the number of items makes it four times slower (because 2 squared equals 4).
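A typical source of quadratic cost is comparing every pair of items with two nested loops; a minimal sketch (the duplicate check is my illustrative example, not from the original):

```python
def has_duplicate(items):
    # Two nested loops over n items compare every pair:
    # roughly n * n steps in the worst case → O(n^2).
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```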
O(n^3)
Rating: Poor performance.
Description: If you have 100 items, this does 100^3 = 1,000,000 units of work. Doubling the input size makes it eight times slower.
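Three nested loops give cubic cost; the textbook example is naive multiplication of two n×n matrices (my illustrative sketch, not from the original):

```python
def matrix_multiply(a, b):
    # Three nested loops over n → n * n * n multiplications
    # for two n×n matrices, hence O(n^3).
    n = len(a)
    result = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                result[i][j] += a[i][k] * b[k][j]
    return result
```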
O(2^n)
Rating: Very poor performance.
Description: You want to avoid these kinds of algorithms, but sometimes you have no choice. Adding just one item to the input doubles the running time.
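Why one extra item doubles the work: many exponential algorithms enumerate all 2^n subsets of the input, since each item is either in or out of a subset. A minimal sketch:

```python
def all_subsets(items):
    # Each item is either included or excluded, so there are
    # 2^n subsets of n items — adding one item doubles the count.
    if not items:
        return [[]]
    rest = all_subsets(items[1:])
    return rest + [[items[0]] + s for s in rest]

print(len(all_subsets([1, 2, 3])))  # → 8
```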
O(n!)
Rating: Intolerably slow.
Description: For anything beyond a handful of items, the algorithm will effectively never finish: each additional item multiplies the running time by the new input size.
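A classic factorial-time problem is brute-force route finding: trying every ordering of n cities means checking n! routes. A minimal sketch, assuming `distances` is a symmetric matrix of pairwise distances (my illustrative setup, not from the original):

```python
from itertools import permutations

def shortest_route(distances):
    # permutations() yields all n! orderings of the cities,
    # so the loop body runs n! times — factorial growth.
    cities = range(len(distances))
    best = float("inf")
    for route in permutations(cities):
        total = sum(distances[route[i]][route[i + 1]]
                    for i in range(len(route) - 1))
        best = min(best, total)
    return best
```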