04 February 2015

Time Complexity and Efficiency Classifications

Time complexity is a measurement of how an algorithm's running time scales with the size of its input as that input grows towards infinity. It is not concerned with the actual clock time required to run an algorithm, only with how the relative time to execute changes when the size of the dataset changes. Big O notation essentially asks the question:

"What is the worst thing that could happen?"
Complexity is usually measured in Big O notation, although there are other, less frequently used notations that measure complexity from different angles (a quick linear-search example follows the list):

  • Big Omega (Ω): best case
  • Big Theta (Θ): tight bound, the case where the Big O and Big Omega bounds meet
  • Big O: worst case
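
As a quick illustration of the difference between these bounds, here is a hedged sketch of a linear search (the linearSearch helper is my own example, not from the original post): finding the target immediately is the best case, and scanning the whole array without finding it is the worst.

function linearSearch(array, target) {
  for (var i = 0; i < array.length; i++) {
    if (array[i] === target) {
      return i; // best case: the target is the first element, Ω(1)
    }
  }
  return -1; // worst case: every element was checked, O(n)
}

var values = [3, 7, 1, 9, 5];
linearSearch(values, 3); // 0  (one comparison: the Big Omega case)
linearSearch(values, 4); // -1 (n comparisons: the Big O case)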

How to Calculate Time Complexity

To calculate time complexity, all we need to do is count the number of comparisons and assignments taking place in our program and add them together. However, we are not simply counting lines of code; we are counting the number of times those lines are executed. So if we have a program that iterates through a list of values and doubles each one (as in the example below), the time complexity is O(n): the number of operations is roughly proportional to the size of the dataset. If we have a loop that iterates through the entire dataset and performs 5 expressions on every iteration, we still have O(n), because we drop constant factors: O(5n) simplifies to O(n).

function double(array) {
  // map visits each element exactly once
  return array.map(function(element) {
    return element * 2;
  });
}
var test1 = [1,2,3,4,5];
double(test1);

This iterates over our entire dataset once; therefore the result is O(n).
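
To make the constant-dropping point concrete, here is a minimal sketch (the sumAndCount function is my own illustration, not from the original post) of a loop that performs several operations per iteration yet is still O(n):

function sumAndCount(array) {
  var sum = 0;
  var count = 0;
  for (var i = 0; i < array.length; i++) {
    sum = sum + array[i]; // runs n times
    count = count + 1;    // runs n times
  }
  return [sum, count];
}
// Each pass performs a handful of operations (a comparison,
// an increment, and two body statements), so the total is
// roughly c * n for some small constant c, which is still O(n).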

  • A single statement, conditional, or expression: O(1)
  • Recursively splitting a dataset in half on every iteration: O(log(n)) (sketched below)
  • Iterating over a dataset: O(n)
  • Iterating over a nested dataset: O(n²) (sketched below)
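
As a rough sketch of the last two cases (the binarySearch and allPairs helpers are my own illustrations, not from the original post):

// O(log(n)): each pass halves the remaining search space,
// so a sorted array of n elements takes about log2(n) passes.
function binarySearch(sortedArray, target) {
  var low = 0;
  var high = sortedArray.length - 1;
  while (low <= high) {
    var mid = Math.floor((low + high) / 2);
    if (sortedArray[mid] === target) {
      return mid;
    } else if (sortedArray[mid] < target) {
      low = mid + 1;
    } else {
      high = mid - 1;
    }
  }
  return -1;
}

// O(n²): the inner loop runs n times for each of the outer
// loop's n iterations, so about n * n operations overall.
function allPairs(array) {
  var pairs = [];
  for (var i = 0; i < array.length; i++) {
    for (var j = 0; j < array.length; j++) {
      pairs.push([array[i], array[j]]);
    }
  }
  return pairs;
}

binarySearch([1, 2, 3, 4, 5], 4); // 3
allPairs([1, 2]); // [[1, 1], [1, 2], [2, 1], [2, 2]]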