The focus of “Big O” notation is not the number of operations performed by standard methods/functions in a program, but rather how the number of operations grows as the input grows. Thus, it is not how many methods are running in your software that affects the Big O notation; it is how many more operations they will need to perform as they process larger amounts of data.
In the example by Mosh pictured below, you have two methods running. But the notation for this program is still O(1), meaning each one performs a constant amount of work, since it handles only a single piece of data no matter what.
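A minimal sketch of that idea in Python (the function and its two print statements are my illustration, not Mosh's actual example, which is in the picture):

```python
def greet(name):
    # Two print statements: still a fixed, constant number of
    # operations no matter what name is passed in -> O(1).
    print("Hello,", name)
    print("Welcome,", name)

greet("Ada")
```

Adding a third or a hundredth print statement would not change the notation: the work stays constant with respect to the input.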
The notation would be the same with a single print method handling a single piece of data, or 100 print methods each handling a single piece of data, because no individual method takes longer to process its data. However, the notation changes to O(n) if any print method is given 100 pieces of data: a method looping over 100 pieces of data takes roughly 100 times longer to finish than one handling a single piece. The “n” in O(n) is the size of the input; the running time grows in proportion to the number of pieces of data the method has to process.
Therefore, the goal is to keep every method and data structure running at O(1); the “1” here means “a constant number of operations, regardless of how many pieces of data”. The best way to demonstrate this is, as Mosh does, with arrays. So, assuming an array is traversed from beginning to end, one index at a time:
- Finding the item at index 0 of an array is O(1), because a single comparison confirms that this is the right item.
- Finding the item at index 2 (or any later index) by scanning from the start is O(n), since the number of comparisons grows linearly with the item's position in the array.
- If we could somehow skip indexes 0 and 1 and find the item at index 2, or any larger index, with a single comparison, then our lookup runs in O(1).
- All our processes should run in O(1) time, if possible.
- Algorithms and data structures solve this problem. And that is the goal of this course.
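The two lookup styles in the bullets above can be sketched in Python (function names are mine, chosen for illustration; Python lists behave like arrays for this purpose):

```python
def find_by_scanning(arr, target):
    # Linear search: compare items one by one from index 0.
    # The worst case touches every element -> O(n).
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

def find_by_index(arr, i):
    # Direct indexing: the array jumps straight to slot i in a
    # single step, regardless of the array's size -> O(1).
    return arr[i]

arr = [10, 20, 30]
print(find_by_scanning(arr, 30))  # scans 3 items to find index 2
print(find_by_index(arr, 2))      # one operation to read index 2
```

This is why arrays are the natural starting point: reading by index is already the O(1) ideal, while searching by value is the O(n) case the course's algorithms set out to improve.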
