Big-Oh Notation Demystified

I recently started learning Data Structures and Algorithms and came across the mathematical definition of Big O notation. It was as follows:

f(n) = O(g(n)) iff f(n) ≤ C·g(n) for all n ≥ n₀

As I understand it, Big O measures the performance of an algorithm by describing the order of growth of a function.
But what exactly are f(n) and g(n), and where do C and n₀ come into the picture? I am really confused here. I would like a simple, clear explanation rather than fancy English, as I am not sitting an IELTS interview.

Big O for mathematical functions is slightly different from Big O for algorithms (the latter popularized by Donald Knuth). For mathematical functions, it refers to behaviour in the limit as one of the inputs goes to infinity. In computer science, we are referring to the rate of growth with respect to the size of the input. So the mathematical definition is mostly useless to us.
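That said, since you asked where C and n₀ come from in the definition you quoted: they are just constants you are allowed to choose so that the inequality holds from some point onward. Here is a small worked instance (the particular functions and constants are my own illustration, not from any textbook):

```latex
% Worked instance of the quoted definition (illustrative numbers):
% f(n) = O(g(n)) iff there exist constants C > 0 and n0 such that
% f(n) <= C * g(n) for all n >= n0.
\[
  f(n) = 3n + 5, \qquad g(n) = n, \qquad C = 4, \qquad n_0 = 5
\]
\[
  3n + 5 \le 4n \iff 5 \le n,
\]
% so the inequality holds for every n >= n0 = 5, and therefore
% f(n) = O(n). C absorbs constant factors; n0 lets you ignore the
% small inputs where the inequality has not kicked in yet.
```

Here f(n) is the exact cost you are bounding (for example, an algorithm's running time as a function of the input size n) and g(n) is the simpler function you compare it against.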

For computer science we need to see how many iterations we perform, or how many elements we need to inspect or compare, to determine whether we are scaling at a constant rate, logarithmically, linearly, etc. with the size of our input.
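As a concrete way to see that counting in action, here is a minimal sketch of my own (not from any particular library) that counts how many comparisons a linear search performs; in the worst case the count grows in step with the input size, which is what makes it linear:

```python
def linear_search(items, target):
    """Return (index, comparison count) for target, or (-1, count) if absent."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

# Worst case (target absent): the comparison count equals the input size,
# i.e. it grows linearly with n, so the search is O(n).
for n in (10, 100, 1000):
    _, count = linear_search(list(range(n)), -1)
    print(n, count)   # 10 10, 100 100, 1000 1000
```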

If we only need to look at a fixed number of elements regardless of the size of the input, it is constant time. If we only need to look at about log(n) of the input elements, it is logarithmic time. If we need to look at each element a constant number of times, that is linear time. The only other rule at play is that we can drop lower-order terms, since we only care about the dominant scaling concern: for our purposes O(n + n^2) = O(n^2), because the quadratic term dominates for large n.
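To make those categories concrete, here are toy sketches of my own (illustrations, not from any particular course): a constant-time lookup, a logarithmic binary search, and a function whose single pass plus nested loops is O(n + n^2), i.e. O(n^2) overall:

```python
def first_element(items):
    # Constant time, O(1): one lookup, no matter how big the list is.
    return items[0]

def binary_search(sorted_items, target):
    # Logarithmic time, O(log n): each step halves the remaining range.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def total_and_zero_pairs(items):
    # One linear pass (n steps) plus a nested loop (n * n steps):
    # O(n + n^2), which we simplify to O(n^2) because the quadratic
    # term dominates for large n.
    total = sum(items)            # n steps
    pairs = 0
    for a in items:               # n * n steps
        for b in items:
            if a + b == 0:
                pairs += 1
    return total, pairs
```

For a feel of the difference: binary_search(list(range(1_000_000)), 42) needs only about 20 comparisons, while a linear scan of the same list could need up to a million.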
