Big O Notation - 1

By Hikmet Cakir


What is Big O notation? According to Wikipedia, Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.


If we want a short definition of Big O notation, in my opinion we can say: “Big O notation is a special notation that tells you how fast an algorithm is.”

For example, suppose we have a method. This code block takes an integer number (“n”) and sums the values from zero up to n.

int count = 0; // O(1)
for(int i=0; i < n; i++) { // O(n)
    count += i;      // O(1)
}
// Result O(n)

It will take “n” operations, so the run time in Big O notation is O(n). Did you notice that I said nothing about seconds or minutes? That is because Big O notation says nothing about seconds; it just lets you compare the number of operations. It tells you how fast the algorithm grows.


Big O notation establishes a worst-case run time. For example, in the method above we calculated the values O(1), O(n), and O(1), and then said the result is O(n), because in this scenario the worst-case run time depends on the given integer value.
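To illustrate the worst-case idea, here is a small sketch of my own (the class and method names are my invention, not from the article): a linear search may find the target immediately, but Big O describes the worst case, where we scan all n elements.

```java
// Linear search: best case finds the target at index 0 (O(1)),
// but the worst case scans every element, so we say it is O(n).
public class LinearSearch {
    static int indexOf(int[] values, int target) {
        for (int i = 0; i < values.length; i++) {
            if (values[i] == target) {
                return i; // best case: found immediately
            }
        }
        return -1; // worst case: every element was checked
    }

    public static void main(String[] args) {
        int[] values = {4, 8, 15, 16, 23, 42};
        System.out.println(indexOf(values, 42)); // target is last: worst case
    }
}
```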


The following table shows the complexities ordered from smallest to largest, where “n” is the size of the input.


Notation             Name
O(1)                 Constant Time
O(log n)             Logarithmic Time
O((log n)^c)         Polylogarithmic Time
O(n)                 Linear Time
O(n log n)           Linearithmic Time
O(n^2)               Quadratic Time
O(n^3)               Cubic Time
O(n^c), c > 1        Polynomial Time
O(c^n), c > 1        Exponential Time
O(n!)                Factorial Time
O(n^n)               (grows even faster than factorial)
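As a quick illustration of one row of the table, here is a sketch of binary search, a classic O(log n) algorithm (my own example, not from the article): each step halves the remaining range of a sorted array, so about log2(n) comparisons are enough.

```java
// Binary search on a sorted array: the search range halves on every
// iteration, so the loop runs about log2(n) times -> O(log n).
public class BinarySearch {
    static int search(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2; // middle of the current range
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1; // discard left half
            else hi = mid - 1;                      // discard right half
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] sorted = {1, 3, 5, 7, 9, 11};
        System.out.println(search(sorted, 7)); // prints 3
    }
}
```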

Calculating the Complexity of Non-Recursive Algorithms


We have four rules for this calculation. I share them in the following list.


1. An assignment command has constant time complexity.
2. A loop over an array of size n has O(n) time complexity.
3. Sum rule: the time complexity of consecutive loops and program pieces equals the sum of their individual complexities. Smaller terms can be dropped, so the time complexity equals the largest term.
4. Multiply rule: the time complexity of nested loops equals the product of the loops' complexities.
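The sum and multiply rules can be sketched together in a small example of my own (not from the book): an O(n) loop followed by an O(n)·O(n) nested loop gives n + n^2 operations, and dropping the smaller term leaves O(n^2).

```java
// Sum rule + multiply rule demo: an O(n) loop followed by an
// O(n^2) nested loop. Total work is n + n^2 operations; the
// smaller term is dropped, so the overall complexity is O(n^2).
public class SumRule {
    static long countOperations(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++) {       // O(n)
            ops++;
        }
        for (int i = 0; i < n; i++) {       // O(n) * O(n) = O(n^2)
            for (int j = 0; j < n; j++) {
                ops++;
            }
        }
        return ops; // exactly n + n^2
    }

    public static void main(String[] args) {
        System.out.println(countOperations(10)); // 10 + 100 = 110
    }
}
```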

Now I think we should do some exercises to understand this better. What is the time complexity in each of the following examples?


// Example-1
int i = 0;
while( i < 1000 ) {
     i = i + 1;
}
// Result O(1)

// Example-2
for(int i = 0; i < n; i++) {
     i = i + 1; // i advances by 2 per pass, so about n/2 iterations
}
// Result O(n) (constant factors are dropped)

// Example-3
for(int i = 0; i < n; i++)      // O(n)
     for(int j = 0; j < n; j++) // O(n)
          count++;
// Result O(n) * O(n) = O(n^2)

// Example-4
for(int i = 0; i < n; i++)      // O(n)
     for(int j = i; j < n; j++) // O(n) in the worst case (i = 0)
          count++;
// Again, Result O(n^2)

// Final Example-5
int i = 0;
int j;
while( i < n) {
     j = 0;
     while(j < 3*n){
        j = j + 1;
     }
     j = 0;
     while(j < 2*n){
        j = j + 1;
     }
     i = i + 1;
}
// f(n) = n * (3n + 2n) = 5n^2
// O(f(n)) = O(n^2)
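To double-check that f(n) = 5n^2 above, we can count the inner-loop steps directly. This small verification sketch (the class name is my own, not part of the original) wraps Example-5 in a counter:

```java
// Counts the inner-loop increments of Example-5: each of the n outer
// iterations runs 3n + 2n = 5n inner steps, so the total is 5n^2.
public class Example5Count {
    static long count(int n) {
        long steps = 0;
        int i = 0;
        while (i < n) {
            int j = 0;
            while (j < 3 * n) { j = j + 1; steps++; } // 3n steps
            j = 0;
            while (j < 2 * n) { j = j + 1; steps++; } // 2n steps
            i = i + 1;
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(count(10)); // 5 * 10^2 = 500
    }
}
```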

I think we are done for now. We talked about Big O notation, time complexity, and the non-recursive case, but we said nothing about recursion. When it comes to the recursive approach, we must understand the “Master Theorem”. According to the Master Theorem, a recurrence relation is not solved directly; instead, it is evaluated asymptotically in O() notation. I think we should cover this approach in another article, because it is confusing and would probably give us a headache if we started it here.


The resources I used to create this story:

Teoriden Uygulamalara Algoritmalar — Prof. Dr. Vasif Nabiyev

Grokking Algorithms: An Illustrated Guide for Programmers and Other Curious People — Aditya Bhargava

I used the exercises from this video: https://www.youtube.com/watch?v=zUUkiEllHG0