Asymptotic Notations, Big Oh, Omega and Theta Notation

Asymptotic analysis of an algorithm refers to defining the mathematical bounds of its run-time performance. Using asymptotic analysis, we can conclude the best-case, average-case, and worst-case running time of an algorithm.

Asymptotic analysis is input-bound, i.e., if there is no input to the algorithm, it is concluded to work in constant time. All factors other than the input are considered constant.

Asymptotic analysis refers to computing the running time of any operation in mathematical units of computation.

For example, the running time of one operation may be computed as f(n) = n, while for another operation it is computed as g(n) = n².

This means the running time of the first operation will increase linearly as n increases, while the running time of the second operation will increase quadratically.

Similarly, the running times of both operations will be nearly the same when n is small.

The time required by an algorithm falls under three types:

  • Best Case − Minimum time required for program execution.
  • Average Case − Average time required for program execution.
  • Worst Case − Maximum time required for program execution.
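As an illustration (a minimal Python sketch, not from the original text), linear search shows all three cases: one comparison in the best case (target at the front), about n/2 on average, and n comparisons in the worst case (target absent):

```python
def linear_search(items, target):
    """Return (index, comparisons) for target in items, or (-1, comparisons)."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0]

# Best case: target is the first element -> 1 comparison.
print(linear_search(data, 5))   # (0, 1)

# Worst case: target is absent -> n = 10 comparisons.
print(linear_search(data, 42))  # (-1, 10)
```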

Asymptotic Notations

Following are the commonly used asymptotic notations to calculate the running time complexity of an algorithm.

  • Big Oh(Ο) Notation
  • Omega(Ω) Notation
  • Theta(θ) Notation

Big Oh Notation, Ο

'O' is the symbol used for Big-O notation. Big-O is the method used to express the upper bound of the running time of an algorithm.

It is used to describe the performance or time complexity of the algorithm. Big-O specifically describes the worst-case scenario and can be used to describe the execution time required or the space used by the algorithm.

The table below gives the names and examples of the common orders used to describe functions. These orders are ranked from slowest-growing at the top to fastest-growing at the bottom.

Time Complexity           | Example
--------------------------|-------------------------------------------
O(1) - Constant           | Adding to the front of a linked list
O(log n) - Logarithmic    | Finding an entry in a sorted array
O(n) - Linear             | Finding an entry in an unsorted array
O(n log n) - Linearithmic | Sorting n items by divide-and-conquer
O(n²) - Quadratic         | Shortest path between two nodes in a graph
O(n³) - Cubic             | Solving simultaneous linear equations
O(2ⁿ) - Exponential       | The Towers of Hanoi problem
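To get a feel for these orders, the following Python sketch (illustrative, not part of the original table) evaluates each growth function at a few input sizes:

```python
import math

# Common growth functions, in the same order as the table above.
growth = [
    ("O(1)",       lambda n: 1),
    ("O(log n)",   lambda n: math.log2(n)),
    ("O(n)",       lambda n: n),
    ("O(n log n)", lambda n: n * math.log2(n)),
    ("O(n^2)",     lambda n: n ** 2),
    ("O(n^3)",     lambda n: n ** 3),
    ("O(2^n)",     lambda n: 2 ** n),
]

for name, f in growth:
    # Evaluate at n = 4, 8, 16 to show how quickly each order grows.
    print(f"{name:10s}", [round(f(n)) for n in (4, 8, 16)])
```

Even at n = 16 the exponential order already reaches 65,536, while the logarithmic order is still 4.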

Big-O notation is generally used to express an ordering property among the functions.

This notation helps in calculating the maximum amount of time taken by an algorithm to compute a problem. Big-O is defined as:

f(n) ≤ c ∗ g(n) for all n ≥ n0

where n is the input size and f(n) and g(n) are two non-negative functions.

The bound holds only if there exist a positive constant c and a non-negative integer n0 such that the inequality is true for all n ≥ n0.

This is denoted f(n) = O(g(n)), meaning f(n) grows no faster than g(n), up to the constant factor c.

The graphical representation of f(n) = O(g(n)) is shown in the figure below: beyond n0, c ∗ g(n) always lies at or above f(n).

Big-Oh Notation

Let us take an example to understand the Big-O notation more clearly.

For Example:

Consider the functions f(n) = 2n + 2 and g(n) = n².

We need to find the constant c such that f(n) ≤ c ∗ g(n).

Let n = 1,

then f(n) = 2n + 2 = 2(1) + 2 = 4

g(n) = n² = 1² = 1

Here, f(n) > g(n)

Let n = 2,

then f(n) = 2n + 2 = 2(2) + 2 = 6

g(n) = n² = 2² = 4

Here, f(n) > g(n)

Let n = 3,

then f(n) = 2n + 2 = 2(3) + 2 = 8

g(n) = n² = 3² = 9

Here, f(n) < g(n)

Thus, for all n ≥ 3 we get f(n) < g(n); the definition is satisfied with c = 1 and n0 = 3. In other words, as n becomes larger, g(n) dominates f(n).

This concludes that Big-O helps to determine the "upper bound" of the algorithm's run-time: here f(n) = O(n²).
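The worked example above can be checked mechanically. The sketch below (the helper name `holds_big_o` is my own, not a standard function) tests whether f(n) ≤ c ∗ g(n) holds for every integer n from n0 up to a finite limit:

```python
def holds_big_o(f, g, c, n0, limit=1000):
    """Check f(n) <= c * g(n) for all integers n0 <= n <= limit."""
    return all(f(n) <= c * g(n) for n in range(n0, limit + 1))

f = lambda n: 2 * n + 2   # f(n) = 2n + 2
g = lambda n: n ** 2      # g(n) = n^2

print(holds_big_o(f, g, c=1, n0=3))  # True: the bound holds with c = 1, n0 = 3
print(holds_big_o(f, g, c=1, n0=1))  # False: fails at n = 1 and n = 2
```

A finite check like this cannot prove the bound for all n, but it matches the hand calculation at n = 1, 2, 3.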


Omega Notation, Ω

'Ω' is the symbol used for Omega notation. Omega describes how an algorithm performs in the best case, i.e., its best-case time complexity.

This notation provides the minimum amount of time taken by an algorithm to compute a problem.

Thus, it is considered that Omega gives the "lower bound" of the algorithm's run-time. Omega is defined as:

f(n) ≥ c ∗ g(n) for all n ≥ n0

where n is the input size and f(n) and g(n) are two non-negative functions.

The bound holds only if there exist a positive constant c and a non-negative integer n0 such that the inequality is true for all n ≥ n0.

Omega Notation

For Example:

Consider the functions f(n) = 2n² + 5 and g(n) = 7n.

We need to find the constant c such that f(n) ≥ c ∗ g(n).

Let n = 0,

then, f(n) = 2n² + 5 = 2(0)² + 5 = 5

g(n) = 7n = 7(0) = 0

Here, f(n) > g(n)

Let n = 1,

then, f(n) = 2n² + 5 = 2(1)² + 5 = 7

g(n) = 7n = 7(1) = 7

Here, f(n) = g(n)

Let n = 2,

then, f(n) = 2n² + 5 = 2(2)² + 5 = 13

g(n) = 7n = 7(2) = 14

Here, f(n) < g(n)

Thus, although the inequality fails at n = 2, for all n ≥ 3 we get f(n) ≥ c ∗ g(n) with c = 1, so n0 = 3.

This concludes that Omega helps to determine the "lower bound" of the algorithm's run-time.
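The same finite-check idea works for the lower bound (again, `holds_omega` is an illustrative helper of my own, not a library function):

```python
def holds_omega(f, g, c, n0, limit=1000):
    """Check f(n) >= c * g(n) for all integers n0 <= n <= limit."""
    return all(f(n) >= c * g(n) for n in range(n0, limit + 1))

f = lambda n: 2 * n ** 2 + 5   # f(n) = 2n^2 + 5
g = lambda n: 7 * n            # g(n) = 7n

print(holds_omega(f, g, c=1, n0=3))  # True: the bound holds from n0 = 3 onward
print(holds_omega(f, g, c=1, n0=2))  # False: 13 < 14 at n = 2
```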


Theta Notation, θ

'θ' is the symbol used for Theta notation. Theta notation is used when the upper bound and lower bound of an algorithm's running time are of the same order of magnitude. Theta is defined as:

c1 ∗ g(n) ≤ f(n) ≤ c2 ∗ g(n) for all n ≥ n0

where n is the input size and f(n) and g(n) are two non-negative functions.

The bounds hold only if there exist positive constants c1 and c2 and a non-negative integer n0 such that both inequalities are true for all n ≥ n0.

Theta Notation

Let us take an example to understand the Theta notation more clearly.

For Example:

Consider the function f(n) = 4n + 3 with g(n) = n, taking c1 = 4 and c2 = 5, so that c1 ∗ g(n) = 4n and c2 ∗ g(n) = 5n.

Let n = 3,

c1 ∗ g(n) = 4n = 4(3) = 12

f(n) = 4n + 3 = 4(3) + 3 = 15

c2 ∗ g(n) = 5n = 5(3) = 15

so 12 ≤ 15 ≤ 15 at n = 3.

Here, c1 is 4, c2 is 5, and n0 is 3; both inequalities continue to hold for larger n, since 4n ≤ 4n + 3 ≤ 5n whenever n ≥ 3.

Thus, from the above we get c1 ∗ g(n) ≤ f(n) ≤ c2 ∗ g(n), i.e., f(n) = θ(n). This concludes that Theta notation depicts a running time sandwiched between an upper bound and a lower bound of the same order.
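Finally, the two-sided Theta bound can be checked the same way (the helper `holds_theta` is an illustrative sketch, assuming g(n) = n with c1 = 4 and c2 = 5):

```python
def holds_theta(f, g, c1, c2, n0, limit=1000):
    """Check c1*g(n) <= f(n) <= c2*g(n) for all integers n0 <= n <= limit."""
    return all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, limit + 1))

f = lambda n: 4 * n + 3   # f(n) = 4n + 3
g = lambda n: n           # g(n) = n

print(holds_theta(f, g, c1=4, c2=5, n0=3))  # True: f(n) = Theta(n)
print(holds_theta(f, g, c1=4, c2=5, n0=2))  # False: 4n + 3 > 5n at n = 2
```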