Algorithms

An algorithm is a set of rules for carrying out some calculation. A problem (say, sorting a list of numbers) will have many instances; an instance, in this case, could be the list {23, 15, 67}. An algorithm must work correctly on every instance of the problem it claims to solve. There may be several algorithms that solve the same problem. How do we choose the best one?

The empirical approach is to program the competing algorithms and try them on different instances with the help of a computer. The theoretical approach is to determine mathematically the quantity of resources needed by each algorithm as a function of the size of the instances considered. The resources are computing time and storage space, with computing time being the more critical of the two.

The size of an instance is any integer that in some way measures the number of components in the instance, for example the number of elements in the list to be sorted.

The time taken by an algorithm can vary considerably between two different instances of the same size. For example, sorting an already sorted list can take a very different amount of time than sorting a list of the same length that is in reverse order.

We usually consider the worst case, i.e. for a given instance size we consider those instances that require the most time. The average behavior of an algorithm is much harder to analyze.

We analyze an algorithm in terms of the number of elementary operations that are involved. An elementary operation is one whose execution time is bounded by a constant for a particular machine and programming language. Thus within a multiplicative constant it is the number of elementary operations executed that matters in the analysis and not the exact time.

Since the exact time for an elementary operation is unimportant, we say that an elementary operation can be executed at unit cost. We use the "Big O" notation for the execution time of algorithms; it gives the asymptotic execution time of an algorithm.
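
For example (the numbers here are purely illustrative), if an algorithm performs 3n + 5 elementary operations on an instance of size n, the constants 3 and 5 depend only on the machine and language, and we simply say that its running time is O(n).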

Algorithms can be classified using the "Big O" notation.

Constant Time Operations
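
An operation whose cost does not grow with the size of the input runs in constant time. A minimal sketch (the array a here is assumed purely for illustration):
  first = a[0]        # index into an array: one elementary operation
  total = 2 + 3       # add two numbers: one elementary operation
The number of operations does not depend on the length of the array. Hence the run time is O(1).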

Linear Operations

Sum the values in an array
  sum = 0
  for item in a:
    sum = sum + item
The number of additions depends on the length of the array. Hence the run time is O(N).

Quadratic Operations

Selection Sort - in this algorithm we start at the first position of the array, go through the array to find the minimum element, and swap that minimum with the element in the first place. We then start at the second position, go through the remaining portion of the array to find its minimum, and swap that minimum with the element at the second position. We repeat the procedure from the third position, and so on, until we have reached the end of the array.
def selectionSort(a):
  for i in range(len(a) - 1):
    # Find the minimum element in the unsorted portion a[i:]
    min = a[i]
    minIdx = i

    for j in range(i + 1, len(a)):
      if a[j] < min:
        min = a[j]
        minIdx = j

    # Swap the minimum element with the element at the ith place
    a[minIdx] = a[i]
    a[i] = min
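
The outer loop runs N - 1 times for an array of length N, and on pass i the inner loop makes one comparison for each remaining element, so the total number of comparisons is (N - 1) + (N - 2) + ... + 1. This sum equals N(N - 1)/2 (see the summation formulae below), hence the run time is O(N^2).

A quick usage sketch (the values here are chosen purely for illustration):
  nums = [23, 15, 67, 4]
  selectionSort(nums)
  # nums is now [4, 15, 23, 67]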

Summation Formulae