Recursion

A recursive method is a method that calls itself. An iterative method is a method that uses a loop to repeat an action. Any problem that can be solved iteratively can also be solved recursively, and vice versa. Iterative algorithms are generally more efficient than their recursive counterparts because they avoid the overhead of repeated method calls.

Recursion is based on two key problem-solving concepts: divide and conquer and self-similarity. A recursive solution solves a problem by solving a smaller instance of the same problem. It solves this new problem by solving an even smaller instance of the same problem. Eventually, the new problem will be so small that its solution is either obvious or already known. This solution leads back to the solution of the original problem.

A recursive definition consists of two parts: a recursive part, in which the nth value is defined in terms of the (n-1)th value, and a non-recursive boundary case, or base case, which defines a limiting condition. Infinite repetition will result if a recursive definition is not properly bounded. In a recursive algorithm, each recursive call must make progress toward the bound, or base case. A recursion parameter is a parameter whose value is used to control the progress of the recursion toward its bound.
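As a small illustration of a recursion parameter (this example is ours, not from the text, and the name `countdown` is hypothetical), the parameter n shrinks by one on every call, which guarantees progress toward the base case n == 0:

```python
def countdown(n):
    if n == 0:                     # base case: the bound
        return []
    return [n] + countdown(n - 1)  # n is the recursion parameter; each call shrinks it

print(countdown(3))  # [3, 2, 1]
```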

Function call and return in Python use a last-in-first-out protocol. As each method call is made, a representation of the call is placed on the method call stack. When a method returns, its frame is removed from the top of the stack.
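Python caps the depth of this call stack. As a concrete sketch (the function `runaway` is our own illustrative example), a recursion that never reaches a base case keeps pushing frames until the interpreter's recursion limit is hit and a RecursionError is raised:

```python
import sys

def runaway(n):
    # No base case: every call pushes another frame on the call stack.
    return runaway(n + 1)

try:
    runaway(0)
except RecursionError:
    print("call stack exhausted; Python's limit is about", sys.getrecursionlimit())
```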

Use an iterative algorithm instead of a recursive algorithm whenever efficiency and memory usage are important design factors. When all other factors are equal, choose the algorithm (recursive or iterative) that is easiest to understand, develop, and maintain.

Here is an example of a recursive method that calculates the factorial of n. The base case occurs when n is equal to 0: we know that 0! is equal to 1. Otherwise, we use the relationship n! = n * (n - 1)!.

  def fact(n):
      if n == 0:
          return 1
      else:
          return n * fact(n - 1)
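For comparison, here is an iterative version (a sketch of our own; the name `fact_iter` is not from the text). It uses a loop and constant stack space instead of one stack frame per call:

```python
def fact_iter(n):
    result = 1
    for i in range(2, n + 1):  # multiply 2 * 3 * ... * n
        result *= i
    return result

print(fact_iter(5))  # 120
```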

In mathematics there are recurrence relations that are defined recursively. A recurrence relation defines a term in a sequence as a function of one or more previous terms. One of the most famous such sequences is the Fibonacci series. Other than the first two terms, every term in this series is defined as the sum of the previous two terms:

  F(1) = 1
  F(2) = 1
  F(n) = F(n-1) + F(n-2) for n > 2

  1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...
Here is the Python code that generates this series:

  def fib(n):
      if n == 1 or n == 2:
          return 1
      else:
          return fib(n - 1) + fib(n - 2)
Even though the series is defined recursively, the above code is extremely inefficient at computing the terms of the Fibonacci series: each call spawns two more calls that recompute the same earlier terms over and over, so the number of calls grows exponentially with n. An iterative solution works best in this case.
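An iterative version (our own sketch; the name `fib_iter` is not from the text) computes F(n) in linear time by carrying only the last two terms forward, avoiding the exponential recomputation:

```python
def fib_iter(n):
    a, b = 1, 1               # F(1) and F(2)
    for _ in range(n - 2):    # advance the pair up to F(n)
        a, b = b, a + b
    return b if n >= 2 else a

print(fib_iter(10))  # 55
```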

However, there are sorting algorithms that use recursion and are extremely efficient at what they do. One example is MergeSort. Suppose you have a list of numbers to sort. The algorithm can be stated as follows: divide the list in half, sort one half, sort the other half, and then merge the two sorted halves. You keep dividing each half until you are down to one item; a single item is already sorted. You then merge that item with another single item and work backwards, merging sorted sub-lists until you have the complete sorted list.
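The steps above can be sketched in Python as follows (the function names `merge_sort` and `merge` are our own choices, not from the text):

```python
def merge_sort(items):
    if len(items) <= 1:              # base case: one item is sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])   # sort one half
    right = merge_sort(items[mid:])  # sort the other half
    return merge(left, right)        # merge the two sorted halves

def merge(left, right):
    merged = []
    i = j = 0
    # Repeatedly take the smaller front element of the two sorted lists.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])          # append whatever remains
    merged.extend(right[j:])
    return merged

print(merge_sort([13, 5, 8, 1, 21, 2]))  # [1, 2, 5, 8, 13, 21]
```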