Classes of Algorithms

Brute Force

Another name for brute force is exhaustive search. These algorithms consider every possible solution in the solution domain to find the optimal one. Depending on the type of problem, if you have n items to consider you will examine either n! candidates (permutations) or 2^n candidates (combinations). Brute force algorithms are simple to implement but computationally intensive to run. They are reasonable when n is small, but even for moderately large values of n they become too intensive to produce results in a reasonable time frame. One advantage of brute force algorithms is that they always give the optimal solution.

Traveling Salesman Problem: A salesman has to visit n cities. Going from any one city to another has a certain cost - think of the price of an airline or railway ticket, or the cost of gas. Map a route of least cost such that the salesman visits each city exactly once and returns to the city he started from. If each city is connected directly to every other city, there are n! orderings to consider (or (n-1)! distinct round trips once the starting city is fixed).
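A minimal brute force sketch in Python (the function name and cost-matrix representation are illustrative, not from the text): it fixes city 0 as the start and tries every permutation of the remaining cities.

```python
from itertools import permutations

def tsp_brute_force(cost):
    """cost is an n x n matrix of travel costs between cities.
    Fix city 0 as the start/end and try every ordering of the rest,
    keeping the cheapest round trip."""
    n = len(cost)
    best_cost, best_route = float("inf"), None
    for perm in permutations(range(1, n)):
        route = (0,) + perm + (0,)
        total = sum(cost[route[i]][route[i + 1]] for i in range(n))
        if total < best_cost:
            best_cost, best_route = total, route
    return best_cost, best_route
```

Because the loop runs over (n-1)! permutations, this is only practical for small n.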

Knapsack Problem: Given a set of items, each with a weight and a value, the problem is to fill a knapsack of limited weight capacity with the items of greatest total value. The brute force algorithm considers all 2^n combinations of items, keeps those whose total weight does not exceed the capacity of the knapsack, and returns the combination with the largest total value as the optimal solution.
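A hedged sketch of the brute force approach in Python (names and the (weight, value) tuple representation are assumptions for this example):

```python
from itertools import combinations

def knapsack_brute_force(items, capacity):
    """items is a list of (weight, value) pairs. Examine all 2^n
    subsets and keep the feasible one with the largest total value."""
    n = len(items)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            weight = sum(items[i][0] for i in subset)
            value = sum(items[i][1] for i in subset)
            if weight <= capacity and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset
```

The nested loops enumerate every subset, so the running time grows as 2^n.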

Greedy Algorithms

Greedy Algorithms are simple, straightforward, and short-sighted. They are easy to implement and sometimes produce results that we can live with. A greedy algorithm always takes the immediate gain without considering the long-term effect. Even though you get short-term gains with a greedy algorithm, it does not always produce the optimal solution.

Making Change: Suppose you have an unlimited supply of dollar coins (100 cents), quarters (25 cents), dimes (10 cents), nickels (5 cents), and pennies (1 cent). The problem is to give change to a customer using the smallest number of coins. The greedy algorithm always gives the largest denomination coin available without going over the amount still owed. It is considered "greedy" because at each step it looks for the largest denomination of coin to return. (For these particular denominations the greedy choice happens to be optimal; for other coin systems it is not.)
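The greedy change-maker can be sketched in a few lines of Python (the function name is illustrative; amounts are in cents):

```python
def make_change(amount, denominations=(100, 25, 10, 5, 1)):
    """Greedily hand out the largest coin that does not overshoot
    the amount still owed. amount is in cents."""
    coins = []
    for d in denominations:          # largest denomination first
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins
```

For example, change for $1.67 comes out as one dollar coin, two quarters, a dime, a nickel, and two pennies.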

Minimal Spanning Tree: Imagine a set V of towns that need to be connected by telephone cabling. The cost of laying cabling between towns varies. Let E be the set of costs of laying cabling between each pair of towns. The problem is to find the minimum cost of laying cabling so that all the towns are connected. Kruskal's algorithm, which solves this problem, is a greedy algorithm. Here are the steps in that algorithm:

1. Sort the possible connections in E from lowest cost to highest.
2. Starting with the cheapest, add each connection that joins two towns not already connected (directly or indirectly).
3. Skip any connection that would create a cycle.
4. Stop when all the towns are connected.

The beauty of Kruskal's algorithm is that it is not only greedy, and therefore easy to implement, but it also gives the optimal solution.
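A minimal sketch of Kruskal's algorithm in Python, assuming towns are numbered 0..n-1 and edges are (cost, town, town) tuples; the union-find helper with path halving is one common way to detect cycles cheaply:

```python
def kruskal(num_towns, edges):
    """edges is a list of (cost, u, v) tuples. Sort by cost, then add
    each edge that joins two previously unconnected components."""
    parent = list(range(num_towns))      # union-find forest

    def find(x):
        """Return the representative of x's component."""
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    total, chosen = 0, []
    for cost, u, v in sorted(edges):     # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                     # accepting this edge makes no cycle
            parent[ru] = rv
            total += cost
            chosen.append((u, v))
    return total, chosen
```

An MST on n towns always contains exactly n - 1 edges, so the loop can also stop early once that many have been chosen.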

Knapsack Problem: There is also a greedy solution to the knapsack problem. Sort the items into a list in order of decreasing value-to-weight ratio, then keep adding items from the sorted list as long as they fit within the weight limit. This is fast and simple, but unlike Kruskal's algorithm it does not always produce the optimal solution.
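A hedged sketch of the greedy knapsack in Python (same assumed (weight, value) representation as before); note the comment showing a case where it misses the optimum:

```python
def knapsack_greedy(items, capacity):
    """items is a list of (weight, value) pairs. Take items in
    decreasing value-to-weight ratio while they still fit.
    Not guaranteed optimal: with items [(1, 2), (2, 3)] and
    capacity 2, greedy takes the ratio-2 item for value 2,
    while the optimal choice is the single item of value 3."""
    total_value, total_weight = 0, 0
    for weight, value in sorted(items,
                                key=lambda it: it[1] / it[0],
                                reverse=True):
        if total_weight + weight <= capacity:
            total_weight += weight
            total_value += value
    return total_value
```

Greedy runs in O(n log n) for the sort, versus 2^n subsets for brute force, which is the usual trade of speed against guaranteed optimality.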

Divide and Conquer

Divide and conquer algorithms are extremely efficient because the problem space is decreased significantly with each iteration. A great example is binary search: after each unsuccessful comparison with the middle element of a sorted array, we cut the search space in half, so the algorithm converges extremely rapidly.
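A minimal iterative binary search in Python (the names are illustrative; the array is assumed to be sorted):

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1.
    Each failed comparison halves the remaining search space."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1          # discard the lower half
        else:
            hi = mid - 1          # discard the upper half
    return -1
```

Because the interval halves each pass, the loop runs at most about log2(n) + 1 times.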

There are some recursive algorithms that make good use of the divide and conquer technique. In fact, recursion is based on two key problem solving concepts - divide and conquer and self similarity. A recursive solution solves a problem by solving a smaller instance of the same problem. It solves this new problem by solving an even smaller instance of the same problem. Eventually, the new problem will be so small that its solution will either be obvious or known. And then we work backwards to solve the original problem.

A recursive definition consists of two parts: a recursive part that defines the solution with a smaller instance of the problem and a non-recursive boundary case or base case that defines a limiting condition. There are two prime examples of this process - merge sort and quick sort.

Merge Sort: With merge sort, we divide the array that we want to sort into two roughly equal halves. We recursively sort the two halves and then merge the two sorted halves. We stop the process of dividing in half when we reach one element.
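The recursive part and base case described above can be sketched in Python (names are illustrative):

```python
def merge_sort(arr):
    """Recursively split until one element (base case), then merge
    the two sorted halves back together."""
    if len(arr) <= 1:                 # base case: already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # recursive part: smaller instances
    right = merge_sort(arr[mid:])
    # merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])           # one of these is empty
    merged.extend(right[j:])
    return merged
```

Each level of recursion does O(n) merging work over O(log n) levels, giving the familiar O(n log n) running time.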

Dynamic Programming

Divide and conquer is a top-down approach to solving a problem: we start with the largest instance of the problem and continually decrease its size until we reach the base case. Dynamic programming works bottom-up: we start with the simplest case and work systematically up to the values we need, storing intermediate results along the way so nothing is recomputed.

Binomial Coefficients: To find the binomial coefficients of (a + b)^n we create Pascal's triangle, starting with 1, which corresponds to n = 0. We then work line by line until we reach the value of n that we are interested in. Say we want the binomial coefficients when n = 8.

n	coefficients
0	1
1	1 1
2	1 2 1
3	1 3 3 1
4	1 4 6 4 1
5	1 5 10 10 5 1
6	1 6 15 20 15 6 1
7	1 7 21 35 35 21 7 1
8	1 8 28 56 70 56 28 8 1
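The table above can be built bottom-up in Python (the function name is illustrative); each interior entry of a row is the sum of the two entries above it:

```python
def binomial_coefficients(n):
    """Build Pascal's triangle row by row and return row n.
    Each interior entry is the sum of the two entries above it."""
    row = [1]                                  # row for n = 0
    for _ in range(n):
        middle = [row[i] + row[i + 1] for i in range(len(row) - 1)]
        row = [1] + middle + [1]               # next row down
    return row
```

This is the dynamic programming pattern in miniature: only the previous row is kept, and each new value is computed from already-stored results rather than by recursion from the top.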