# Multilayer Perceptron (MLP)

### A multilayer perceptron demo that classifies points as inside or outside a 2D shape (a star). There is no training in this example: the network, which has two hidden layers, was hand-crafted to classify this shape.


Click on the screen to add points. The neural network will automatically classify any point inside the star as blue and any point outside as red. You can click "Show Hidden Layer" to view the classification lines for the five perceptrons of the first hidden layer. Below is a description of how this network is set up.

Above: a diagram of the network architecture.

#### Network Architecture

This network consists of an input layer, two hidden layers, and a single output. The inputs are a point's X and Y coordinates, each normalized to the range (-1, +1). The output is binary: +1 if the point lies inside the shape, -1 otherwise.

The first hidden layer, consisting of perceptrons p0-p4 in the image above, has the following weights:

Perceptron | Bias Weight | X Input Weight | Y Input Weight |
---|---|---|---|
p0 | -0.375 | -3 | 1 |
p1 | -0.125 | 0 | 1 |
p2 | -0.375 | 3 | 1 |
p3 | 0.125 | -0.75 | 1 |
p4 | 0.125 | 0.75 | 1 |

The second hidden layer only has one perceptron, p5, with the following weights:

Perceptron | Bias | p0 | p1 | p2 | p3 | p4 |
---|---|---|---|---|---|---|
p5 | -2 | -1 | -1 | -1 | 1 | 1 |

Each neuron in the network uses a sigmoid activation function, but its output is then scaled to a binary value: -1 if the sigmoid output is less than 0.5, and +1 if it is 0.5 or greater.
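As a minimal sketch, the thresholded activation described above can be written in Python (the function names `sigmoid` and `binarize` are illustrative, not from the demo's source):

```python
import math

def sigmoid(z):
    """Standard logistic function, mapping any real z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def binarize(a):
    """Scale a sigmoid output to a binary value: -1 below 0.5, +1 at or above."""
    return 1 if a >= 0.5 else -1

# Since sigmoid(z) >= 0.5 exactly when z >= 0, the composed neuron output
# is simply the sign of the weighted sum (with 0 mapped to +1).
```

Note that because the sigmoid is monotonic and crosses 0.5 at z = 0, each neuron effectively behaves like a classic sign-threshold perceptron.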

The first hidden layer is responsible for classifying different segments of the star. You can visualize this by checking "Show Hidden Layer" above. Each colored line is the decision boundary of one first-layer perceptron. Since every Y Input Weight is 1, the boundary b + w_x·x + y = 0 rearranges to y = -w_x·x - b: the negated X Input Weight gives the line's slope, and the negated Bias Weight gives its y-intercept.
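To make the boundary-line relationship concrete, here is a small sketch (the helper `boundary_line` is hypothetical, purely for illustration) that converts a first-layer perceptron's weights into slope-intercept form, assuming the Y Input Weight is 1 as in the table above:

```python
def boundary_line(bias, wx):
    """Decision boundary of a perceptron with weights [bias, wx, 1]:
    bias + wx*x + y = 0, which rearranges to y = -wx * x - bias."""
    slope = -wx
    intercept = -bias
    return slope, intercept

# p0 has bias -0.375 and X weight -3, so its line is y = 3x + 0.375.
```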

The second-hidden-layer perceptron combines the outputs of the first hidden layer. The idea is that for any point inside the star, at least four of the five first-layer perceptrons must agree that it is on the "inside". For example, p0 classifies inside as -1, since most of the star's shape lies to the right of the p0 line. It is also the only perceptron that does not count the upper-left "wing" of the star as being inside the shape; that is, it places every point inside the star in the same class except those on the upper-left "wing". Accounting for this pattern, the second-layer perceptron sums the weighted values, and if the sum is positive, the point is inside the shape. Otherwise, it is outside.
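Putting the two layers together, the whole hand-crafted network can be sketched in a few lines of Python using the weights from the tables above (the names `HIDDEN1`, `HIDDEN2`, `neuron`, and `classify` are illustrative, not from the demo's source):

```python
import math

# Weights from the tables above: [bias, w_x, w_y] per first-layer perceptron.
HIDDEN1 = [
    [-0.375, -3.0,  1.0],   # p0
    [-0.125,  0.0,  1.0],   # p1
    [-0.375,  3.0,  1.0],   # p2
    [ 0.125, -0.75, 1.0],   # p3
    [ 0.125,  0.75, 1.0],   # p4
]
HIDDEN2 = [-2.0, -1.0, -1.0, -1.0, 1.0, 1.0]  # p5: [bias, w_p0 .. w_p4]

def neuron(weights, inputs):
    """Weighted sum -> sigmoid -> scaled to -1/+1 at the 0.5 threshold."""
    z = weights[0] + sum(w * x for w, x in zip(weights[1:], inputs))
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else -1

def classify(x, y):
    """+1 if (x, y) is inside the star, -1 otherwise."""
    layer1 = [neuron(w, (x, y)) for w in HIDDEN1]
    return neuron(HIDDEN2, layer1)

# classify(0.0, 0.0) -> 1  (the center is inside the star)
# classify(1.0, 1.0) -> -1 (the corner of the canvas is outside)
```

At the center (0, 0), the five first-layer outputs are (-1, -1, -1, +1, +1), so p5's weighted sum is -2 + 1 + 1 + 1 + 1 + 1 = 3 > 0, giving +1 (inside), matching the "four out of five must agree" intuition.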

The Multilayer Perceptron source code is available under the MIT License and can be downloaded here.