A perceptron is a simple binary classification algorithm modeled on the biological neuron, making it a very simple learning machine. Its output is determined by the weighting of the inputs and by a threshold.
Perceptrons are used in machine learning as well as in artificial intelligence (AI) applications. If you don’t know the difference between AI, neural networks, and machine learning, you should read our article on the subject.
What does the learning process look like?
A set of input signals is combined into a single binary output decision, i.e. a zero or a one.
By training with certain input patterns, similar patterns can thus be found in a data set to be analyzed.
The following figure shows this learning process schematically.
If the weighted sum of all inputs exceeds (or falls below) a set threshold, the state of the neuron’s output changes.
If a perceptron is now trained with given data patterns, the weighting of the inputs changes.
By adjusting these weights, the perceptron gains the ability to learn and to solve classification problems.
However, a basic requirement for valid results is that the data must be linearly separable.
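The learning rule described above can be sketched in a few lines of Python. The learning rate, epoch count, and the logical AND data set below are illustrative choices for a linearly separable problem, not part of any fixed specification:

```python
# A minimal sketch of the perceptron learning rule on the (linearly
# separable) logical AND problem. Learning rate and epoch count are
# illustrative values.

def train_perceptron(samples, epochs=10, lr=0.1):
    """Learn two weights and a bias for binary inputs and targets."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: fire (1) if the weighted sum exceeds the threshold 0.
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            # Shift each weight in the direction that reduces the error.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Logical AND: only the input (1, 1) maps to 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

Because AND is linearly separable, the weights settle on a separating line after a few epochs; on a non-separable problem such as XOR the same loop would never converge.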
What are Multilayer Perceptrons (MLP)?
A multilayer perceptron corresponds to what is commonly called a neural network: perceptrons form the neuronal building blocks, which are interconnected in several layers.
The figure below shows a simple three-layer MLP. Each connecting line represents the output of one neuron being passed to a neuron of the next layer.
Neurons of the same layer, however, have no connections to each other.
The perceptron applies a different weight to each signal, and the outputs of one layer together form the input vector for the neurons of the next layer.
The range of possible classifications increases with the number of layers.
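This layer-by-layer flow can be sketched in plain Python. The weight matrices, biases, and the sigmoid activation below are illustrative choices, not values taken from the figure:

```python
import math

# A minimal sketch of a forward pass through a small MLP: the outputs
# of one layer form the input vector of every neuron in the next layer.
# All weights and biases are arbitrary illustrative values, not trained ones.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    """One fully connected layer: weights[j] holds neuron j's input weights."""
    return [
        sigmoid(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
        for neuron_w, b in zip(weights, biases)
    ]

# 2 inputs -> 3 hidden neurons -> 1 output neuron.
hidden_w = [[0.5, -0.3], [0.8, 0.2], [-0.6, 0.9]]
hidden_b = [0.1, -0.2, 0.05]
output_w = [[1.0, -1.5, 0.7]]
output_b = [0.3]

hidden = layer_forward([0.4, 0.7], hidden_w, hidden_b)
output = layer_forward(hidden, output_w, output_b)
print(output)  # a single value between 0 and 1
```

Stacking further calls to `layer_forward` adds more layers, which is exactly what widens the range of classifications an MLP can express.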
Recurrent Neural Networks vs Feed-Forward Networks
Fundamentally, neural networks are divided into recurrent and feed-forward architectures.
Recurrent Neural Networks
In a recurrent neural network, neurons are connected to neurons of the same layer or of a preceding layer.
Three types of feedback are distinguished. With direct feedback, a neuron’s own output is used as an additional input. With indirect feedback, by contrast, the output of a neuron is fed back to a neuron of a preceding layer.
With the third feedback principle, lateral feedback, the output of a neuron is connected to another neuron of the same layer.
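Direct feedback, the simplest of the three, can be sketched as a single neuron that receives its own previous output as an extra input at each time step. The weights and bias below are illustrative values:

```python
import math

# A minimal sketch of direct feedback: one neuron feeds its own
# previous output back into its weighted sum at every time step.
# w_in, w_feedback, and bias are illustrative values.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def run_recurrent_neuron(inputs, w_in=0.8, w_feedback=0.5, bias=-0.1):
    prev_output = 0.0  # no feedback signal exists before the first step
    outputs = []
    for x in inputs:
        # The neuron's own last output enters the weighted sum.
        prev_output = sigmoid(w_in * x + w_feedback * prev_output + bias)
        outputs.append(prev_output)
    return outputs

outs = run_recurrent_neuron([1.0, 1.0, 1.0])
print(outs)  # outputs rise step by step as the feedback accumulates
```

The feedback loop gives the neuron a simple memory: the same input of 1.0 produces a slightly larger output at each step, because the previous activation is added into the sum.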
In feed-forward networks, on the other hand, outputs are connected only to the inputs of a subsequent layer. Such a network can be fully connected, in which case every neuron of a layer is connected to all neurons of the directly following layer.
Alternatively, short-cuts are formed: some neurons are then not connected to all neurons of the next layer.
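The contrast can be sketched as follows; here the short-cut variant is read as a skip connection, where the raw inputs additionally bypass the hidden layer and reach the output neuron directly. All weights and biases are illustrative values:

```python
# A minimal sketch contrasting a fully connected feed-forward pass with
# a short-cut (skip) connection: the second output neuron sees the hidden
# activations AND the raw inputs directly. All weights are illustrative.

def linear(inputs, weights, bias):
    return sum(w * x for w, x in zip(weights, inputs)) + bias

inputs = [0.5, -0.2]

# Fully connected: the hidden layer alone feeds the output neuron.
hidden = [max(0.0, linear(inputs, w, b))  # ReLU hidden units
          for w, b in [([0.4, 0.9], 0.0), ([-0.7, 0.3], 0.1)]]
plain_out = linear(hidden, [1.2, -0.5], 0.05)

# Short-cut: the output additionally receives the untouched inputs.
shortcut_out = linear(hidden + inputs, [1.2, -0.5, 0.3, -0.8], 0.05)

print(plain_out, shortcut_out)
```

In both variants the signal only ever flows forward; the short-cut merely lets some of it skip a layer rather than loop back, which is what separates feed-forward networks from the recurrent ones above.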