– LSMs (Liquid State Machines) take the temporal aspect of the input into account
Concept
Reservoir / Liquid
– large accumulation of recurrent, interacting nodes
→ is stimulated by the input layer
– the liquid itself is not trained, but randomly constructed with the help of heuristics
– loops cause a short-term memory effect
– preferably a Spiking Neural Network (SNN)
→ SNNs are closer to biological neural networks than the multilayer perceptron
→ the liquid can be any type of network that has sufficient internal dynamics
Running State
→ will be extracted by the readout function
– depends on the input streams it has been presented with
Readout Function
– converts the high-dimensional state into the output
– since the readout function is separated from the liquid, several readout functions can be used with the same liquid
→ so different tasks can be performed with the same input (see the sketch below)
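To make the separation between liquid and readout concrete, here is a minimal Python sketch. It is a simplification under stated assumptions: instead of a spiking network it uses a rate-based tanh reservoir (which the notes above allow, since any network with sufficient internal dynamics can serve as the liquid), and all sizes, scalings, and the toy task are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_reservoir = 1, 100

# The liquid is randomly constructed and never trained
W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_inputs))
W = rng.normal(size=(n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # damp the loops for stable short-term memory

def run_liquid(inputs):
    """Collect the running state of the liquid for every step of an input stream."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        # The recurrent term W @ x is the loop that carries the short-term memory
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: recover the input from 5 steps ago out of the running state
u = rng.uniform(-1, 1, 500)
states, targets = run_liquid(u)[5:], u[:-5]

# Only the linear readout is trained (here by least squares); the liquid stays fixed
W_out, *_ = np.linalg.lstsq(states, targets, rcond=None)
print("training MSE:", np.mean((states @ W_out - targets) ** 2))
```

Because only W_out is fitted, a second readout for a different task could be trained from exactly the same states.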
AutoEncoder
In data science, we often encounter multidimensional data relationships. Understanding and representing these is often not straightforward. But how do you effectively reduce the dimensionality without reducing the information content?
Unsupervised dimension reduction
One possibility is offered by unsupervised machine learning algorithms, which aim to encode high-dimensional data as compactly as possible in a low-dimensional representation. If you don’t know the difference between unsupervised, supervised and reinforcement learning, check out this article we wrote on the topic.
What is an AutoEncoder?
The AutoEncoder is an artificial neural network that reduces data dimensionality in an unsupervised manner. The network usually consists of three or more layers. The gradients are usually computed with the backpropagation algorithm. The network thus corresponds to a feedforward network whose layers are fully connected to one another.
Types
There are many types of AutoEncoders. The following table lists the most common variations.
However, the basic structure is the same for all variations.
Basic Structure
Each AutoEncoder is characterized by an encoding and a decoding side, which are connected by a bottleneck, a much smaller hidden layer.
The following figure shows the basic network structure.
During encoding, the dimensionality of the input information is reduced: only the essential information is passed on, and the data is compressed in this way. In the decoding part, the compressed information is used to reconstruct the original data. For this purpose, the weights are adjusted via backpropagation. In the output layer, each neuron then has the same meaning as the corresponding neuron in the input layer.
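To make the encoder-bottleneck-decoder structure concrete, here is a minimal sketch in Keras. The layer sizes, the flattened-image input, and the loss function are illustrative assumptions, not part of the description above.

```python
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 784      # e.g. flattened 28x28 images (assumption)
bottleneck_dim = 32  # the much smaller hidden layer

# Encoding side: the dimensionality of the input is reduced step by step
inputs = keras.Input(shape=(input_dim,))
encoded = layers.Dense(128, activation="relu")(inputs)
encoded = layers.Dense(bottleneck_dim, activation="relu")(encoded)

# Decoding side: the compressed information reconstructs the original data
decoded = layers.Dense(128, activation="relu")(encoded)
decoded = layers.Dense(input_dim, activation="sigmoid")(decoded)

autoencoder = keras.Model(inputs, decoded)
# Backpropagation adjusts the weights to minimize the reconstruction error
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)
```

Note that the model is trained with the input as its own target, which is exactly what makes the procedure unsupervised.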
AutoEncoder vs Restricted Boltzmann Machine (RBM)
Restricted Boltzmann Machines are based on a similar idea. They are undirected graphical models useful for dimensionality reduction, classification, regression, collaborative filtering, and feature learning. However, they take a stochastic approach: stochastic units with a particular distribution are used instead of deterministic units.
RBMs are designed to find the connections between visible and hidden random variables. How does the training work? During the forward pass, the hidden biases help generate the activations; during the backward pass, the visible-layer biases help generate the reconstruction, from which the network learns.
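The sketch below shows what one such training step could look like for a binary RBM, using the common contrastive-divergence (CD-1) approximation; the shapes, learning rate, and toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.01):
    """One contrastive-divergence (CD-1) update for a binary RBM."""
    # Forward pass: the hidden biases help generate the activations
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # stochastic units

    # Backward pass: the visible biases help generate the reconstruction
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)

    # Move towards the data statistics and away from the reconstruction statistics
    W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
    b_vis += lr * (v0 - p_v1)
    b_hid += lr * (p_h0 - p_h1)
    return np.mean((v0 - p_v1) ** 2)  # reconstruction error

# Toy usage: 6 visible and 3 hidden units on random binary data (assumption)
n_vis, n_hid = 6, 3
W = 0.01 * rng.normal(size=(n_vis, n_hid))
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
for v in (rng.random((100, n_vis)) < 0.5).astype(float):
    err = cd1_step(v, W, b_vis, b_hid)
```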
Pretraining
Since the random initialization of weights in neural networks is not always optimal at the start of training, it makes sense to pre-train. The goal of this training is to minimize the reconstruction error in order to find the most efficient compact representation of the input data.
The method was developed by Geoffrey Hinton and is used primarily for training deep autoencoders. Pairs of neighboring layers are treated as a Restricted Boltzmann Machine, which yields a good approximation of the final weights; fine-tuning is then done with backpropagation.
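A minimal sketch of this greedy layer-wise procedure, reusing the cd1_step update and sigmoid from the previous sketch; the layer sizes and toy binary data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_rbm(data, n_hid, epochs=5):
    """Pre-train one pair of neighboring layers as an RBM via cd1_step."""
    n_vis = data.shape[1]
    W = 0.01 * rng.normal(size=(n_vis, n_hid))
    b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
    for _ in range(epochs):
        for v in data:
            cd1_step(v, W, b_vis, b_hid)
    return W, b_hid

# Stack the RBMs: the hidden activations of one become the "data" of the next
data = (rng.random((200, 64)) < 0.5).astype(float)  # toy binary inputs (assumption)
pretrained = []
for n_hid in (32, 16, 8):  # illustrative encoder layer sizes
    W, b_hid = train_rbm(data, n_hid)
    pretrained.append(W)
    data = sigmoid(data @ W + b_hid)
# 'pretrained' initializes the encoder; fine-tuning then uses backpropagation
```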
Perceptron
A perceptron is a simple binary classification algorithm modeled after the biological neuron and is thus a very simple learning machine. Its output is determined by the weighting of the inputs and by a threshold. Perceptrons are used for machine learning as well as for artificial intelligence (AI) applications. If you don’t know the difference between AI, neural networks and machine learning, you should read our article on the subject.
What does the learning process look like?
A set of input signals is mapped to a binary output decision, i.e. zero or one. By training with certain input patterns, similar patterns can then be found in a data set to be analyzed. The following figure shows this learning process schematically.
If the weighted sum of all inputs exceeds or falls below a set threshold, the state of the neuron output changes. If a perceptron is now trained with given data patterns, the weighting of the inputs changes. The perceptron thus gains the ability to learn and to solve complex problems by adjusting its weights.
However, a basic requirement for obtaining valid results is that the data must be linearly separable.
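A minimal sketch of this learning process, using the classic perceptron learning rule on a linearly separable toy problem (logical AND); the learning rate and epoch count are arbitrary choices.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Classic perceptron learning rule on binary targets (0 or 1)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            # Threshold the weighted inputs: the output flips when it is crossed
            y_hat = 1 if w @ x_i + b > 0 else 0
            # Adjust the weights only when the prediction is wrong
            w += lr * (y_i - y_hat) * x_i
            b += lr * (y_i - y_hat)
    return w, b

# Logical AND is linearly separable, so the perceptron converges on it
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([(1 if w @ x + b > 0 else 0) for x in X])  # -> [0, 0, 0, 1]
```

On data that is not linearly separable (e.g. XOR), this loop would never converge, which is exactly the limitation noted above.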
What are Multilayer Perceptrons (MLPs)?
A multilayer perceptron corresponds to what is known as a neural network. Perceptrons thus form the neuronal basis and are interconnected in different layers.
The figure below shows a simple three-layer MLP. Each line here represents a weighted connection.
However, neurons of the same layer have no connections to each other. Each connection carries its own weight, and the outputs of one layer's neurons form the input vector of the neurons in the next layer. The diversity of classification possibilities increases with the number of layers.
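A minimal sketch of such a layered forward pass; the layer sizes and the tanh activation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A simple three-layer MLP: input, hidden and output layer (sizes are assumptions)
sizes = [4, 5, 3]

# One weight matrix per pair of consecutive layers; there are no
# connections between neurons of the same layer
weights = [rng.normal(0, 0.5, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """The outputs of one layer form the input vector of the next layer."""
    for W, b in zip(weights, biases):
        x = np.tanh(x @ W + b)
    return x

print(forward(rng.normal(size=4)))  # three output values
```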
Recurrent Neural Networks vs Feed-Forward Networks
Basically, neural networks are distinguished according to the recurrent and the feed-forward principle.
Recurrent Neural Networks
In a recurrent neural network, neurons are also connected to neurons of the same layer or of a preceding layer. A basic distinction is made between three types of feedback. With direct feedback, a neuron's own output is used as a further input. With indirect feedback, on the other hand, the output of a neuron is connected to a neuron of a preceding layer. With the last feedback principle, lateral feedback, the output of a neuron is connected to another neuron of the same layer.
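A minimal sketch of a recurrent layer in which two of the three feedback types can be read directly off the recurrent weight matrix; the sizes and distributions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 2, 4

W_in = rng.normal(0, 0.5, size=(n_hid, n_in))
# Diagonal entries of W_rec feed a neuron's own output back to itself
# (direct feedback); off-diagonal entries connect it to other neurons of
# the same layer (lateral feedback). Indirect feedback would instead route
# an output back to a neuron of a preceding layer.
W_rec = rng.normal(0, 0.5, size=(n_hid, n_hid))

def step(x, h_prev):
    """One recurrent update: the previous state re-enters as input."""
    return np.tanh(W_in @ x + W_rec @ h_prev)

h = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):  # a short input sequence
    h = step(x, h)
print(h)
```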
Feed-Forward Networks
In feed-forward networks, on the other hand, the outputs are connected only to the inputs of a subsequent layer. Such networks can be fully connected, in which case the neurons of one layer are connected to all neurons of the directly following layer. Alternatively, short-cut connections are formed; some neurons are then not connected only to the directly following layer but skip one or more layers.
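A minimal Keras sketch contrasting a fully connected path with a short-cut; the layer sizes are illustrative assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(16,))  # input size is an assumption

# Fully connected path: each layer feeds only the directly following layer
h1 = layers.Dense(32, activation="relu")(inputs)
h2 = layers.Dense(32, activation="relu")(h1)

# Short-cut: the input skips h1 and h2 and reaches the last layer directly
merged = layers.Concatenate()([h2, inputs])
outputs = layers.Dense(1)(merged)

model = keras.Model(inputs, outputs)
model.summary()
```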