Network Shape
Input Neurons
Hidden Neurons
Output Neurons
Activation Functions
Hidden Layer
Output Layer
Learning Settings
Learning Rate
Current Epoch
Training Data
Network
How it works
The neural network consists of three layers: one input, one hidden, and one output layer. Each of these layers contains multiple neurons. All neurons in a layer are connected to all neurons in the following layer, and every connection has a weight that determines its importance. You can find a schematic sketch of a neuron here.
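The sketch below (TypeScript, not taken from the project's source) shows one possible way to represent this fully connected layout: one weight matrix per pair of layers, where entry [i][j] is the weight of the connection from neuron i in one layer to neuron j in the next.

```typescript
// A minimal, illustrative representation of the fully connected layout.
// weights[i][j] holds the weight of the connection from neuron i in one
// layer to neuron j in the following layer.
type WeightMatrix = number[][];

function randomWeights(from: number, to: number): WeightMatrix {
  // Start with small random weights; training adjusts them later.
  return Array.from({ length: from }, () =>
    Array.from({ length: to }, () => Math.random() * 2 - 1)
  );
}

// Example shape chosen for illustration: 2 input, 3 hidden, 1 output neuron.
const inputToHidden = randomWeights(2, 3);  // 2 x 3 connections
const hiddenToOutput = randomWeights(3, 1); // 3 x 1 connections

console.log(inputToHidden, hiddenToOutput);
```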
A hidden or output neuron sums up all incoming values, each multiplied by the weight of its connection, and applies an activation function to the sum in order to make the network non-linear. There are many different activation functions; you can find a plot of the ones used in this project here.
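As a rough sketch, assuming a sigmoid activation (the project may use other functions as well), the computation of a single hidden or output neuron could look like this:

```typescript
// One common activation function: squashes any number into the range (0, 1).
function sigmoid(x: number): number {
  return 1 / (1 + Math.exp(-x));
}

// Sum each input multiplied by its connection weight, then apply the activation.
function neuronOutput(inputs: number[], weights: number[]): number {
  const sum = inputs.reduce((acc, x, i) => acc + x * weights[i], 0);
  return sigmoid(sum);
}

console.log(neuronOutput([0.5, 0.8], [0.4, -0.2])); // ≈ 0.51
```

With tanh or ReLU instead of sigmoid, only the squashing function at the end changes; the weighted sum stays the same.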
In order to learn, the network has to be trained on specific data. It receives input and target values and then computes the error, i.e. the difference between the actual output and the target values. To adjust the weights and thus minimize the error, the error has to be calculated for each individual neuron. The algorithm used to propagate these errors backwards through the hidden layers is called backpropagation; each weight is then shifted by a small step, scaled by the learning rate, in the direction that reduces the error.
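Below is a condensed sketch of one such training step for a single hidden layer, assuming sigmoid activations and a squared-error loss; the variable names and the 2-3-1 shape are illustrative and not taken from the project's code.

```typescript
const sigmoid = (x: number): number => 1 / (1 + Math.exp(-x));

function trainStep(
  input: number[],
  target: number[],
  wIH: number[][],     // weights input -> hidden, wIH[i][j]
  wHO: number[][],     // weights hidden -> output, wHO[j][k]
  learningRate: number
): void {
  // Forward pass: weighted sum plus activation, layer by layer.
  const hidden = wIH[0].map((_, j) =>
    sigmoid(input.reduce((s, x, i) => s + x * wIH[i][j], 0))
  );
  const output = wHO[0].map((_, k) =>
    sigmoid(hidden.reduce((s, h, j) => s + h * wHO[j][k], 0))
  );

  // Output error: (target - output), scaled by the sigmoid derivative.
  const outDelta = output.map((o, k) => (target[k] - o) * o * (1 - o));

  // Hidden error: each output delta is sent backwards along its connection weight.
  const hidDelta = hidden.map((h, j) =>
    outDelta.reduce((s, d, k) => s + d * wHO[j][k], 0) * h * (1 - h)
  );

  // Weight updates: learning rate * neuron delta * value feeding the connection.
  for (let j = 0; j < hidden.length; j++)
    for (let k = 0; k < output.length; k++)
      wHO[j][k] += learningRate * outDelta[k] * hidden[j];
  for (let i = 0; i < input.length; i++)
    for (let j = 0; j < hidden.length; j++)
      wIH[i][j] += learningRate * hidDelta[j] * input[i];
}

// One illustrative update on a tiny 2-3-1 network.
const wIH = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]];
const wHO = [[0.1], [0.2], [0.3]];
trainStep([0.5, 0.8], [1], wIH, wHO, 0.5);
console.log(wIH, wHO);
```

Repeating this step over the whole training data once corresponds to one epoch; a larger learning rate means bigger weight changes per step.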