Friday, August 12, 2011

Neural Network (Part 5): The Back Propagation process

Back-propagation

Back-propagation is the process by which you move backwards through the neural network to adjust the weights and biases so as to reduce the total error of the network. The total error of the network is essentially the difference between the end results (actual Outputs) and the expected results. If you expected to get a result of 1 but instead got a 0, you would go back through the network and tweak each of the weight (and bias) values so that your end result is a little bit closer to 1 than before.
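To give that "total error" a concrete shape, here is a minimal sketch in the same Processing/Java style as the code later in this post. It measures the error as the sum of squared differences between the actual and expected outputs; that particular error measure (and the function name) is my illustration, not necessarily the exact one used elsewhere in this series.

// Minimal sketch (illustrative names): one common way to measure the total
// error is to sum the squared differences between what the network actually
// produced and what you expected it to produce.
float totalError(float[] actualOutputs, float[] expectedOutputs) {
  float total = 0;
  for (int i = 0; i < actualOutputs.length; i++) {
    float difference = expectedOutputs[i] - actualOutputs[i];
    total += difference * difference;   // square the difference so the sign doesn't matter
  }
  return total;
}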

The process of back-propagation is such that larger errors, and the larger weights and biases that create those errors, are penalised more than their smaller counterparts. Bigger weights have a bigger influence on the final outcome than smaller weights, and are therefore penalised more for incorrect answers.

After many training cycles, the neural network reaches a stage of equilibrium (not quite, but close enough), whereby further tweaking has an insignificant effect on the final outcome.

If you under-train, then you will get the wrong result more often than desired.
If you over-train, then the neural network will not be able to "think outside the square", so to speak.

So how do you propagate backwards?



Step 1: Feed-forward pass:
Send some data through the network to populate all the variables. This feed-forward pass allows you to calculate your actualOUTPUTs, which you will compare against your expectedOUTPUTs.
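As a rough illustration of what that feed-forward pass does for a single neuron, here is a minimal sketch (illustrative names, not the exact code from this series): each connEntry is multiplied by its connection weight, the bias is added, and the result is squashed with the sigmoid activation function. I'm assuming a sigmoid here because it matches the (aO) x (1 - aO) terms in the delta-error formulas below.

// Minimal sketch of the feed-forward calculation for one neuron
// (illustrative names; the series' own Neuron class does this internally).
float feedForwardNeuron(float[] connEntries, float[] weights, float bias) {
  float sum = bias;                               // start with the bias
  for (int i = 0; i < connEntries.length; i++) {
    sum += connEntries[i] * weights[i];           // add each weighted input
  }
  return (float) (1.0 / (1.0 + Math.exp(-sum)));  // sigmoid activation -> the neuron's output
}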



Step 2: Calculate the delta-error for the neurons in the last layer (output layer).
The delta-error calculation for the neuron(s) in the last layer of the neural network is a little bit different from that of the other layers. You can work this out once you have calculated the actualOUTPUTs from the feed-forward pass.

Let   Last Layer Neuron1.deltaError = LLN1.dE
      Last Layer.actualOutput1      = aO1       <--- this is the same as the Neuron1 output value
      Last Layer.expectedOutput1    = exO1

  •   LLN1.dE = (aO1) x (1 - aO1) x (exO1 - aO1)


Once you have calculated the deltaError for every neuron in the last layer (output layer), you can move onto the next step.
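In code, that output-layer formula could look something like this minimal sketch (illustrative names, not the exact code from this series):

// deltaError for a neuron in the last (output) layer:
// actualOutput x (1 - actualOutput) x (expectedOutput - actualOutput)
float outputDeltaError(float actualOutput, float expectedOutput) {
  return actualOutput * (1 - actualOutput) * (expectedOutput - actualOutput);
}

For example, if a last-layer neuron produced 0.75 but you expected 1.0, its deltaError would be 0.75 x 0.25 x 0.25, which is roughly 0.047.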



Step 3: Calculate the delta-error for the hidden layer neurons

A hidden layer, for this neural network, is any layer that is not the last layer. The layers sit one after the other, like ducks in a row, and we now work backwards through them, starting with the second-last layer in the network. This could in theory be the first layer in the network (if the network only had 2 layers).


HLN = Hidden Layer Neuron,
LLN = Last Layer Neuron,
aO  = actualOUTPUT,
dE  = deltaError


HLN.dE = (HLN.aO) x (1 - HLN.aO) x (Sum of [LLN.dE x (LLN-to-HLN connection weight)])


Keep moving back through the network layers until you reach the 1st layer (i.e. you run out of layers).
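Here is the same hidden-layer calculation as a minimal sketch (illustrative names; it assumes that, for one hidden neuron, you can collect the deltaError of each neuron in the next layer along with the weight of the connection from this hidden neuron to that next-layer neuron):

// deltaError for a hidden-layer neuron:
// HLN.aO x (1 - HLN.aO) x Sum of [ next-layer neuron's dE x connection weight ]
float hiddenDeltaError(float hiddenOutput, float[] nextLayerDeltaErrors, float[] weightsToNextLayer) {
  float weightedErrorSum = 0;
  for (int i = 0; i < nextLayerDeltaErrors.length; i++) {
    weightedErrorSum += nextLayerDeltaErrors[i] * weightsToNextLayer[i];
  }
  return hiddenOutput * (1 - hiddenOutput) * weightedErrorSum;
}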



Step 4: Update the connection weights and the neuron's bias.
a) Multiply the neuron's deltaError (which was calculated in either step 2 or 3) by the learning rate (0.1), and by the connection's connEntry value.
b) Then add this calculated value (from step 4a) to the current weight of the connection.

neuron.connections[i].weight += (learningRate * neuron.connections[i].connEntry * neuron.deltaError);

The bias is treated like a connection with a constant connEntry of 1, so the calculation is

neuron.bias +=  (learningRate * 1 * neuron.deltaError);
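Putting step 4 together for a whole neuron, a minimal sketch with plain arrays could look like this (the series uses its own Neuron/Connection objects instead, so the function and parameter names here are my illustration):

// Minimal sketch of step 4 with plain arrays (illustrative names):
// nudge every weight by (learningRate * connEntry * deltaError).
void updateWeights(float[] weights, float[] connEntries, float deltaError, float learningRate) {
  for (int i = 0; i < weights.length; i++) {
    weights[i] += learningRate * connEntries[i] * deltaError;   // steps 4a and 4b for each connection
  }
}

For example, with learningRate = 0.1, connEntry = 0.8 and deltaError = 0.05, a weight would increase by 0.1 x 0.8 x 0.05 = 0.004. The bias then gets the same treatment with a constant connEntry of 1, exactly as in the line above.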




Up Next: Neural Network (Part 6):



To go back to the table of contents click here


