Backpropagation in a neural network made with micro:bits
http://sparse-dense.blogspot.com/2018/06/microbittwo-layer-perceptronxor.html
In that post, the network used weights that had already been fully trained on z = XOR(x, y), so the value of z for any input pair (x, y) could be computed with a single forward propagation.
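For reference, the forward computation in such a two-layer perceptron looks roughly like the minimal sketch below, written in plain Python. It assumes a 2-2-1 layout with sigmoid units; the weight values are illustrative placeholders that happen to approximate XOR, not the ones actually learned in that project.

import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

# Hidden layer: two neurons, each with weights for (x, y) and a bias.
# These values are hypothetical placeholders.
W_h = [[ 6.0,  6.0, -2.5],
       [-4.0, -4.0,  6.5]]
# Output layer: weights for the two hidden activations and a bias.
W_o = [ 8.0,  8.0, -12.0]

def forward(x, y):
    # One forward propagation: hidden activations, then output z
    h = [sigmoid(w[0] * x + w[1] * y + w[2]) for w in W_h]
    return sigmoid(W_o[0] * h[0] + W_o[1] * h[1] + W_o[2])

for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, round(forward(x, y), 3))   # close to XOR(x, y)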
This time I use the micro:bit neural network again, but now apply backpropagation, the foundation of deep learning. In general, training a neural network takes a long time, since forward propagation and backward propagation must be repeated many times. So here we use the good (but not fully trained) weights obtained in an earlier project as the initial weight values. Starting from these, the weight updates should finish within a small number of iterations, even on the micro:bit neural network. Please check the actual operation in the following video:
The video is also published on YouTube:
https://youtu.be/tsYr01lQ_HY
To learn XOR, weight updates are required for all four input cases (1, 1), (1, 0), (0, 1), and (0, 0). Cycling through these cases, the updates should be repeated until the error (the difference between the correct answer and the evaluated value z) becomes smaller than the tolerance (here, 0.1). For simplicity, however, this demonstration shows weight updates only for the first case, x = 1, y = 1, which converged after 5 iterations.
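As a rough illustration of this update loop, here is a minimal sketch in plain Python (the real project runs distributed across micro:bits over radio). It assumes sigmoid units, squared error, and plain gradient descent; the warm-start weights and the learning rate are placeholders, so the iteration count will not necessarily match the demonstration.

import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

# Warm-start weights: hypothetical stand-ins for the pre-trained values
W_h = [[ 5.0,  5.0, -2.0],
       [-3.0, -3.0,  5.0]]
W_o = [ 6.0,  6.0, -9.0]

x, y, target = 1.0, 1.0, 0.0   # first training case: XOR(1, 1) = 0
lr, tol = 2.0, 0.1             # learning rate (assumed) and tolerance

for step in range(1, 101):
    # Forward: hidden activations, then output z
    h = [sigmoid(w[0] * x + w[1] * y + w[2]) for w in W_h]
    z = sigmoid(W_o[0] * h[0] + W_o[1] * h[1] + W_o[2])
    err = target - z
    if abs(err) < tol:
        print("converged after", step - 1, "updates, z =", round(z, 3))
        break
    # Backward: delta terms from the chain rule for squared error
    d_o = err * z * (1.0 - z)
    d_h = [d_o * W_o[i] * h[i] * (1.0 - h[i]) for i in range(2)]
    # Gradient-descent updates: output layer, then hidden layer
    W_o = [W_o[0] + lr * d_o * h[0],
           W_o[1] + lr * d_o * h[1],
           W_o[2] + lr * d_o]
    for i in range(2):
        W_h[i][0] += lr * d_h[i] * x
        W_h[i][1] += lr * d_h[i] * y
        W_h[i][2] += lr * d_h[i]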
Please note that the neurons (micro:bits) of the hidden and output layers display an eastward arrow during Forward and a westward arrow during Backward. Also, after a Backward pass finishes, button A on the micro:bit must be pressed before the next Forward pass starts. This ensures the proper timing between receive and send on the micro:bit radio.
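A MicroPython sketch of what each hidden/output neuron's main loop might look like on the micro:bit is shown below. The arrow display, the receive/send pattern, and the button A pause follow the description above, while the message handling itself is left as a placeholder; the actual message format is an assumption.

from microbit import *
import radio

radio.on()

while True:
    # Forward: show an eastward arrow, wait for the upstream value,
    # then pass this neuron's activation downstream
    display.show(Image.ARROW_E)
    msg = None
    while msg is None:
        msg = radio.receive()
    # ... compute this neuron's activation from msg ...
    radio.send(msg)  # placeholder for sending the real activation

    # Backward: show a westward arrow and wait for the error signal
    display.show(Image.ARROW_W)
    msg = None
    while msg is None:
        msg = radio.receive()
    # ... update this neuron's weights from the received delta ...

    # Pause until button A is pressed, so every board is listening
    # again before the next Forward pass begins
    while not button_a.was_pressed():
        sleep(100)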