Effect of derivative action on back-propagation algorithms
Abstract
Multilayer neural networks trained with supervised learning try to minimize the error between given correct answers and the outputs produced by the network. The weights of the network are adjusted at each iteration, and after a sufficient number of epochs the adjusted weights yield outputs close to the correct answers. Besides the current error, errors accumulated over past iterations are also used to update the weights. This resembles the integral action in control theory, although in machine learning the technique is known as the momentum method. Control theory uses one more technique to achieve faster tracking: the derivative action. In this research, we added the missing derivative action to the training algorithm and obtained promising results. The training algorithm with derivative action achieved a 3.8-times speedup compared to the momentum method.
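
To make the control-theory analogy concrete, the following minimal sketch shows a gradient-descent weight update that combines a momentum (integral) term with an added derivative term computed from the change in the gradient between iterations. The function name pid_update, the coefficients eta, alpha, and kd, and the quadratic demo loss are illustrative assumptions, not the paper's actual algorithm or settings.

    import numpy as np

    def pid_update(w, grad, prev_grad, velocity, eta=0.1, alpha=0.9, kd=0.3):
        """One weight update combining momentum (integral action) with an
        added derivative term; all coefficients here are illustrative."""
        velocity = alpha * velocity - eta * grad   # momentum: accumulated past gradients
        derivative = -kd * (grad - prev_grad)      # derivative action: reacts to gradient change
        return w + velocity + derivative, velocity

    # Demo on the quadratic loss L(w) = 0.5 * ||w||^2, whose gradient is w itself.
    w = np.array([5.0, -3.0])
    velocity = np.zeros_like(w)
    prev_grad = np.zeros_like(w)
    for _ in range(200):
        grad = w                                   # gradient of the demo loss at w
        w, velocity = pid_update(w, grad, prev_grad, velocity)
        prev_grad = grad
    print(w)                                       # converges toward the minimum at the origin

With kd set to zero the update reduces to the classical momentum method described above, and with alpha also set to zero it reduces to plain gradient descent.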