A learning algorithm for fault tolerant feedforward neural networks
Authors: Nait-Charif, H. and Ito, H.
Journal: IEICE Transactions on Information and Systems
Volume: E80-D
Pages: 21-27
ISSN: 0916-8532
Abstract: A new learning algorithm is proposed to enhance the fault tolerance of feedforward neural networks. The algorithm focuses on the links (weights) that may cause errors at the output when they suffer open faults. The relevance of each synaptic weight to the output error (i.e., the sensitivity of the output error to a fault in that weight) is estimated in each training cycle of standard backpropagation, using a Taylor expansion of the output around the fault-free weights. The weight with the maximum relevance is then decreased, so the algorithm prevents any weight from acquiring a large relevance. Simulation results indicate that networks trained with the proposed algorithm have significantly better fault tolerance than networks trained with standard backpropagation, and that generalization ability is also improved.
Source: Manual
Preferred by: Hammadi Nait-Charif
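The abstract above outlines the algorithm at a high level. Below is a minimal sketch, not the authors' exact formulation, of what such relevance-guided training could look like. It assumes a one-hidden-layer network on a toy task, approximates the relevance of a weight to an open fault (weight forced to zero) by the first-order Taylor term |w * dE/dw|, and scales down the single most relevant weight each cycle; the network size, learning rate, and `shrink` factor are illustrative choices, not values from the paper.

```python
# Sketch of relevance-guided backpropagation for fault tolerance (assumed details).
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
lr, shrink = 0.5, 0.95                    # learning rate and relevance penalty (assumed values)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)

    # Backward pass (squared-error loss)
    dY = (Y - T) * Y * (1 - Y)
    dW2 = H.T @ dY
    dH = (dY @ W2.T) * H * (1 - H)
    dW1 = X.T @ dH

    # Standard backpropagation update
    W1 -= lr * dW1
    W2 -= lr * dW2

    # Relevance of each weight to an open fault: first-order Taylor estimate
    # of the output-error change if that weight were forced to zero.
    rel1 = np.abs(W1 * dW1)
    rel2 = np.abs(W2 * dW2)

    # Decrease the single weight with the largest relevance, discouraging
    # any one link from becoming critical to the output.
    if rel1.max() >= rel2.max():
        idx = np.unravel_index(rel1.argmax(), W1.shape)
        W1[idx] *= shrink
    else:
        idx = np.unravel_index(rel2.argmax(), W2.shape)
        W2[idx] *= shrink
```

Fault tolerance of the trained network could then be checked by zeroing each weight in turn and measuring the resulting output error, which is the kind of open-fault test the abstract's simulations refer to.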