A New Framework for Fine Tuning of Deep Networks
Authors: Wani, M.A. and Afzal, S.
Journal: Proceedings - 16th IEEE International Conference on Machine Learning and Applications, ICMLA 2017
Volume: 2017-December
Pages: 359-363
DOI: 10.1109/ICMLA.2017.0-135
Abstract: Very often, training of deep neural networks involves two learning phases: unsupervised pretraining and supervised fine tuning. Unsupervised pretraining is used to learn the parameters of a deep neural network, while supervised fine tuning improves upon what has been learnt in the pretraining stage. The predominant algorithm used for supervised fine tuning of deep neural networks is the standard backpropagation algorithm. However, in the field of shallow neural networks, a number of modifications to the backpropagation algorithm have been proposed that improve the performance of the trained model. In this paper, we propose a hybrid approach that integrates a gain parameter based backpropagation algorithm with the dropout technique, and we evaluate its effectiveness for fine tuning of deep neural networks on three benchmark datasets. The results indicate that the proposed hybrid approach performs better fine tuning than the backpropagation algorithm alone.
Source: Scopus; Web of Science (Lite)
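
A minimal sketch of the kind of hybrid update the abstract describes, assuming a one-hidden-layer network, a logistic activation with a gain (slope) parameter, and inverted dropout on the hidden layer. The function names, layer sizes, and hyperparameter values (gain, drop_rate, lr) are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z, gain):
    # Logistic activation with a gain parameter: f(z) = 1 / (1 + exp(-gain * z)).
    return 1.0 / (1.0 + np.exp(-gain * z))

def fine_tune_step(X, y, W1, b1, W2, b2, gain=1.5, drop_rate=0.5, lr=0.1):
    # One supervised fine-tuning step on (pretrained) weights W1, b1, W2, b2.
    # Forward pass: gain-parameter sigmoid plus inverted dropout on the hidden layer.
    h = sigmoid(X @ W1 + b1, gain)
    mask = (rng.random(h.shape) > drop_rate) / (1.0 - drop_rate)
    h_drop = h * mask
    out = sigmoid(h_drop @ W2 + b2, gain)
    # Backward pass: the gain scales every sigmoid derivative, so it rescales
    # the error signal relative to standard backpropagation.
    d_out = (out - y) * gain * out * (1.0 - out)          # squared-error output delta
    d_h = (d_out @ W2.T) * mask * gain * h * (1.0 - h)    # dropout mask gates the gradient
    W2 -= lr * h_drop.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
    return float(np.mean((out - y) ** 2))

# Toy usage on random data standing in for pretrained weights and a batch.
X, y = rng.random((8, 4)), rng.random((8, 1))
W1, b1 = 0.1 * rng.standard_normal((4, 6)), np.zeros(6)
W2, b2 = 0.1 * rng.standard_normal((6, 1)), np.zeros(1)
print(fine_tune_step(X, y, W1, b1, W2, b2))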