A Meta-Reinforcement Learning Approach to Optimize Parameters and Hyper-parameters Simultaneously

Authors: Ali, A.R., Budka, M. and Gabrys, B.

Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Volume: 11671 LNAI

Pages: 93-106

eISSN: 1611-3349

ISBN: 9783030299101

ISSN: 0302-9743

DOI: 10.1007/978-3-030-29911-8_8

Abstract:

In the last few years, we have witnessed a resurgence of interest in neural networks. State-of-the-art deep neural network architectures are, however, challenging to design from scratch and require computationally costly empirical evaluations. Hence, a lot of research effort has been dedicated to the effective utilisation and adaptation of previously proposed architectures, either through transfer learning or by modifying the original architecture. The ultimate goal of designing a network architecture is to achieve the best possible accuracy for a given task or group of related tasks. Although there have been some efforts to automate the network architecture design process, most of the existing solutions are still very computationally intensive. This work presents a framework that automatically finds a good set of hyper-parameters resulting in reasonably good accuracy, while at the same time being less computationally expensive than the existing approaches. The idea is to frame hyper-parameter selection and tuning within the reinforcement learning regime, so that the parameters of the meta-learner, an RNN, and the hyper-parameters of the target network are tuned simultaneously. The meta-learner is updated using a policy network and generates a tuple of hyper-parameters, which is used to train the target network. The target network is trained on a given task for a number of steps, and the delta of its validation accuracy is used as the reward. The reward, together with the state of the network, comprising statistics of the network's final-layer outputs and the training loss, is fed back to the meta-learner, which in turn generates a tuned tuple of hyper-parameters for the next time-step. The effectiveness of a recommended tuple can therefore be evaluated very quickly, rather than waiting for the network to converge. This approach produces accuracy close to that of the state-of-the-art approaches and is found to be comparatively less computationally intensive.
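The abstract outlines an iterative loop: an RNN meta-learner proposes a tuple of hyper-parameters, the target network trains with them for a few steps, and the delta in validation accuracy is fed back as a reward through a policy-gradient update. Below is a minimal sketch of that loop in PyTorch on synthetic data; it is not the authors' implementation, and the state features, the candidate learning-rate grid (lr_grid) and all other names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic binary-classification task standing in for the paper's target task.
X = torch.randn(512, 10)
y = (X.sum(dim=1) > 0).long()
X_tr, y_tr, X_va, y_va = X[:384], y[:384], X[384:], y[384:]

target_net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

# Hypothetical discrete action space: the meta-learner picks a learning rate.
lr_grid = [1e-1, 1e-2, 1e-3, 1e-4]

# Meta-learner: an RNN cell whose hidden state persists across time-steps,
# with a policy head over the candidate hyper-parameter grid.
STATE_DIM = 3   # [mean of final-layer outputs, their std, training loss]
cell = nn.GRUCell(STATE_DIM, 16)
policy_head = nn.Linear(16, len(lr_grid))
meta_opt = torch.optim.Adam(
    list(cell.parameters()) + list(policy_head.parameters()), lr=1e-3)

def val_accuracy(net):
    with torch.no_grad():
        return (net(X_va).argmax(dim=1) == y_va).float().mean().item()

h = torch.zeros(1, 16)              # meta-learner hidden state
state = torch.zeros(1, STATE_DIM)   # initial state of the target network
prev_acc = val_accuracy(target_net)

for t in range(20):                 # meta time-steps
    # The meta-learner recommends a hyper-parameter tuple (here just a lr).
    h = cell(state, h)
    dist = torch.distributions.Categorical(logits=policy_head(h))
    action = dist.sample()
    lr = lr_grid[action.item()]

    # Train the target network for a few steps with the recommended value.
    opt = torch.optim.SGD(target_net.parameters(), lr=lr)
    for _ in range(5):
        loss = F.cross_entropy(target_net(X_tr), y_tr)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Reward: delta of validation accuracy, so a recommended tuple is
    # judged quickly instead of waiting for full convergence.
    acc = val_accuracy(target_net)
    reward = acc - prev_acc
    prev_acc = acc

    # New state: statistics of the final-layer outputs plus training loss.
    with torch.no_grad():
        out = target_net(X_tr)
    state = torch.tensor([[out.mean().item(), out.std().item(), loss.item()]])

    # REINFORCE-style update of the meta-learner's policy.
    meta_opt.zero_grad()
    (-dist.log_prob(action) * reward).backward()
    meta_opt.step()
    h = h.detach()  # truncate backprop so each step's graph is freed
```

Scoring each recommendation by the accuracy delta after only a handful of training steps, rather than by the final accuracy of a fully trained network, is what the abstract identifies as the source of the computational savings.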

https://eprints.bournemouth.ac.uk/32529/

Source: Scopus

A Meta-Reinforcement Learning Approach to Optimize Parameters and Hyper-parameters Simultaneously

Authors: Ali, A.R., Budka, M. and Gabrys, B.

Journal: PRICAI 2019: Trends in Artificial Intelligence, Part II

Volume: 11671

Pages: 93-106

eISSN: 1611-3349

ISBN: 978-3-030-29910-1

ISSN: 0302-9743

DOI: 10.1007/978-3-030-29911-8_8

https://eprints.bournemouth.ac.uk/32529/

Source: Web of Science (Lite)

A Meta-Reinforcement Learning Approach to Optimize Parameters and Hyper-parameters Simultaneously

Authors: Ali, A.R., Budka, M. and Gabrys, B.

Conference: The 16th Pacific Rim International Conference on Artificial Intelligence

Dates: 26-30 August 2019

https://eprints.bournemouth.ac.uk/32529/

Source: Manual

A Meta-Reinforcement Learning Approach to Optimize Parameters and Hyper-parameters Simultaneously.

Authors: Ali, A.R., Budka, M. and Gabrys, B.

Editors: Nayak, A.C. and Sharma, A.

Journal: PRICAI (2)

Volume: 11671

Pages: 93-106

Publisher: Springer

ISBN: 978-3-030-29910-1

https://eprints.bournemouth.ac.uk/32529/

https://doi.org/10.1007/978-3-030-29911-8_8

Source: DBLP

A Meta-Reinforcement Learning Approach to Optimize Parameters and Hyper-parameters Simultaneously.

Authors: Ali, A.R., Budka, M. and Gabrys, B.

Conference: 16th Pacific Rim International Conference on Artificial Intelligence


https://eprints.bournemouth.ac.uk/32529/

https://www.pricai.org/2019/10-main-page/11-the-16th-pacific-rim-international-conference-on-artificial-intelligence

Source: BURO EPrints