Towards Meta-learning of Deep Architectures for Efficient Domain Adaptation

Authors: Ali, A.R., Budka, M. and Gabrys, B.

Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Volume: 11671 LNAI

Pages: 66-79

eISSN: 1611-3349

ISBN: 9783030299101

ISSN: 0302-9743

DOI: 10.1007/978-3-030-29911-8_6

Abstract:

This paper proposes an efficient domain adaptation approach that uses deep learning together with transfer and meta-level learning. The objective is to identify how many blocks (i.e. groups of consecutive layers) of a pre-trained image classification network need to be fine-tuned, based on the characteristics of the new task. To investigate this, a number of experiments were conducted using different pre-trained networks and image datasets. The networks were fine-tuned on various tasks whose characteristics differ from those of the original task, starting from the blocks containing the output layers and progressively moving towards the input layer. The amount of fine-tuning a pre-trained network requires (i.e. the number of top layers needing adaptation) usually depends on the complexity, size, and domain similarity of the original and new tasks. Given these characteristics, two questions arise: how many blocks of the network need to be fine-tuned to reach the maximum possible accuracy, and which of the available pre-trained networks requires fine-tuning of the fewest blocks to achieve it? The experiments, which involve three network architectures (each divided into 10 blocks on average) and five datasets, empirically confirm the intuition that there is a relationship between the similarity of the original and new tasks and the depth of the network that needs to be fine-tuned in order to achieve accuracy comparable with that of a model trained from scratch. Further analysis shows that fine-tuning only the final (top) blocks of the network, which represent high-level features, is sufficient in most cases. Moreover, we have empirically verified that less similar tasks require fine-tuning of deeper portions of the network, which is nevertheless still better than training a network from scratch.
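The abstract describes grouping a pre-trained network into blocks and unfreezing them progressively, starting from the output layers and moving towards the input. The sketch below is only a rough illustration of that idea, not the authors' code: it uses PyTorch with an ImageNet-pre-trained ResNet-18, and the block grouping, NUM_CLASSES, and the train_loader/evaluate helpers are illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation): progressive block-wise
# fine-tuning of a pre-trained ResNet-18. Data loaders and the number of target
# classes are placeholders for the new task.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # hypothetical number of classes in the new task
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

def build_model():
    """Load an ImageNet-pre-trained ResNet-18 and replace its output layer."""
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model.to(DEVICE)

def blocks(model):
    """Group the network into blocks (groups of consecutive layers),
    ordered from the input side to the output side."""
    return [
        nn.Sequential(model.conv1, model.bn1),    # stem
        model.layer1, model.layer2,
        model.layer3, model.layer4,               # residual stages
        model.fc,                                 # new classification head
    ]

def set_trainable_top_blocks(model, k):
    """Freeze all parameters, then unfreeze only the k top-most blocks
    (those closest to the output layer)."""
    for p in model.parameters():
        p.requires_grad = False
    for block in blocks(model)[-k:]:
        for p in block.parameters():
            p.requires_grad = True

def fine_tune(model, train_loader, epochs=3, lr=1e-3):
    """Standard supervised fine-tuning of the currently unfrozen parameters."""
    params = [p for p in model.parameters() if p.requires_grad]
    optimiser = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(DEVICE), labels.to(DEVICE)
            optimiser.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimiser.step()

# Progressive fine-tuning: k = 1 adapts only the output head; larger k moves
# the fine-tuning frontier towards the input layer. train_loader, val_loader
# and evaluate() are assumed to exist for the new task.
# for k in range(1, len(blocks(build_model())) + 1):
#     model = build_model()
#     set_trainable_top_blocks(model, k)
#     fine_tune(model, train_loader)
#     print(k, evaluate(model, val_loader))
```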

https://eprints.bournemouth.ac.uk/32528/

Source: Scopus

Towards Meta-learning of Deep Architectures for Efficient Domain Adaptation

Authors: Ali, A.R., Budka, M. and Gabrys, B.

Journal: PRICAI 2019: TRENDS IN ARTIFICIAL INTELLIGENCE, PT II

Volume: 11671

Pages: 66-79

eISSN: 1611-3349

ISBN: 978-3-030-29910-1

ISSN: 2945-9133

DOI: 10.1007/978-3-030-29911-8_6

https://eprints.bournemouth.ac.uk/32528/

Source: Web of Science (Lite)

Towards Meta-learning of Deep Architectures for Efficient Domain Adaptation

Authors: Ali, A.R., Budka, M. and Gabrys, B.

Conference: The 16th Pacific Rim International Conference on Artificial Intelligence

Dates: 26-30 August 2019

https://eprints.bournemouth.ac.uk/32528/

Source: Manual

Towards Meta-learning of Deep Architectures for Efficient Domain Adaptation.

Authors: Ali, A.R., Budka, M. and Gabrys, B.

Editors: Nayak, A.C. and Sharma, A.

Journal: PRICAI (2)

Volume: 11671

Pages: 66-79

Publisher: Springer

ISBN: 978-3-030-29910-1

https://eprints.bournemouth.ac.uk/32528/

https://doi.org/10.1007/978-3-030-29911-8

Source: DBLP

Towards Meta-learning of Deep Architectures for Efficient Domain Adaptation

Authors: Ali, A.R., Budka, M. and Gabrys, B.

Conference: 16th Pacific Rim International Conference on Artificial Intelligence

Abstract:

This paper proposes an efficient domain adaptation approach that uses deep learning together with transfer and meta-level learning. The objective is to identify how many blocks (i.e. groups of consecutive layers) of a pre-trained image classification network need to be fine-tuned, based on the characteristics of the new task. To investigate this, a number of experiments were conducted using different pre-trained networks and image datasets. The networks were fine-tuned on various tasks whose characteristics differ from those of the original task, starting from the blocks containing the output layers and progressively moving towards the input layer. The amount of fine-tuning a pre-trained network requires (i.e. the number of top layers needing adaptation) usually depends on the complexity, size, and domain similarity of the original and new tasks. Given these characteristics, two questions arise: how many blocks of the network need to be fine-tuned to reach the maximum possible accuracy, and which of the available pre-trained networks requires fine-tuning of the fewest blocks to achieve it? The experiments, which involve three network architectures (each divided into 10 blocks on average) and five datasets, empirically confirm the intuition that there is a relationship between the similarity of the original and new tasks and the depth of the network that needs to be fine-tuned in order to achieve accuracy comparable with that of a model trained from scratch. Further analysis shows that fine-tuning only the final (top) blocks of the network, which represent high-level features, is sufficient in most cases. Moreover, we have empirically verified that less similar tasks require fine-tuning of deeper portions of the network, which is nevertheless still better than training a network from scratch.

https://eprints.bournemouth.ac.uk/32528/

https://www.pricai.org/2019/10-main-page/11-the-16th-pacific-rim-international-conference-on-artificial-intelligence

Source: BURO EPrints