XNAS: A Regressive/Progressive NAS for Deep Learning

Author(s): Kung, SY

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr17w6759p
Abstract: Deep learning has achieved broad breakthroughs in many real-world applications. In particular, the task of training network parameters has been masterfully handled by back-propagation learning. However, the pursuit of optimal network structures remains largely an art of trial and error, which lends urgency to exploring an architecture engineering process, collectively known as Neural Architecture Search (NAS). In general, NAS is a software design system for automating the search for effective neural architectures. This article proposes an X-learning NAS (XNAS) to automatically train a network’s structure and parameters. Our theoretical footing is built upon subspace and correlation analyses between the input, hidden, and output layers. The design strategy hinges on the underlying principle that the network should be coerced to learn how to structurally improve the input/output correlation successively, i.e., layer by layer. It embraces both Progressive NAS (PNAS) and Regressive NAS (RNAS). For unsupervised RNAS, Principal Component Analysis (PCA) is a classic tool for subspace analysis. By further incorporating the teacher’s guidance, PCA can be extended to Regression Component Analysis (RCA) to facilitate supervised NAS design, allowing the machine to extract the components most critical to the targeted learning objective. We further extend the subspace analysis from multi-layer perceptrons to convolutional neural networks via the introduction of Convolutional-PCA (CPCA) or, more simply, Deep-PCA (DPCA). The supervised variant of DPCA is named Deep-RCA (DRCA). These subspace analyses allow us to compute optimal eigenvectors (respectively, eigen-filters) and principal components (respectively, eigen-channels) for optimal NAS design of multi-layer perceptrons (respectively, convolutional neural networks). Based on the theoretical analysis, an X-learning paradigm is developed to jointly learn the structure and parameters of learning models. The objective is to reduce network complexity while retaining, and sometimes improving, performance. With carefully pre-selected baseline models, X-learning has shown great success in numerous classification- and regression-type applications. We have applied X-learning to the ImageNet dataset for classification and DIV2K for image enhancement. By applying X-learning to two types of baseline models, MobileNet and ResNet, both low-power and high-performance application categories can be supported. Our simulations confirm that X-learning is, by and large, very competitive with state-of-the-art approaches.
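The abstract's central idea, ranking a layer's components by subspace analysis and keeping only the most useful ones, can be illustrated with a minimal sketch. The snippet below is not the paper's XNAS algorithm; it is a hypothetical NumPy illustration of PCA-style ranking of hidden activations (unsupervised, in the spirit of RNAS) and a regression-guided variant that scores directions by their cross-covariance with the targets (in the spirit of RCA). The function names and the 95% energy threshold are assumptions chosen for the example.

```python
import numpy as np

def pca_rank(H):
    """Rank hidden directions by activation variance (unsupervised, PCA-style).

    H: (n_samples, n_units) matrix of hidden-layer activations.
    Returns eigenvalues (descending) and eigenvectors of the sample covariance.
    """
    Hc = H - H.mean(axis=0)                    # center activations
    cov = Hc.T @ Hc / (len(H) - 1)             # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1]          # sort descending
    return eigvals[order], eigvecs[:, order]

def rca_rank(H, Y):
    """Rank hidden directions by correlation with the targets (supervised, RCA-style).

    Scores each direction by how much of the cross-covariance with the
    labels Y it explains, rather than by activation variance alone.
    """
    Hc = H - H.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    cross = Hc.T @ Yc / (len(H) - 1)           # cross-covariance with targets
    M = cross @ cross.T                        # symmetric regression matrix
    eigvals, eigvecs = np.linalg.eigh(M)
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order]

def keep_k(eigvals, energy=0.95):
    """Smallest k whose leading eigenvalues capture `energy` of the total."""
    eigvals = np.clip(eigvals, 0.0, None)      # guard against numerical negatives
    frac = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(frac, energy) + 1)

# Toy usage: decide how many of 64 hidden units a pruned layer should keep.
rng = np.random.default_rng(0)
H = rng.standard_normal((1000, 64)) @ rng.standard_normal((64, 64))
Y = H[:, :5] @ rng.standard_normal((5, 3))     # targets depend on few directions
vals, _ = rca_rank(H, Y)
print("units to keep (95% energy):", keep_k(vals))
```

Under this reading, the unsupervised ranking shrinks a layer to the directions that carry activation energy, while the supervised ranking keeps only the directions that matter for the prediction task, which is why the label-guided variant can prune more aggressively without hurting accuracy.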
Publication Date: 29-Nov-2022
Citation: Kung, SY. (2022). XNAS: A Regressive/Progressive NAS for Deep Learning. ACM Transactions on Sensor Networks, 18 (4), 1 - 32. doi:10.1145/3543669
DOI: 10.1145/3543669
ISSN: 1550-4859
EISSN: 1550-4867
Pages: 1 - 32
Language: en
Type of Material: Journal Article
Journal/Proceeding Title: ACM Transactions on Sensor Networks
Version: Final published version. This is an open access article.

Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.