
Regularized Variational Bayesian Learning of Echo State Networks with Delay&Sum Readout

Author(s): Shutin, Dmitriy; Zechner, Christoph; Kulkarni, Sanjeev R; Poor, H Vincent

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1gf0mw5p
Full metadata record
DC Field | Value | Language
dc.contributor.author | Shutin, Dmitriy | -
dc.contributor.author | Zechner, Christoph | -
dc.contributor.author | Kulkarni, Sanjeev R | -
dc.contributor.author | Poor, H Vincent | -
dc.date.accessioned | 2023-12-24T18:47:43Z | -
dc.date.available | 2023-12-24T18:47:43Z | -
dc.date.issued | 2012-04 | en_US
dc.identifier.citation | Shutin, Dmitriy, Zechner, Christoph, Kulkarni, Sanjeev R, Poor, H Vincent. (2012). Regularized Variational Bayesian Learning of Echo State Networks with Delay&Sum Readout. Neural Computation, 24 (4), 967 - 995. doi:10.1162/neco_a_00253 | en_US
dc.identifier.issn | 0899-7667 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1gf0mw5p | -
dc.description.abstract | In this work, a variational Bayesian framework for efficient training of echo state networks (ESNs) with automatic regularization and delay&sum (D&S) readout adaptation is proposed. The algorithm uses classical batch learning of ESNs. By treating the network echo states as fixed basis functions parameterized with delay parameters, we propose a variational Bayesian ESN training scheme. The variational approach allows for a seamless combination of sparse Bayesian learning ideas and a variational Bayesian space-alternating generalized expectation-maximization (VB-SAGE) algorithm for estimating parameters of superimposed signals. While the former method realizes automatic regularization of ESNs, which also determines which echo states and input signals are relevant for “explaining” the desired signal, the latter method provides a basis for joint estimation of D&S readout parameters. The proposed training algorithm can naturally be extended to ESNs with fixed filter neurons. It also generalizes the recently proposed expectation-maximization-based D&S readout adaptation method. The proposed algorithm was tested on synthetic data prediction tasks as well as on dynamic handwritten character recognition. | en_US
dc.format.extent | 967 - 995 | en_US
dc.language | en | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | Neural Computation | en_US
dc.rights | Author's manuscript | en_US
dc.title | Regularized Variational Bayesian Learning of Echo State Networks with Delay&Sum Readout | en_US
dc.type | Journal Article | en_US
dc.identifier.doi | doi:10.1162/neco_a_00253 | -
dc.identifier.eissn | 1530-888X | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US
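
The abstract above describes automatic regularization of an ESN readout via sparse Bayesian learning, with the echo states treated as fixed basis functions. The following is a minimal, hypothetical Python sketch of that general idea only: a plain reservoir driven by a scalar input, followed by a classical ARD/evidence-maximization readout that prunes irrelevant echo states. It is not the paper's VB-SAGE algorithm and omits the delay&sum readout and delay estimation entirely; all function names and parameter values are illustrative assumptions.

```python
import numpy as np

def run_reservoir(u, n_res=100, spectral_radius=0.9, input_scale=0.5, seed=0):
    """Drive a random reservoir with input u and collect the echo states,
    which are then used as fixed basis functions for a linear readout.
    (Illustrative sketch; no delay parameters are modeled here.)"""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_res, n_res))
    # Heuristic rescaling toward the echo state property.
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    w_in = input_scale * rng.standard_normal(n_res)
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + w_in * u_t)
        states[t] = x
    return states

def ard_readout(Phi, d, n_iter=50):
    """Sparse Bayesian (ARD-style) linear readout: per-weight precisions alpha
    and noise precision beta are re-estimated by evidence maximization, which
    automatically regularizes and prunes irrelevant echo states."""
    N, M = Phi.shape
    alpha = np.ones(M)
    beta = 1.0
    for _ in range(n_iter):
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)  # posterior covariance
        mu = beta * Sigma @ Phi.T @ d                               # posterior mean weights
        gamma = 1.0 - alpha * np.diag(Sigma)                        # effective number of parameters
        alpha = gamma / (mu ** 2 + 1e-12)                           # relevance precision update
        beta = (N - gamma.sum()) / (np.sum((d - Phi @ mu) ** 2) + 1e-12)
    return mu, alpha, beta

# Toy usage: one-step-ahead prediction of a noisy sine wave.
t = np.arange(400)
u = np.sin(0.2 * t) + 0.01 * np.random.default_rng(1).standard_normal(len(t))
Phi = run_reservoir(u[:-1])
w, alpha, beta = ard_readout(Phi, u[1:])
print("pruned echo states:", int(np.sum(alpha > 1e3)), "of", Phi.shape[1])
```

The pruning behavior comes from the ARD updates: weights whose precision alpha grows very large are effectively switched off, which mirrors the abstract's point that the Bayesian treatment determines which echo states are relevant for explaining the desired signal.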

Files in This Item:
File | Description | Size | Format
esn_vb (1).pdf | | 420.31 kB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.