In this book, we propose novel deterministic RNN training algorithms that adopt a nonmonotone approach: the training error is allowed to increase in some iterations, yet overall learning performance improves over time. The nonmonotone RNN training methods, which draw their theoretical basis from deterministic nonlinear optimisation, aim to explore the search space more thoroughly and enhance the convergence behaviour of gradient-based methods. They generate nonmonotone behaviour by incorporating acceptance conditions that employ forcing functions, which measure the sufficiency of error reduction, together with an adaptive window whose size is set by locally estimating the morphology of the error surface. The thesis develops nonmonotone first- and second-order methods and discusses their convergence properties. The proposed algorithms are applied to training RNNs of various sizes and architectures, namely Feed-Forward Time-Delay networks, Elman networks and Nonlinear Autoregressive with Exogenous Inputs (NARX) networks, on symbolic sequence processing problems. Numerical results show that the proposed nonmonotone learning algorithms train these networks more effectively than their monotone counterparts.
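The acceptance condition described above can be illustrated with a minimal sketch of a nonmonotone backtracking gradient step in the style of Grippo–Lampariello–Lucidi: a step is accepted if the new error is sufficiently below the maximum error over a memory window of recent iterations, rather than below the most recent error alone. All names, the fixed window size `M`, and the quadratic forcing term are illustrative assumptions, not the book's actual algorithms.

```python
import numpy as np

def nonmonotone_step(f, grad_f, w, eta=1.0, window=None, M=10,
                     sigma=1e-4, beta=0.5, max_backtracks=30):
    """One nonmonotone gradient step (illustrative sketch only).

    A trial point is accepted when f(w - eta*g) lies below the maximum
    of the last M error values minus a forcing term sigma*eta*||g||^2,
    so the error may rise in individual iterations.
    """
    if window is None:
        window = [f(w)]                 # memory window of recent errors
    g = grad_f(w)
    ref = max(window[-M:])              # nonmonotone reference value
    w_new = w
    for _ in range(max_backtracks):
        w_new = w - eta * g
        # forcing-function condition: sufficient reduction w.r.t. ref
        if f(w_new) <= ref - sigma * eta * np.dot(g, g):
            break
        eta *= beta                     # backtrack the step length
    window.append(f(w_new))
    return w_new, window

# Toy usage on a convex quadratic error surface
w, win = np.array([3.0, 4.0]), None
for _ in range(20):
    w, win = nonmonotone_step(lambda x: float(np.dot(x, x)),
                              lambda x: 2 * x, w, window=win)
```

In the book's methods the window size is adaptive, informed by local estimates of the error-surface morphology, rather than the fixed `M` used here.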
Number of Pages: 256
Country of Manufacture: India
Publisher: LAP LAMBERT Academic Publishing