Research Repository

Simple and fast calculation of the second-order gradients for globalized dual heuristic dynamic programming in neural networks.

Fairbank, Michael and Alonso, Eduardo and Prokhorov, Danil (2012) 'Simple and fast calculation of the second-order gradients for globalized dual heuristic dynamic programming in neural networks.' IEEE Transactions on Neural Networks and Learning Systems, 23(10), pp. 1671-1676. ISSN 1045-9227

Text: IEEE SecondOrderBackpropForVGL.pdf - Accepted Version (302kB)

Abstract

We derive an algorithm to exactly calculate the mixed second-order derivatives of a neural network's output with respect to its input vector and weight vector. This is necessary for the adaptive dynamic programming (ADP) algorithms globalized dual heuristic programming (GDHP) and value-gradient learning. The algorithm calculates the inner product of this second-order matrix with a given fixed vector in a time that is linear in the number of weights in the neural network. We use a "forward accumulation" of the derivative calculations which produces a much more elegant and easy-to-implement solution than has previously been published for this task. In doing so, the algorithm makes GDHP simple to implement and efficient, bridging the gap between the widely used DHP and GDHP ADP methods.
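The quantity described in the abstract can be illustrated with modern automatic differentiation. The sketch below is not the paper's GDHP algorithm; it is a minimal JAX illustration, using an assumed toy network `net`, input `x` and weight-space direction `v`, of how the inner product of the mixed second-order matrix with a fixed vector can be obtained in time linear in the number of weights by a forward (JVP) pass over the input-gradient function, in the spirit of the forward accumulation the abstract mentions.

```python
# Illustrative sketch only (assumed toy network, not the paper's method):
# compute (d^2 y / dx dw) . v without forming the full mixed second-order matrix.
import jax
import jax.numpy as jnp

def net(w, x):
    """Tiny scalar-output network: y = w2 . tanh(W1 x)."""
    W1, w2 = w
    return jnp.dot(w2, jnp.tanh(W1 @ x))

# Gradient of the network output with respect to the *input* vector x.
grad_x = jax.grad(net, argnums=1)

def mixed_second_order_product(w, x, v):
    # Forward-mode (JVP) differentiation of grad_x with respect to the
    # *weights*, in the direction of the fixed weight-shaped vector v.
    # Cost is comparable to one extra forward/backward pass, i.e. linear
    # in the number of weights.
    _, jvp_out = jax.jvp(lambda w_: grad_x(w_, x), (w,), (v,))
    return jvp_out  # same shape as x

# Example usage with random weights, an input, and a direction vector.
key = jax.random.PRNGKey(0)
W1 = jax.random.normal(key, (4, 3))
w2 = jax.random.normal(key, (4,))
x = jnp.ones(3)
v = (jnp.ones_like(W1), jnp.ones_like(w2))  # fixed vector in weight space
print(mixed_second_order_product((W1, w2), x, v))
```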

Item Type: Article
Uncontrolled Keywords: Algorithms; Models, Theoretical; Computer Simulation; Numerical Analysis, Computer-Assisted; Neural Networks, Computer
Divisions: Faculty of Science and Health > School of Computer Science and Electronic Engineering
Depositing User: Elements
Date Deposited: 14 Apr 2021 13:40
Last Modified: 14 Apr 2021 14:15
URI: http://repository.essex.ac.uk/id/eprint/21300
