The RProp algorithm is a supervised learning method for training multilayer neural networks, published in 1994 by Martin Riedmiller. The idea behind it is that the magnitudes of the partial derivatives can have harmful effects on the weight updates. RProp therefore uses an internal adaptive scheme that considers only the signs of the derivatives and ignores their magnitudes entirely. The size of each weight update is determined by a per-weight update value Δ_{i,j}, which is independent of the magnitude of the gradient.
$latex
\Delta w_{i,j}^{(t)}=\begin{cases}
-\Delta_{i,j}^{(t)} & ,\text{if} \ \frac{\partial E^{(t)}}{\partial w_{i,j}} > 0 \\
+\Delta_{i,j}^{(t)} & ,\text{if} \ \frac{\partial E^{(t)}}{\partial w_{i,j}} < 0 \\
0 & ,\text{otherwise}
\end{cases}
&s=-2&bg=ffffff&fg=000000$
Here ∂E^{(t)}/∂w_{i,j} is the gradient summed over the whole pattern set, obtained from one pass of batch backpropagation. The second step of the RProp algorithm is to determine the update values Δ_{i,j}^{(t)}.
$latex
\Delta_{i,j}^{(t)}=
\begin{cases}
\eta^{+}\cdot\Delta_{i,j}^{(t-1)} & ,\text{if} \ \frac{\partial E^{(t-1)}}{\partial w_{i,j}} \cdot \frac{\partial E^{(t)}}{\partial w_{i,j}} > 0\\
\eta^{-}\cdot\Delta_{i,j}^{(t-1)} & ,\text{if} \ \frac{\partial E^{(t-1)}}{\partial w_{i,j}} \cdot \frac{\partial E^{(t)}}{\partial w_{i,j}} < 0\\
\Delta_{i,j}^{(t-1)} & , \text{otherwise}\end{cases}
&s=-2&bg=ffffff&fg=000000$
When the partial derivative with respect to w_{i,j} changes its sign, the last update was too big and the algorithm has jumped over a local minimum. In this case the update value Δ_{i,j}^{(t)} is decreased by the factor η−. If the derivative keeps its sign in consecutive steps, Δ_{i,j}^{(t)} is increased by the factor η+ to speed up convergence. The values of η− and η+ are constants; several tests showed that the choice η− = 0.5 and η+ = 1.2 gives very good results for almost all problems [Riedmiller, M. 1994].
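As a concrete illustration with hypothetical numbers: if Δ_{i,j}^{(t-1)} = 0.1 and the derivative keeps its sign, the new update value becomes Δ_{i,j}^{(t)} = 1.2 · 0.1 = 0.12; if the sign flips, it shrinks to Δ_{i,j}^{(t)} = 0.5 · 0.1 = 0.05.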
The sketch below shows the adaptation method of RProp. The first part of the algorithm, a simple batch backpropagation that produces the gradients, has already been discussed before.
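What follows is a minimal Python sketch of one RProp adaptation step, assuming the batch gradients grad and prev_grad for steps t and t−1 have already been computed by batch backpropagation. The function name rprop_step and the bounds DELTA_MAX and DELTA_MIN are illustrative additions, not part of the equations above; the bounds correspond to the safeguards Δmax = 50 and Δmin = 10^−6 suggested in [Riedmiller, M. 1994].

import numpy as np

ETA_PLUS = 1.2    # increase factor for the update values
ETA_MINUS = 0.5   # decrease factor for the update values
DELTA_MAX = 50.0  # upper bound on the update values (safeguard)
DELTA_MIN = 1e-6  # lower bound on the update values (safeguard)

def rprop_step(w, grad, prev_grad, delta):
    # sign_change > 0: the derivative kept its sign between t-1 and t
    # sign_change < 0: the derivative flipped its sign
    sign_change = prev_grad * grad
    # Adapt the per-weight update values according to the second equation.
    delta = np.where(sign_change > 0, np.minimum(delta * ETA_PLUS, DELTA_MAX), delta)
    delta = np.where(sign_change < 0, np.maximum(delta * ETA_MINUS, DELTA_MIN), delta)
    # Apply the first equation: move each weight against the sign of its
    # derivative; np.sign returns 0 where the derivative is 0, so those
    # weights stay unchanged.
    w = w - np.sign(grad) * delta
    return w, delta

At the start of training the update values Δ_{i,j}^{(0)} are initialized to a small constant; Δ0 = 0.1 is a common choice [Riedmiller, M. 1994].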