The Resilient Propagation (RProp) algorithm

The RProp algorithm is a supervised learning method for training multi-layer neural networks, first published in 1993 by Martin Riedmiller and Heinrich Braun. The idea behind it is that the magnitudes of the partial derivatives can have harmful effects on the weight updates. It implements an internal adaptive algorithm which focuses only on the signs of the […]
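
A minimal sketch of that sign-based rule (the variant without weight backtracking, using the step-size constants usually quoted for RProp: increase factor 1.2, decrease factor 0.5; the function name and array layout are illustrative, not taken from the paper):

    import numpy as np

    def rprop_update(w, grad, prev_grad, step,
                     eta_plus=1.2, eta_minus=0.5,
                     step_min=1e-6, step_max=50.0):
        # Only the sign of the gradient is used. Each weight keeps its
        # own step size: it grows while the gradient sign is stable and
        # shrinks when the sign flips. All arrays share w's shape.
        sign_change = grad * prev_grad
        step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
        w = w - np.sign(grad) * step
        return w, grad, step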

Common extensions to Backpropagation

Preconditioning weights

The outcome and speed of a learning process are influenced by the initial state of the network. However, it is impossible to tell in advance which initial state will be ideal. The commonly accepted way is to initialize the weights with uniformly distributed random numbers on the (0,1) interval.

Preconditioning data

Very often the training set […]
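
As an illustration, a short NumPy sketch of that weight initialization for a hypothetical 3-4-2 network, followed by one common form of data preconditioning (standardizing the inputs to zero mean and unit variance); the excerpt is cut off, so the standardization step is an assumption about typical practice, not a quote of the post:

    import numpy as np

    rng = np.random.default_rng(42)

    # Uniform random weights on (0, 1), one matrix per layer transition,
    # for an assumed 3-4-2 (input-hidden-output) network.
    w_hidden = rng.uniform(0.0, 1.0, size=(4, 3))
    w_output = rng.uniform(0.0, 1.0, size=(2, 4))

    # Common data preconditioning (assumption, the post is truncated here):
    # rescale each input feature to zero mean and unit variance.
    X = rng.normal(size=(100, 3))              # stand-in training inputs
    X = (X - X.mean(axis=0)) / X.std(axis=0)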

The Backpropagation algorithm

If there are multiple layers in a neural network, the inner layers have neither target values nor errors. This problem remained unsolved until the 1970s, when mathematicians found that the backpropagation algorithm could be applied to it. Backpropagation provides a way to train neural networks with any number of hidden layers. The neurons don't […]
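
To make the mechanism concrete, here is a minimal sketch of one backpropagation step for a network with a single hidden layer; the sigmoid activation, squared-error loss and all names are illustrative assumptions, not the post's own code:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def backprop_step(x, t, w1, w2, lr=0.1):
        # Forward pass through hidden and output layers.
        h = sigmoid(w1 @ x)                    # hidden activations
        y = sigmoid(w2 @ h)                    # network outputs
        # Backward pass: the output error is propagated back through w2,
        # which gives the hidden layer the error signal it has no target for.
        delta_out = (y - t) * y * (1 - y)
        delta_hid = (w2.T @ delta_out) * h * (1 - h)
        w2 -= lr * np.outer(delta_out, h)
        w1 -= lr * np.outer(delta_hid, x)
        return w1, w2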

Feedforward neural networks

I am covering only feedforward neural networks in this project. General features of such networks are: Neurons are grouped into one input layer, one or more hidden layers and an output layer. Neurons do not connect to each other within the same layer. Each neuron of a hidden layer connects to all the neurons of […]
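
A small sketch of the forward pass through such a layered, fully connected network; the sigmoid activation and the 3-5-2 layer sizes are illustrative assumptions:

    import numpy as np

    def forward(x, weights):
        # One weight matrix per layer transition; there are no
        # connections inside a layer, only to the next layer.
        a = x
        for w in weights:
            a = 1.0 / (1.0 + np.exp(-(w @ a)))   # sigmoid activation
        return a

    # Example: 3 inputs -> 5 hidden neurons -> 2 outputs.
    rng = np.random.default_rng(0)
    weights = [rng.uniform(size=(5, 3)), rng.uniform(size=(2, 5))]
    print(forward(np.array([0.5, -1.0, 2.0]), weights))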

Intro

Dear Reader, My aim with this blog is to publish some of the work I did in the field of Artificial Neural Networks. Between 2001 and 2004 I developed a Neural Network trainer software that was able to distribute workload among multiple networked workstations. I think the idea deserves to be preserved for future reference. In the […]