Artificial Intelligence Fight V. – Playing with activation functions, introducing CUDA C/C++, and thoughts about SGI, Nvidia and Intel.

Positive results My marketing department that’s just around in the bedroom (where dreams come t̶r̶u̶e̶ and go) has been bugging me to continue the AI Fight sequel, so here it is. When I reach #XVI, someone please warn me diplomatically to stop, otherwise it will gain consciousness and start its own Netflix pilot. There is […]

The WPF Test Harness application

I think it is time for me to provide some explanation about the test harness built around the DeepTrainer library. DeepTrainer in itself is only a C++ library with a .NET wrapper, and these test harness applications demonstrate its usage. I’ve had various enquiries about the library recently, so I thought it is much […]

Calculating dot product using matrix partitioning

Matrices have a beautiful property that comes in very handy when creating fast dot product algorithms. If a matrix is partitioned into equal-sized blocks, and the blocks themselves are treated as cells of a new matrix, then the dot product calculation works the same way on the new matrix. In the DeepTrainer project I have […]
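
The block identity described above is easiest to see on a small concrete case. Here is a minimal C++ sketch (illustrative only, with hypothetical names such as naive_matmul and blocked_matmul; it is not the DeepTrainer code) that computes the same product once with the plain triple loop and once block by block, then checks that the results agree:

```cpp
// Minimal illustration of block-partitioned matrix multiplication.
// Hypothetical names (naive_matmul, blocked_matmul, N, BLOCK); not DeepTrainer code.
#include <cassert>
#include <cmath>
#include <cstdio>
#include <vector>

constexpr int N = 8;      // matrix dimension (assumed divisible by BLOCK)
constexpr int BLOCK = 4;  // block (tile) size

using Matrix = std::vector<double>;  // row-major N x N

// Reference: plain triple-loop product C = A * B.
Matrix naive_matmul(const Matrix& a, const Matrix& b) {
    Matrix c(N * N, 0.0);
    for (int i = 0; i < N; ++i)
        for (int k = 0; k < N; ++k)
            for (int j = 0; j < N; ++j)
                c[i * N + j] += a[i * N + k] * b[k * N + j];
    return c;
}

// Blocked version: treat each BLOCK x BLOCK tile as a "cell" of a smaller
// matrix and multiply tile by tile; the accumulated result is identical.
Matrix blocked_matmul(const Matrix& a, const Matrix& b) {
    Matrix c(N * N, 0.0);
    for (int bi = 0; bi < N; bi += BLOCK)
        for (int bk = 0; bk < N; bk += BLOCK)
            for (int bj = 0; bj < N; bj += BLOCK)
                // Multiply tile A[bi..][bk..] by tile B[bk..][bj..]
                // and accumulate into tile C[bi..][bj..].
                for (int i = bi; i < bi + BLOCK; ++i)
                    for (int k = bk; k < bk + BLOCK; ++k)
                        for (int j = bj; j < bj + BLOCK; ++j)
                            c[i * N + j] += a[i * N + k] * b[k * N + j];
    return c;
}

int main() {
    Matrix a(N * N), b(N * N);
    for (int i = 0; i < N * N; ++i) { a[i] = i % 7; b[i] = (i * 3) % 5; }

    Matrix c1 = naive_matmul(a, b);
    Matrix c2 = blocked_matmul(a, b);
    for (int i = 0; i < N * N; ++i)
        assert(std::fabs(c1[i] - c2[i]) < 1e-9);
    std::puts("blocked and naive products match");
    return 0;
}
```

The blocked form is what makes cache-friendly and parallel variants possible: each tile product is an independent unit of work.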

Artificial Intelligence Fight II. – introducing parallel processing

Dear Reader, I have been working on a multithreaded implementation of the Backpropagation algorithm. The most computationally intensive parts of a learning iteration are the forward-propagation and backpropagation steps, which every algorithm uses to determine the gradients. The main difference between these algorithms is how the gradients are used to update the […]
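
The key observation behind the parallelisation the excerpt describes is that the per-sample forward and backward passes are independent, so a mini-batch can be split across threads and the resulting gradients summed afterwards. Below is a minimal sketch of that idea using std::thread and a toy one-parameter model; the names Sample and sample_gradient are made up for the illustration, and this is not the DeepTrainer implementation:

```cpp
// Sketch of parallelizing per-sample gradient computation across threads.
// Illustrative only, with hypothetical names (Sample, sample_gradient);
// not the DeepTrainer backpropagation code.
#include <cstdio>
#include <thread>
#include <vector>

struct Sample { double x, y; };

// Gradient of the squared error (w*x - y)^2 with respect to w for one sample.
// Stands in for the per-sample forward + backward pass of a real network.
double sample_gradient(double w, const Sample& s) {
    double err = w * s.x - s.y;   // "forward" pass
    return 2.0 * err * s.x;       // "backward" pass
}

int main() {
    std::vector<Sample> batch = {{1, 2}, {2, 4}, {3, 5}, {4, 9}, {5, 11}, {6, 12}};
    double w = 0.5;

    unsigned thread_count = 2;
    std::vector<double> partial(thread_count, 0.0);
    std::vector<std::thread> workers;

    // Each thread handles its own slice of the mini-batch; the per-sample
    // gradients are independent, so no locking is needed here.
    for (unsigned t = 0; t < thread_count; ++t) {
        workers.emplace_back([&, t] {
            for (size_t i = t; i < batch.size(); i += thread_count)
                partial[t] += sample_gradient(w, batch[i]);
        });
    }
    for (auto& th : workers) th.join();

    double grad = 0.0;
    for (double p : partial) grad += p;
    grad /= batch.size();

    // How this averaged gradient updates the weights is exactly where the
    // different training algorithms (plain SGD, momentum, etc.) diverge.
    std::printf("averaged gradient: %f\n", grad);
    return 0;
}
```

Only the accumulation of the per-thread partial sums needs any coordination; here each thread writes to its own slot, so a final single-threaded reduction is enough.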