Why DeepTrainer?

Recently I’ve been reading quite a lot about activation functions and Neural Networks in general and I think I found a good answer to a question that has been bugging me (and others who know what I am working on) ever since I started working on my own deep learning framework. I’ve had conversations with […]

Artificial Intelligence Fight V. – Playing with activation functions, introducing CUDA C/C++, and thoughts about SGI, Nvidia and Intel.

Positive results My marketing department that's just around in the bedroom (where dreams come t̶r̶u̶e̶ and go) has been bugging me to continue the AI Fight sequel, so here it is. When I reach #XVI, someone please warn me diplomatically to stop, otherwise it will gain consciousness and start its own Netflix pilot. There is […]

The WPF Test Harness application

I think it is time for me to provide some explanation about the test harness built around the DeepTrainer library. DeepTrainer itself is only a C++ library with a .NET wrapper, and these test harness applications demonstrate its usage. I've had various enquiries about the library recently, so I thought it is much […]

Calculating dot product using matrix partitioning

Matrices have a beautiful property that comes in very handy when creating fast dot product algorithms. If a matrix is partitioned into equal-size blocks, and the blocks themselves are treated as cells of a new matrix, then the dot product computed on the block matrix is the same as the dot product of the original. In the DeepTrainer project I have […]