Calculating dot product using matrix partitioning

Matrices have a beautiful feature that comes in very handy when creating fast dot product algorithms. If a matrix is partitioned into equal-sized blocks, and the blocks themselves are treated as the cells of a new matrix, then the dot product can be calculated on this block matrix in exactly the same way, with block multiplications and additions taking the place of the scalar ones, and the result is unchanged. In the DeepTrainer project I have […]
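The excerpt stops here, but the idea it describes is standard block (partitioned) matrix multiplication. The sketch below is not taken from DeepTrainer; it is a minimal, hypothetical illustration assuming square row-major matrices whose dimension N is divisible by the block size BS. The three outer loops walk over blocks exactly as the ordinary triple loop walks over cells, with a block multiply-accumulate replacing each scalar multiply-accumulate.

```cpp
#include <cstddef>
#include <vector>

// Minimal sketch of a blocked product C = A * B for square N x N matrices
// stored in row-major order, assuming N is divisible by the block size BS.
// Each BS x BS block of C is accumulated from products of the matching block
// row of A and block column of B, mirroring the ordinary cell-wise triple loop.
void block_multiply(const std::vector<double>& A,
                    const std::vector<double>& B,
                    std::vector<double>& C,
                    std::size_t N, std::size_t BS)
{
    C.assign(N * N, 0.0);
    for (std::size_t bi = 0; bi < N; bi += BS)          // block row of A and C
        for (std::size_t bj = 0; bj < N; bj += BS)      // block column of B and C
            for (std::size_t bk = 0; bk < N; bk += BS)  // block index along the shared dimension
                // Multiply block A(bi, bk) by block B(bk, bj) and add the
                // result into block C(bi, bj).
                for (std::size_t i = bi; i < bi + BS; ++i)
                    for (std::size_t k = bk; k < bk + BS; ++k) {
                        const double a = A[i * N + k];
                        for (std::size_t j = bj; j < bj + BS; ++j)
                            C[i * N + j] += a * B[k * N + j];
                    }
}
```

The payoff is locality: each BS-by-BS block of A and B is reused many times while it still fits in cache, which is the usual reason blocked products outperform the naive loop on large matrices.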

Artificial Intelligence Fight II. – introducing parallel processing

Dear Reader, I have been working on a multithreaded implementation of the Backpropagation algorithm. The most computationally intensive parts of a learning iteration are the forward-propagation and backpropagation steps, which every algorithm uses to determine the gradients. The main difference between these algorithms is how the gradients are then used to update the […]
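The post is cut off above, but to illustrate the kind of parallelism it introduces, here is a hypothetical sketch (not the DeepTrainer code) of data-parallel gradient accumulation: the samples of one learning iteration are divided among std::thread workers, each worker runs a caller-supplied forward + backward pass for its share of the samples, and the per-thread gradient sums are added together before whichever update rule is in use (plain backpropagation, RProp, and so on) applies them.

```cpp
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// The gradient of every weight in the network, flattened into one vector.
using Gradients = std::vector<double>;

// Hypothetical sketch: split sample_count training samples across
// thread_count workers, each summing the gradients returned by the supplied
// forward + backward pass for its own samples, then add the partial sums.
// The supplied gradients_for_sample callable must be safe to call from
// several threads at once.
Gradients accumulate_gradients_parallel(
    std::size_t sample_count,
    std::size_t gradient_size,
    unsigned thread_count,
    const std::function<Gradients(std::size_t)>& gradients_for_sample)
{
    std::vector<Gradients> partial(thread_count, Gradients(gradient_size, 0.0));
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < thread_count; ++t) {
        workers.emplace_back([&, t] {
            // Each worker takes every thread_count-th sample, starting at t,
            // and accumulates only into its own partial sum.
            for (std::size_t s = t; s < sample_count; s += thread_count) {
                const Gradients g = gradients_for_sample(s);
                for (std::size_t i = 0; i < gradient_size; ++i)
                    partial[t][i] += g[i];
            }
        });
    }
    for (auto& w : workers) w.join();

    // Combine the per-thread sums into the total gradient; the chosen update
    // rule (plain gradient descent, RProp, ...) takes it from here.
    Gradients total(gradient_size, 0.0);
    for (const auto& p : partial)
        for (std::size_t i = 0; i < gradient_size; ++i)
            total[i] += p[i];
    return total;
}
```

Because each worker writes only to its own partial sum, no locking is needed during the accumulation itself; the only synchronisation point is the final join and reduction.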

C++11 DLL library for the Matrix-RProp algorithm

Dear Reader, This project is currently in progress, but I thought I would publish it anyway. I have created a modern C++ DLL from the code extracted from the Borland C++ Builder project, as my 15-year-old code is hardly going to be useful to anyone these days. https://github.com/bulyaki/NNTrainerLib What this library currently does […]