New XNNS file format to save and load neural networks

I have been reading about the ONNX file format recently, which was created by Facebook and Microsoft, but before I delve into protocol buffers I still needed an easily readable (read: debuggable) file format to exchange neural network states. I know that eventually I will have to support ONNX, so I did not put too much effort into this temporary format; it is as simple as it can be.

Basically, I created a C# class with public members only, whose members fully describe the neural network in memory. Then I simply serialized this object out to XML.
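A minimal sketch of what this could look like (the type and member names mirror the XML sample below; the Save/Load helpers are my assumption of how the serialization is wired up, not the project's exact code):

[code lang="csharp"]
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

// One entry per layer; WeightMatrix is the flattened, row-major
// (inputs x neurons) weight matrix, BiasVector holds one bias per neuron.
public class SerializableLayer
{
    public string ActivationFunction;
    public int NumberOfNeurons;
    public float[] WeightMatrix;
    public float[] BiasVector;
}

// Public fields only, so XmlSerializer picks everything up automatically.
public class SerializableModel
{
    public int InputCount;
    public int OutputCount;
    public int CurrentIteration;
    public string AlgorithmType;
    public double NormalizationLowerBound;
    public double NormalizationUpperBound;
    public double LearnRate;
    public double Momentum;
    public double L2Lambda;
    public double UpdateIncrement;
    public double UpdateDecrement;
    public double UpdateMinimum;
    public double UpdateMaximum;
    public string DataFilePath;
    public List<SerializableLayer> Layers = new List<SerializableLayer>();

    public void Save(string path)
    {
        var serializer = new XmlSerializer(typeof(SerializableModel));
        using (var stream = File.Create(path))
            serializer.Serialize(stream, this);
    }

    public static SerializableModel Load(string path)
    {
        var serializer = new XmlSerializer(typeof(SerializableModel));
        using (var stream = File.OpenRead(path))
            return (SerializableModel)serializer.Deserialize(stream);
    }
}
[/code]

Here is a sample of a more or less trained neural network saved this way: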

[code lang="XML"]
<?xml version="1.0" encoding="utf-8"?>
<SerializableModel xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<InputCount>1</InputCount>
<OutputCount>5</OutputCount>
<CurrentIteration>815</CurrentIteration>
<AlgorithmType>DistributedResilientPropagation</AlgorithmType>
<NormalizationLowerBound>-0.9</NormalizationLowerBound>
<NormalizationUpperBound>0.9</NormalizationUpperBound>
<LearnRate>0.003</LearnRate>
<Momentum>0.6</Momentum>
<L2Lambda>0</L2Lambda>
<UpdateIncrement>1.2</UpdateIncrement>
<UpdateDecrement>0.5</UpdateDecrement>
<UpdateMinimum>1E-06</UpdateMinimum>
<UpdateMaximum>50</UpdateMaximum>
<DataFilePath>C:\Work\DeepTrainer\sqr_sqrt_1500_rows.csv</DataFilePath>
<Layers>
<SerializableLayer>
<ActivationFunction>HyperbolicTangent</ActivationFunction>
<NumberOfNeurons>2</NumberOfNeurons>
<WeightMatrix>
<float>-0.46445936</float>
<float>0.158896089</float>
</WeightMatrix>
<BiasVector>
<float>0.998260736</float>
<float>0.665373</float>
</BiasVector>
</SerializableLayer>
<SerializableLayer>
<ActivationFunction>HyperbolicTangent</ActivationFunction>
<NumberOfNeurons>4</NumberOfNeurons>
<WeightMatrix>
<float>0.8889234</float>
<float>0.446552753</float>
<float>-0.231376171</float>
<float>-0.658875942</float>
<float>0.5600853</float>
<float>0.453765035</float>
<float>-0.7266594</float>
<float>0.105713367</float>
</WeightMatrix>
<BiasVector>
<float>0.81123</float>
<float>0.735411942</float>
<float>0.8298373</float>
<float>0.670553</float>
</BiasVector>
</SerializableLayer>
<SerializableLayer>
<ActivationFunction>HyperbolicTangent</ActivationFunction>
<NumberOfNeurons>6</NumberOfNeurons>
<WeightMatrix>
<float>0.6724005</float>
<float>0.6453012</float>
<float>0.3831551</float>
<float>0.153504372</float>
<float>-0.254765451</float>
<float>-0.417338461</float>
<float>0.6646639</float>
<float>0.0828768</float>
<float>0.823134542</float>
<float>-0.3200357</float>
<float>-0.68506825</float>
<float>0.775844455</float>
<float>-0.8302995</float>
<float>-0.20960474</float>
<float>0.03197223</float>
<float>0.0343325734</float>
<float>-0.411417067</float>
<float>-0.724056244</float>
<float>-0.596864045</float>
<float>0.754843831</float>
<float>-0.05352485</float>
<float>0.379401445</float>
<float>0.7540021</float>
<float>-0.212930083</float>
</WeightMatrix>
<BiasVector>
<float>0.703373432</float>
<float>0.928407431</float>
<float>0.9830695</float>
<float>0.1383705</float>
<float>0.9695166</float>
<float>0.8683977</float>
</BiasVector>
</SerializableLayer>
<SerializableLayer>
<ActivationFunction>HyperbolicTangent</ActivationFunction>
<NumberOfNeurons>5</NumberOfNeurons>
<WeightMatrix>
<float>0.414223433</float>
<float>0.277490735</float>
<float>0.574996233</float>
<float>-0.11370784</float>
<float>-0.450186163</float>
<float>-0.760543168</float>
<float>-0.434872061</float>
<float>-0.7225019</float>
<float>0.8214344</float>
<float>0.648150444</float>
<float>0.159143448</float>
<float>-0.320741773</float>
<float>0.8389858</float>
<float>-0.5750697</float>
<float>-0.1781053</float>
<float>-0.378061831</float>
<float>-0.781340539</float>
<float>-0.0121865273</float>
<float>-0.528134644</float>
<float>-0.449782252</float>
<float>0.7626492</float>
<float>0.835651636</float>
<float>0.045940876</float>
<float>0.127770424</float>
<float>0.18719244</float>
<float>0.280113935</float>
<float>-0.2879598</float>
<float>-0.7027785</float>
<float>-0.5238668</float>
<float>0.158635974</float>
</WeightMatrix>
<BiasVector>
<float>0.4776032</float>
<float>0.896457</float>
<float>0.795412958</float>
<float>0.7235917</float>
<float>0.0705129355</float>
</BiasVector>
</SerializableLayer>
</Layers>
</SerializableModel>
[/code]

I think it is quite easy to read what is saved here. This is a neural network with 1 input, 5 outputs, and 2 neurons on the first hidden layer, 4 on the second, and 6 on the third; the final 5-neuron layer is the output layer. The weight matrix of each layer is vectorized in row-major order (which is why you see 1×2=2, 2×4=8, 4×6=24, and finally 6×5=30 weights), and the biases are stored as simple vectors.
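To make the row-major layout concrete: the weight connecting input i to neuron j of a layer sits at index i * NumberOfNeurons + j. Here is a minimal sketch of a forward pass through one layer built on that convention (Forward is a hypothetical helper, not part of the saved format or the project code):

[code lang="csharp"]
using System;

static class LayerMath
{
    // Evaluates one layer from its vectorized weights: the weight from
    // input i to neuron j is stored at index i * neurons + j (row-major).
    public static float[] Forward(SerializableLayer layer, float[] input)
    {
        int neurons = layer.NumberOfNeurons;
        var output = new float[neurons];
        for (int j = 0; j < neurons; j++)
        {
            float sum = layer.BiasVector[j];
            for (int i = 0; i < input.Length; i++)
                sum += input[i] * layer.WeightMatrix[i * neurons + j];
            // All layers in the sample use the HyperbolicTangent activation.
            output[j] = (float)Math.Tanh(sum);
        }
        return output;
    }
}
[/code]

Chaining Forward over the four layers above maps the single input through the 2-, 4-, 6-, and finally 5-neuron layers, which is exactly where the 2 + 8 + 24 + 30 stored weights come from.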
