This block models a two-layer neural network with one recurrent layer.
A recurrent neural network is composed of at least two NeuralNetworkLayer instances (HiddenLayer_... and OutputLayer_...). Each one is specified by the following parameters:
- numNeurons: the number of neurons composing the layer (also equal to the number of rows of the weight and bias tables, and to the number of outputs of the layer);
- numInputs: the number of inputs of the layer (also equal to the number of columns of the weight table);
- weightTable: the weight table of the layer ([Number of Neurons x Number of Inputs]);
- biasTable: the bias table of the layer ([Number of Neurons x 1]);
- NeuronActivationFunction: the activation function of the layer; it can be one of:
- NeuralNetwork.Types.ActivationFunction.TanSig = Hyperbolic tangent sigmoid activation function
- NeuralNetwork.Types.ActivationFunction.LogSig = Logarithmic sigmoid activation function
- NeuralNetwork.Types.ActivationFunction.RadBas = Radial basis activation function
- NeuralNetwork.Types.ActivationFunction.PureLin = Linear activation function
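For illustration, a layer declaration might look as follows. This is only a sketch with placeholder numbers: it assumes NeuralNetworkLayer is in scope (or fully qualified) and that the parameters are set by ordinary modifications:

  // Hedged sketch: declaring a hidden layer with the parameters listed above.
  // The instance name and all numeric values are placeholders.
  NeuralNetworkLayer HiddenLayer_Example(
    numNeurons = 3,    // 3 neurons: weightTable has 3 rows, the layer has 3 outputs
    numInputs = 2,     // 2 inputs: weightTable has 2 columns
    weightTable = [0.1, 0.2; 0.3, 0.4; 0.5, 0.6],   // [3 x 2]
    biasTable = [0.5; -0.2; 0.1],                   // [3 x 1]
    NeuronActivationFunction = NeuralNetwork.Types.ActivationFunction.TanSig);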
The network is called "recurrent" because the output of the hidden layer is fed back as an input to the same layer; this feedback obviously has to be delayed. Here this is done with the block NeuralNetwork.Utilities.UnitDelayMIMO, whose samplePeriod is a parameter of the network.
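Conceptually, such a block holds each of its input signals for one sample period. The following is a minimal sketch of a MIMO unit delay, not the library's actual implementation; it simply mirrors the structure of Modelica.Blocks.Discrete.UnitDelay, and the connector names are assumptions:

  block UnitDelaySketch "Conceptual MIMO unit delay (illustrative only)"
    parameter Real samplePeriod = 0.01 "Sample period in seconds";
    parameter Integer n = 5 "Number of delayed signals";
    Modelica.Blocks.Interfaces.RealInput u[n] "Signals to be delayed";
    Modelica.Blocks.Interfaces.RealOutput y[n] "Signals delayed by one sample";
  equation
    when sample(0, samplePeriod) then
      y = pre(u);  // at each sample instant, output the previous input values
    end when;
  initial equation
    y = zeros(n);  // output is zero until the first sample instant
  end UnitDelaySketch;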
The model is built so that the number of inputs specified for the recurrent layer must not include the recurrent inputs. For example, if the layer has 1 non-recurrent input and 5 neurons, then the total number of inputs reaching the layer is 6 (1 external plus 5 fed back), but the number of inputs to be entered (numInputs) has to be 1.
To obtain the weight and bias tables in the format Modelica expects, two different approaches can be used:
- using the extractData.m MATLAB script, located in the Utilities folder;
- using the DataFiles Dymola library.
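With the DataFiles approach, the tables could be read directly from a MATLAB .mat file. The sketch below assumes the DataFiles library provides a readMATmatrix(fileName, matrixName) function; the file name and matrix names are placeholders to adapt to your own data:

  // Hedged sketch: reading the tables of a 3-neuron, 2-input layer from a
  // .mat file. File name, matrix names, and the readMATmatrix function are
  // assumptions for illustration.
  parameter Real W1[3, 2] = DataFiles.readMATmatrix("nn_data.mat", "W1");
  parameter Real b1[3, 1] = DataFiles.readMATmatrix("nn_data.mat", "b1");

These parameter declarations could then be passed to the weightTable and biasTable modifications of the layer.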
Release Notes:
Generated at 2024-04-19T18:16:02Z
by OpenModelica 1.22.3 using GenerateDoc.mos