| Safe Haskell | Safe-Inferred |
|---|---|
| Language | Haskell2010 |
Synapse.NN.Layers.Layer
Description
This module provides the necessary abstraction over layers of neural networks.
The AbstractLayer typeclass defines the interface of all layers of a neural network model.
Its implementations are probably the most low-level abstractions in the Synapse library.
Notes on how to implement that typeclass correctly are in its docs.
Layer is the existential datatype that wraps any AbstractLayer instance.
It is the building block of any neural network.
Synopsis
- class AbstractLayer l where
- inputSize :: l a -> Maybe Int
- outputSize :: l a -> Maybe Int
- nParameters :: l a -> Int
- getParameters :: SymbolIdentifier -> l a -> [SymbolMat a]
- updateParameters :: l a -> [Mat a] -> l a
- symbolicForward :: (Symbolic a, Floating a, Ord a) => SymbolIdentifier -> SymbolMat a -> l a -> (SymbolMat a, SymbolMat a)
- forward :: (AbstractLayer l, Symbolic a, Floating a, Ord a) => Mat a -> l a -> Mat a
- data Layer a = forall l. AbstractLayer l => Layer (l a)
- type LayerConfiguration l = Int -> l
AbstractLayer typeclass
class AbstractLayer l where Source #
The AbstractLayer typeclass defines the basic interface of all layers of a neural network model.
Every layer should be able to pass a SymbolMat through itself to produce a new SymbolMat
(make a prediction based on its parameters) using the symbolicForward function,
which allows gradients to be calculated after predictions and thus makes training possible.
The nParameters, getParameters and updateParameters functions allow training of the layer's parameters.
Their implementations should agree: getParameters should return a list of length nParameters,
and updateParameters should expect a list of the same length, with the matrices in the same order as they were returned by getParameters.
Synapse manages gradients and parameters for layers with erased type information using a prefix system.
A SymbolIdentifier is a prefix for the names of the symbolic parameters used in a calculation.
Every parameter that is used must have a unique name to be recognised by the autograd:
the name must start with the given prefix and end with the numerical index of that parameter.
For example, the 3rd layer with 2 parameters (weights and bias) should
name its weights symbol "ml3w1" and its bias symbol "ml3w2" (the "ml3w" prefix will be supplied).
Important: a correct implementation of this typeclass is essential for the neural network and its training to work (it is the 'heart' of the Synapse library); read the docs thoroughly to ensure that all the invariants are met.
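To make the invariants concrete, here is a minimal sketch of what an instance might look like for a hypothetical element-wise scaling layer with a single trainable matrix. The helper symbol (naming a Mat as a SymbolMat), building a SymbolIdentifier from a string literal and appending it with <>, the elementwise Num instance of SymbolMat, and zeroRegularizer are all assumptions made purely for illustration; they are not exports of this module.

```haskell
-- Hypothetical layer with exactly one trainable parameter matrix (a sketch).
newtype Scale a = Scale (Mat a)

instance AbstractLayer Scale where
    inputSize  _ = Nothing   -- Nothing: the layer accepts any input size
    outputSize _ = Nothing   -- output size mirrors the input size

    nParameters _ = 1        -- one parameter matrix

    -- The single parameter gets index 1, so its full name is prefix <> "1".
    -- 'symbol' and the string-literal SymbolIdentifier are assumed helpers.
    getParameters prefix (Scale s) = [symbol (prefix <> "1") s]

    -- Same length and order as getParameters: one matrix, the new scale.
    updateParameters _ [s'] = Scale s'
    updateParameters l _    = l

    -- Elementwise scaling of the input; the second component is the
    -- regularizer term, assumed to be a zero matrix in this sketch.
    symbolicForward prefix input (Scale s) =
        (input * symbol (prefix <> "1") s, zeroRegularizer)
```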
Methods
inputSize :: l a -> Maybe Int Source #
Returns the size of the input that is supported by the forward and symbolicForward functions. Nothing means size independence (activation functions are an example).
outputSize :: l a -> Maybe Int Source #
Returns the size of the output that is supported by the forward and symbolicForward functions. Nothing means size independence (activation functions are an example).
nParameters :: l a -> Int Source #
Returns the number of parameters of this layer.
getParameters :: SymbolIdentifier -> l a -> [SymbolMat a] Source #
Returns a list of all parameters; they must be in the exact same order as they are named by the layer (see the symbolicForward docs).
updateParameters :: l a -> [Mat a] -> l a Source #
Updates parameters based on the supplied list (the length of that list, and the order and form of the parameters, are exactly the same as in getParameters).
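As a sketch, the contract between nParameters, getParameters and updateParameters can be stated as the following properties; toMat, which would extract the plain matrix from a SymbolMat, is an assumed helper and not an export of this module.

```haskell
-- The number of returned parameters matches nParameters.
prop_parameterCount :: AbstractLayer l => SymbolIdentifier -> l a -> Bool
prop_parameterCount prefix layer =
    length (getParameters prefix layer) == nParameters layer

-- Feeding the layer's own parameters back, in the same order, should leave it
-- unchanged. 'toMat :: SymbolMat a -> Mat a' is an assumed helper.
roundTrip :: AbstractLayer l => SymbolIdentifier -> l a -> l a
roundTrip prefix layer =
    updateParameters layer (map toMat (getParameters prefix layer))
-- roundTrip prefix layer is expected to behave the same as layer itself.
```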
symbolicForward :: (Symbolic a, Floating a, Ord a) => SymbolIdentifier -> SymbolMat a -> l a -> (SymbolMat a, SymbolMat a) Source #
Passes a symbolic matrix through the layer to produce a new symbolic matrix, while retaining the gradients graph. The second matrix is the result of applying the regularizer to the layer.
forward :: (AbstractLayer l, Symbolic a, Floating a, Ord a) => Mat a -> l a -> Mat a Source #
Passes a matrix through the layer to produce a new matrix.
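A small usage sketch contrasting the two entry points; liftToSymbol, which would lift a plain Mat into a SymbolMat, is an assumed helper used only for illustration.

```haskell
-- forward: plain prediction. symbolicForward: prediction plus the regularizer
-- term, with the gradient graph kept so that training can differentiate it.
-- 'liftToSymbol :: Mat a -> SymbolMat a' is an assumed helper.
runBoth :: (AbstractLayer l, Symbolic a, Floating a, Ord a)
        => SymbolIdentifier -> Mat a -> l a -> (Mat a, (SymbolMat a, SymbolMat a))
runBoth prefix input layer =
    ( forward input layer
    , symbolicForward prefix (liftToSymbol input) layer
    )
```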
Layer existential datatype
data Layer a Source #
The Layer existential datatype wraps anything that implements AbstractLayer.
Constructors
| forall l. AbstractLayer l => Layer (l a) |
Instances
| AbstractLayer Layer Source # | |
Defined in Synapse.NN.Layers.Layer
Methods
inputSize :: Layer a -> Maybe Int Source #
outputSize :: Layer a -> Maybe Int Source #
nParameters :: Layer a -> Int Source #
getParameters :: SymbolIdentifier -> Layer a -> [SymbolMat a] Source #
updateParameters :: Layer a -> [Mat a] -> Layer a Source #
symbolicForward :: (Symbolic a, Floating a, Ord a) => SymbolIdentifier -> SymbolMat a -> Layer a -> (SymbolMat a, SymbolMat a) Source # | |
| type DType (Layer a) Source # | |
Defined in Synapse.NN.Layers.Layer | |
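Because Layer erases the concrete layer type, layers of different kinds can be stored in one list and driven uniformly through the AbstractLayer interface, which is what a sequential model needs. A sketch, where denseLayer and reluLayer are hypothetical layer constructors rather than exports of this module:

```haskell
-- A heterogeneous stack of layers behind the common AbstractLayer interface.
-- 'denseLayer' and 'reluLayer' are hypothetical constructors used only here.
network :: [Layer Double]
network =
    [ Layer (denseLayer 4 8)  -- a parameterised layer (4 inputs, 8 outputs)
    , Layer reluLayer         -- a size-independent activation layer
    ]

-- Layer is itself an AbstractLayer, so a whole stack folds through forward.
runStack :: (Symbolic a, Floating a, Ord a) => Mat a -> [Layer a] -> Mat a
runStack = foldl forward
```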
LayerConfiguration type alias
type LayerConfiguration l Source #
Arguments
| = Int | Output size of previous layer. |
| -> l | Resulting layer. |
The LayerConfiguration type alias represents functions that build a layer given the output size of the previous layer.
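A sketch of how a configuration might be defined and chained; Dense and denseLayer :: Int -> Int -> Dense a (input size first, output size second) are hypothetical names used only to illustrate the shape of the type.

```haskell
-- Fix the output size now, and wait for the previous layer's output size.
-- 'Dense' and 'denseLayer' are hypothetical stand-ins for a concrete layer type.
denseConfig :: Int -> LayerConfiguration (Dense a)
denseConfig outSize = \prevSize -> denseLayer prevSize outSize

-- Chaining configurations: every layer receives the size produced by the
-- layer before it.
buildTwo :: Int -> (Dense a, Dense a)
buildTwo inputSize =
    let hidden1 = denseConfig 16 inputSize  -- inputSize -> 16
        hidden2 = denseConfig 8 16          -- 16 -> 8
    in  (hidden1, hidden2)
```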