horde-ad-0.2.0.0: Higher Order Reverse Derivatives Efficiently - Automatic Differentiation
Safe Haskell: None
Language: GHC2024

MnistRnnShaped2

Description

Shaped tensor-based implementation of a recurrent neural network for the classification of MNIST digits. It sports two hidden layers.

Documentation

type ADRnnMnistParametersShaped (target :: Target) (sizeMnistHeight :: Nat) (width :: Nat) r = (LayerWeigthsRNNShaped target sizeMnistHeight width r, LayerWeigthsRNNShaped target width width r, (target (TKS '[SizeMnistLabel, width] r), target (TKS '[SizeMnistLabel] r))) Source #

The differentiable type of all trainable parameters of this neural network: the weights of the two recurrent layers followed by the weights and bias of the final fully connected layer. Shaped version, statically checking all dimension widths.

type LayerWeigthsRNNShaped (target :: Target) (in_width :: Nat) (out_width :: Nat) r = (target (TKS '[out_width, in_width] r), target (TKS '[out_width, out_width] r), target (TKS '[out_width] r)) Source #
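
The trainable parameters of a single recurrent layer, read off the three tuple components: the input weight matrix (of shape '[out_width, in_width]), the recurrent state weight matrix ('[out_width, out_width]) and the bias vector ('[out_width]).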

zeroStateS :: forall target (sh :: [Nat]) r a. (BaseTensor target, KnownShS sh, GoodScalar r) => (target (TKS sh r) -> a) -> a Source #
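
Applies a computation that expects an initial state tensor to the all-zeros tensor, i.e., runs it with the state initialized to zero (cf. rnnMnistZeroS below).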

unrollLastS :: forall target state c w r (n :: Nat) (sh :: [Nat]). (BaseTensor target, KnownNat n, KnownShS sh, GoodScalar r) => (state -> target (TKS sh r) -> w -> (c, state)) -> state -> target (TKS (n ': sh) r) -> w -> (c, state) Source #
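
Turns a single-step recurrent function into one that processes a whole sequence: the step function is applied in turn to each of the n outermost slices of the input tensor, threading the state through, and the result of the last step is returned together with the final state.

As a minimal list-based sketch of the same unrolling pattern (an analogue for illustration only, not the library's shaped-tensor code; unrollLast and its type variables are generic stand-ins):

    import Data.List (foldl')

    -- Apply the step function to each element of the (non-empty)
    -- sequence in turn, threading the state, and keep only the
    -- result of the last step.
    unrollLast :: (state -> input -> w -> (c, state))
               -> state -> [input] -> w -> (c, state)
    unrollLast step s0 (x : xs) w =
      foldl' (\(_, s) x' -> step s x' w) (step s0 x w) xs
    unrollLast _ _ [] _ = error "unrollLast: empty sequence"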

rnnMnistLayerS Source #

Arguments

:: forall target r (in_width :: Nat) (out_width :: Nat) (batch_size :: Nat). (ADReady target, GoodScalar r, Differentiable r) 
=> SNat in_width 
-> SNat out_width 
-> SNat batch_size

These boilerplate arguments tie the type parameters to the corresponding value parameters (the SNat arguments above) denoting the basic dimensions.

-> target (TKS '[out_width, batch_size] r) 
-> target (TKS '[in_width, batch_size] r) 
-> LayerWeigthsRNNShaped target in_width out_width r 
-> target (TKS '[out_width, batch_size] r) 

A single recurrent layer with the tanh activation function.
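
Reading off the argument types, the layer takes the previous state ('[out_width, batch_size]), the current input ('[in_width, batch_size]) and the layer weights (wX, wS, b), and presumably computes tanh (wX x + wS s + b) with the bias broadcast over the batch. A list-based sketch of that formula (an illustration under this assumption, not the library code; Matrix, matmul and rnnLayer are hypothetical names):

    import Data.List (transpose)

    type Matrix = [[Double]]  -- a matrix as a list of rows

    -- Naive matrix product.
    matmul :: Matrix -> Matrix -> Matrix
    matmul a b =
      [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]

    -- new state = tanh (wX x + wS s + b), bias broadcast over the batch
    rnnLayer :: Matrix                      -- state, out_width x batch_size
             -> Matrix                      -- input, in_width x batch_size
             -> (Matrix, Matrix, [Double])  -- (wX, wS, bias)
             -> Matrix                      -- new state
    rnnLayer s x (wX, wS, b) =
      [ map (tanh . (+ bi)) (zipWith (+) xr sr)
      | (xr, sr, bi) <- zip3 (matmul wX x) (matmul wS s) b ]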

rnnMnistTwoS :: forall target r (out_width :: Nat) (batch_size :: Nat) (sizeMnistH :: Nat). (ADReady target, GoodScalar r, Differentiable r) => SNat out_width -> SNat batch_size -> SNat sizeMnistH -> target (TKS '[2 * out_width, batch_size] r) -> PrimalOf target (TKS '[sizeMnistH, batch_size] r) -> (LayerWeigthsRNNShaped target sizeMnistH out_width r, LayerWeigthsRNNShaped target out_width out_width r) -> (target (TKS '[out_width, batch_size] r), target (TKS '[2 * out_width, batch_size] r)) Source #

Composition of two recurrent layers.
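
The 2 * out_width dimension in the signature suggests the states of the two layers travel packed into a single tensor. A generic sketch of the presumable data flow (an assumption, not the library code; the packed state is modelled as a pair and twoLayerStep is a hypothetical name):

    -- Split the packed state, run layer 1 on the input, feed its
    -- output to layer 2, and re-pack the two new states. For a
    -- recurrent layer of this kind, the output is the new state.
    twoLayerStep
      :: (a -> x -> w1 -> a)     -- first recurrent layer
      -> (b -> a -> w2 -> b)     -- second recurrent layer
      -> (a, b) -> x -> (w1, w2) -> (b, (a, b))
    twoLayerStep layer1 layer2 (s1, s2) x (w1, w2) =
      let s1' = layer1 s1 x w1
          s2' = layer2 s2 s1' w2
      in (s2', (s1', s2'))

A step of this shape is exactly what unrollLastS expects, which is presumably how the two layers get unrolled over the input sequence.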

rnnMnistZeroS :: forall target r (out_width :: Nat) (batch_size :: Nat) (sizeMnistH :: Nat) (sizeMnistW :: Nat). (ADReady target, GoodScalar r, Differentiable r) => SNat out_width -> SNat batch_size -> SNat sizeMnistH -> SNat sizeMnistW -> PrimalOf target (TKS '[sizeMnistW, sizeMnistH, batch_size] r) -> ADRnnMnistParametersShaped target sizeMnistH out_width r -> target (TKS '[SizeMnistLabel, batch_size] r) Source #

The two-layer recurrent neural network with its state initialized to zero and its result composed with a fully connected layer.
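
Schematically, under the assumptions above (a sketch of the composition only; rnnZeroSketch and its parameters are hypothetical):

    -- Zero state, unroll the recurrent network over the input slices,
    -- then apply the final fully connected layer to the last output.
    rnnZeroSketch
      :: s                          -- the all-zeros initial state
      -> (s -> [x] -> w -> (c, s))  -- the unrolled recurrent network
      -> (c -> out)                 -- the final fully connected layer
      -> [x] -> w -> out
    rnnZeroSketch s0 unrolled dense xs w =
      dense (fst (unrolled s0 xs w))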

rnnMnistLossFusedS :: forall target (h :: Nat) (w :: Nat) (out_width :: Nat) (batch_size :: Nat) r. (h ~ SizeMnistHeight, w ~ SizeMnistWidth, Differentiable r, ADReady target, ADReady (PrimalOf target), GoodScalar r) => SNat out_width -> SNat batch_size -> (PrimalOf target (TKS '[batch_size, h, w] r), PrimalOf target (TKS '[batch_size, SizeMnistLabel] r)) -> ADRnnMnistParametersShaped target h out_width r -> target ('TKScalar r) Source #

The neural network composed with the softmax cross-entropy loss function.
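
For reference, the unfused formula: for network outputs (logits) z and a target distribution t, the loss is -sum_i t_i * log (softmax z)_i. A scalar Haskell sketch (the fused library version presumably combines the two steps; softmax and crossEntropy here are plain illustrations, not the library's API):

    softmax :: [Double] -> [Double]
    softmax zs = map (/ s) es
      where
        m  = maximum zs                   -- subtract the max for stability
        es = map (\z -> exp (z - m)) zs
        s  = sum es

    crossEntropy :: [Double] -> [Double] -> Double
    crossEntropy target logits =
      negate (sum (zipWith (\t p -> t * log p) target (softmax logits)))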

rnnMnistTestS :: forall target (h :: Nat) (w :: Nat) (out_width :: Nat) (batch_size :: Nat) r. (h ~ SizeMnistHeight, w ~ SizeMnistWidth, target ~ Concrete, Differentiable r, GoodScalar r) => SNat out_width -> SNat batch_size -> MnistDataBatchS batch_size r -> ADRnnMnistParametersShaped target h out_width r -> r Source #

A function testing the neural network, given a testing set of inputs and the trained parameters.
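
Assuming the returned scalar is the usual MNIST test metric, i.e., the fraction of samples whose predicted class (the argmax of the network output) matches the label, a minimal sketch (accuracy and argmax are hypothetical names):

    import Data.List (maximumBy)
    import Data.Ord (comparing)

    -- Fraction of samples whose predicted class matches the true label.
    accuracy :: [([Double], Int)] -> Double
    accuracy samples =
      fromIntegral (length [ () | (out, lbl) <- samples, argmax out == lbl ])
        / fromIntegral (length samples)
      where
        argmax = fst . maximumBy (comparing snd) . zip [(0 :: Int) ..]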