horde-ad-0.2.0.0: Higher Order Reverse Derivatives Efficiently - Automatic Differentiation
Safe Haskell: None
Language: GHC2024

MnistFcnnRanked1

Description

Implementation of a fully connected neural network for classification of MNIST digits, with sized lists of rank 1 tensors (vectors) as the trainable parameters. It has two hidden layers and uses no mini-batches. This is an exotic and fundamentally inefficient way of implementing neural networks, but it's valuable for comparative benchmarking.

Documentation

type ADFcnnMnist1Parameters (target :: Target) (widthHidden :: Nat) (widthHidden2 :: Nat) r =
  ( ( ListR widthHidden (target (TKS '[SizeMnistGlyph] r))
    , target (TKS '[widthHidden] r) )
  , ( ListR widthHidden2 (target (TKS '[widthHidden] Float))
    , target (TKS '[widthHidden2] r) )
  , ( ListR SizeMnistLabel (target (TKS '[widthHidden2] r))
    , target (TKS '[SizeMnistLabel] r) ) ) Source #

The differentiable type of all trainable parameters of this neural network. Note that, as the type shows, the weight vectors of the second hidden layer are hardcoded to Float, regardless of r.
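
To make the nesting concrete, here is a plain-Haskell analogue of the layout, with lists of Doubles standing in for the shaped tensors (an illustrative sketch only; Vec and Params are hypothetical names and the Float weights of the second hidden layer are simplified to Double):

    -- Hypothetical, list-based analogue of ADFcnnMnist1Parameters.
    type Vec = [Double]

    type Params =
      ( ([Vec], Vec)  -- hidden layer 1: widthHidden rows of length SizeMnistGlyph, plus bias
      , ([Vec], Vec)  -- hidden layer 2: widthHidden2 rows of length widthHidden, plus bias
      , ([Vec], Vec)  -- output layer: SizeMnistLabel rows of length widthHidden2, plus bias
      )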

listMatmul1 :: forall target r (w1 :: Nat) (w2 :: Nat).
               (ADReady target, GoodScalar r, KnownNat w1)
            => target (TKS '[w1] r)
            -> ListR w2 (target (TKS '[w1] r))
            -> target (TKS '[w2] r) Source #

An ad hoc analogue of matrix-vector multiplication for a matrix represented as a list of (row) vectors.
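
On the list-based analogue above, the computation can be pictured as follows (listMatmul1Sketch is a hypothetical illustration, not the library's implementation):

    -- Each output component is the dot product of the input vector
    -- with one row of the matrix, which is stored as a list of rows.
    listMatmul1Sketch :: Num r => [r] -> [[r]] -> [r]
    listMatmul1Sketch v = map (\row -> sum (zipWith (*) v row))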

afcnnMnist1 :: forall target r (widthHidden :: Nat) (widthHidden2 :: Nat).
               (ADReady target, GoodScalar r, Differentiable r)
            => (forall (n :: Nat). KnownNat n => target (TKS '[n] r) -> target (TKS '[n] r))
            -> (target (TKS '[SizeMnistLabel] r) -> target (TKS '[SizeMnistLabel] r))
            -> SNat widthHidden
            -> SNat widthHidden2
            -> target (TKS '[SizeMnistGlyph] r)
            -> ADFcnnMnist1Parameters target widthHidden widthHidden2 r
            -> target (TKR 1 r) Source #

Fully connected neural network for the MNIST digit classification task. There are two hidden layers, both using the same activation function (the first function argument); the output layer uses a different activation function (the second function argument). The widths of the two hidden layers are widthHidden and widthHidden2, respectively.
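
A sketch of the forward pass, reusing the hypothetical Params and listMatmul1Sketch from the sketches above (affine and afcnnSketch are illustrative names; the real function operates on shaped tensors):

    -- One affine layer: matrix-vector product plus bias.
    affine :: Vec -> ([Vec], Vec) -> Vec
    affine x (rows, bias) = zipWith (+) (listMatmul1Sketch x rows) bias

    afcnnSketch :: (Vec -> Vec)  -- activation for both hidden layers
                -> (Vec -> Vec)  -- activation for the output layer
                -> Vec           -- input glyph of length SizeMnistGlyph
                -> Params
                -> Vec
    afcnnSketch actHidden actOut glyph (layer1, layer2, layer3) =
      let h1 = actHidden (affine glyph layer1)
          h2 = actHidden (affine h1 layer2)
      in actOut (affine h2 layer3)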

afcnnMnistLoss1 :: forall target r (widthHidden :: Nat) (widthHidden2 :: Nat).
                   (ADReady target, GoodScalar r, Differentiable r)
                => SNat widthHidden
                -> SNat widthHidden2
                -> (target (TKR 1 r), target (TKR 1 r))
                -> ADFcnnMnist1Parameters target widthHidden widthHidden2 r
                -> target ('TKScalar r) Source #

The neural network applied to concrete activation functions and composed with the appropriate loss function. The pair argument holds an input glyph and its target label.
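
On the same list-based analogue, the composition might look as follows; the concrete choices of a logistic hidden activation, a softmax output and cross-entropy loss are assumptions for illustration, not confirmed by this page:

    -- Hypothetical sketch: forward pass with fixed activations,
    -- followed by cross-entropy against a one-hot label.
    lossSketch :: (Vec, Vec) -> Params -> Double
    lossSketch (glyph, label) params =
      let logistic = map (\x -> 1 / (1 + exp (negate x)))
          softmax zs = let es = map exp zs in map (/ sum es) es
          probs = afcnnSketch logistic softmax glyph params
      in negate (sum (zipWith (\t p -> t * log p) label probs))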

afcnnMnistTest1 :: forall target (widthHidden :: Nat) (widthHidden2 :: Nat) r.
                   (target ~ Concrete, GoodScalar r, Differentiable r)
                => SNat widthHidden
                -> SNat widthHidden2
                -> [MnistDataLinearR r]
                -> ADFcnnMnist1Parameters target widthHidden widthHidden2 r
                -> r Source #

A function testing the neural network, given a test set of inputs and the trained parameters.
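
On the list-based analogue, such a test could compute classification accuracy as below (an assumption: this page does not state whether the result is the fraction correct or the error rate):

    -- Hypothetical sketch: the fraction of samples whose predicted
    -- class (argmax of the network output) matches the label's class.
    testSketch :: [(Vec, Vec)] -> Params -> Double
    testSketch testData params =
      let logistic = map (\x -> 1 / (1 + exp (negate x)))
          softmax zs = let es = map exp zs in map (/ sum es) es
          argmax xs = snd (maximum (zip xs [0 :: Int ..]))
          correct (glyph, label) =
            argmax (afcnnSketch logistic softmax glyph params) == argmax label
      in fromIntegral (length (filter correct testData))
           / fromIntegral (length testData)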