horde-ad-0.2.0.0: Higher Order Reverse Derivatives Efficiently - Automatic Differentiation
Safe Haskell: None
Language: GHC2024

MnistCnnRanked2

Description

Ranked tensor-based implementation of a Convolutional Neural Network for classification of MNIST digits. Sports two hidden layers.

With the current CPU backend it is slow enough that it's hard to tell whether it trains.

Synopsis

Documentation

type ADCnnMnistParametersShaped (target :: Target) (h :: Natural) (w :: Natural) (kh :: Natural) (kw :: Natural) (c_out :: Natural) (n_hidden :: Natural) r = ((target (TKS '[c_out, 1, kh + 1, kw + 1] r), target (TKS '[c_out] r)), (target (TKS '[c_out, c_out, kh + 1, kw + 1] r), target (TKS '[c_out] r)), (target (TKS '[n_hidden, (c_out * Div h 4) * Div w 4] r), target (TKS '[n_hidden] r)), (target (TKS '[SizeMnistLabel, n_hidden] r), target (TKS '[SizeMnistLabel] r))) Source #

The differentiable type of all trainable parameters of this neural network. Shaped version, with all dimension widths checked statically.

Because subtraction complicates positive-number type inference, kh denotes the kernel height minus one and, analogously, kw is the kernel width minus one.
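As a plain-Haskell sanity check of the shape arithmetic encoded in the type above (the helper names kernelSide and denseInputWidth are illustrative, not part of horde-ad): a kh of 4 encodes a 5x5 kernel, and after two maxPool-by-2 layers each spatial dimension has shrunk by a factor of 4, which determines the width of the input to the dense hidden layer.

```haskell
-- Illustrative helpers (not horde-ad API) mirroring the shapes in
-- ADCnnMnistParametersShaped.

-- Kernel sides are stored as kh + 1 and kw + 1, so kh = 4 means a 5x5 kernel.
kernelSide :: Int -> Int
kernelSide kh = kh + 1

-- Width of the input to the dense hidden layer: c_out * Div h 4 * Div w 4,
-- because two maxPool-by-2 layers shrink each spatial dimension by 4.
denseInputWidth :: Int -> Int -> Int -> Int
denseInputWidth c_out h w = c_out * (h `div` 4) * (w `div` 4)

main :: IO ()
main = do
  print (kernelSide 4)              -- a 5x5 kernel
  print (denseInputWidth 16 28 28)  -- 16 channels on 28x28 MNIST images
```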

type ADCnnMnistParameters (target :: Target) r = ((target (TKR 4 r), target (TKR 1 r)), (target (TKR 4 r), target (TKR 1 r)), (target (TKR 2 r), target (TKR 1 r)), (target (TKR 2 r), target (TKR 1 r))) Source #

The differentiable type of all trainable parameters of this neural network. Ranked version.

convMnistLayerR Source #

Arguments

:: (ADReady target, GoodScalar r, Differentiable r) 
=> target (TKR 4 r)
[c_out, c_in, kh + 1, kw + 1]
-> target (TKR 4 r)
[batch_size, c_in, h, w]
-> target (TKR 1 r)
[c_out]
-> target (TKR 4 r)
[batch_size, c_out, h `Div` 2, w `Div` 2]

A single convolutional layer with relu and maxPool.
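The shape bookkeeping of a single layer can be sketched independently of the library; convLayerShape below is a hypothetical helper that just propagates the shapes documented above (relu is shape-preserving and the maxPool-by-2 halves each spatial dimension):

```haskell
-- Hypothetical shape propagation for one conv + relu + maxPool layer
-- (not horde-ad API): the convolution changes c_in to c_out, and the
-- 2x2 maxPool halves h and w.
convLayerShape :: (Int, Int, Int, Int)  -- [batch_size, c_in, h, w]
               -> Int                   -- c_out
               -> (Int, Int, Int, Int)  -- [batch_size, c_out, h/2, w/2]
convLayerShape (batch, _cIn, h, w) cOut =
  (batch, cOut, h `div` 2, w `div` 2)

main :: IO ()
main = print (convLayerShape (64, 1, 28, 28) 16)
```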

convMnistTwoR Source #

Arguments

:: (ADReady target, GoodScalar r, Differentiable r) 
=> Int 
-> Int 
-> Int 
-> PrimalOf target (TKR 4 r)
input images [batch_size, 1, SizeMnistHeight, SizeMnistWidth]
-> ADCnnMnistParameters target r
parameters
-> target (TKR 2 r)
output classification [SizeMnistLabel, batch_size]

Composition of two convolutional layers, followed by a dense hidden layer and the output layer.
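Tracing the shapes through both convolutional layers (a sketch, assuming both layers use the same c_out, as in ADCnnMnistParametersShaped): a 28x28 MNIST image ends up as c_out feature maps of size 7x7, whose flattened width c_out * 7 * 7 is exactly what the dense hidden layer expects.

```haskell
-- Hypothetical shape trace for two conv layers, each halving h and w
-- (not horde-ad API).
twoLayerShape :: (Int, Int, Int, Int) -> Int -> (Int, Int, Int, Int)
twoLayerShape (b, c, h, w) cOut =
  let step (b', _c', h', w') = (b', cOut, h' `div` 2, w' `div` 2)
  in step (step (b, c, h, w))

main :: IO ()
main = print (twoLayerShape (64, 1, 28, 28) 16)  -- two halvings: 28 -> 14 -> 7
```

With c_out = 16 this yields 16 maps of 7x7, i.e. a flattened width of 784 for the dense layer, matching c_out * Div h 4 * Div w 4 in the parameter type.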

convMnistLossFusedR Source #

Arguments

:: (ADReady target, ADReady (PrimalOf target), GoodScalar r, Differentiable r) 
=> Int
batch_size
-> (PrimalOf target (TKR 3 r), PrimalOf target (TKR 2 r))
[batch_size, SizeMnistLabel]
-> ADCnnMnistParameters target r 
-> target ('TKScalar r) 

The neural network composed with the SoftMax-CrossEntropy loss function.
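For readers unfamiliar with the fused loss, here is a minimal, library-independent sketch of softmax followed by cross-entropy on plain lists (the actual horde-ad implementation works on tensors and fuses the two; the max-subtraction below is the standard numerical-stability trick):

```haskell
-- Library-independent sketch of SoftMax-CrossEntropy (not horde-ad API).
softMax :: [Double] -> [Double]
softMax xs =
  let m  = maximum xs                    -- subtract the max for stability
      es = map (\x -> exp (x - m)) xs
  in map (/ sum es) es

crossEntropy :: [Double] -> [Double] -> Double
crossEntropy labels probs =
  negate (sum (zipWith (\l p -> l * log p) labels probs))

-- The fused loss: cross-entropy of the one-hot labels against the
-- softmax of the network's raw output.
lossFused :: [Double] -> [Double] -> Double
lossFused labels logits = crossEntropy labels (softMax logits)

main :: IO ()
main = print (lossFused (1 : replicate 9 0) (replicate 10 0))
```

With uniform logits over 10 classes every probability is 0.1, so the loss is log 10 regardless of which class is the true one.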

convMnistTestR :: forall (target :: TK -> Type) r. (target ~ Concrete, GoodScalar r, Differentiable r) => Int -> MnistDataBatchR r -> ADCnnMnistParameters Concrete r -> r Source #

A function testing the neural network, given a testing set of inputs and the trained parameters.
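A sketch of what such a scoring function typically computes (argmax and accuracy are illustrative helpers, not horde-ad API): the predicted class of each image is the index of the largest entry in the network's output, and the score is the fraction of predictions that match the labels.

```haskell
-- Illustrative test-set scoring helpers (not horde-ad API).

-- Index of the largest entry; ties go to the later index.
argmax :: [Double] -> Int
argmax xs = snd (maximum (zip xs [0 ..]))

-- Fraction of predicted classes matching the expected ones.
accuracy :: [Int] -> [Int] -> Double
accuracy expected predicted =
  fromIntegral (length (filter id (zipWith (==) expected predicted)))
    / fromIntegral (length expected)

main :: IO ()
main = do
  print (argmax [0.1, 0.7, 0.2])       -- class 1 has the largest score
  print (accuracy [0, 1, 2] [0, 1, 1])
```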