Safe Haskell | None |
---|---|
Language | GHC2024 |
MnistCnnRanked2
Description
Ranked tensor-based implementation of a Convolutional Neural Network for classification of MNIST digits. Sports two hidden layers.
With the current CPU backend it's slow enough that it's hard to see if it trains.
Synopsis
- type ADCnnMnistParametersShaped (target :: Target) (h :: Natural) (w :: Natural) (kh :: Natural) (kw :: Natural) (c_out :: Natural) (n_hidden :: Natural) r = ((target (TKS '[c_out, 1, kh + 1, kw + 1] r), target (TKS '[c_out] r)), (target (TKS '[c_out, c_out, kh + 1, kw + 1] r), target (TKS '[c_out] r)), (target (TKS '[n_hidden, (c_out * Div h 4) * Div w 4] r), target (TKS '[n_hidden] r)), (target (TKS '[SizeMnistLabel, n_hidden] r), target (TKS '[SizeMnistLabel] r)))
- type ADCnnMnistParameters (target :: Target) r = ((target (TKR 4 r), target (TKR 1 r)), (target (TKR 4 r), target (TKR 1 r)), (target (TKR 2 r), target (TKR 1 r)), (target (TKR 2 r), target (TKR 1 r)))
- convMnistLayerR :: (ADReady target, GoodScalar r, Differentiable r) => target (TKR 4 r) -> target (TKR 4 r) -> target (TKR 1 r) -> target (TKR 4 r)
- convMnistTwoR :: (ADReady target, GoodScalar r, Differentiable r) => Int -> Int -> Int -> PrimalOf target (TKR 4 r) -> ADCnnMnistParameters target r -> target (TKR 2 r)
- convMnistLossFusedR :: (ADReady target, ADReady (PrimalOf target), GoodScalar r, Differentiable r) => Int -> (PrimalOf target (TKR 3 r), PrimalOf target (TKR 2 r)) -> ADCnnMnistParameters target r -> target ('TKScalar r)
- convMnistTestR :: forall (target :: TK -> Type) r. (target ~ Concrete, GoodScalar r, Differentiable r) => Int -> MnistDataBatchR r -> ADCnnMnistParameters Concrete r -> r
Documentation
type ADCnnMnistParametersShaped (target :: Target) (h :: Natural) (w :: Natural) (kh :: Natural) (kw :: Natural) (c_out :: Natural) (n_hidden :: Natural) r = ((target (TKS '[c_out, 1, kh + 1, kw + 1] r), target (TKS '[c_out] r)), (target (TKS '[c_out, c_out, kh + 1, kw + 1] r), target (TKS '[c_out] r)), (target (TKS '[n_hidden, (c_out * Div h 4) * Div w 4] r), target (TKS '[n_hidden] r)), (target (TKS '[SizeMnistLabel, n_hidden] r), target (TKS '[SizeMnistLabel] r))) Source #
The differentiable type of all trainable parameters of this nn. Shaped version, statically checking all dimension widths.
Due to subtraction complicating positive-number type inference, kh denotes the kernel height minus one and, analogously, kw is the kernel width minus one.
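As a concrete illustration (plain Haskell, independent of the library's tensor types): a 5×5 kernel is encoded as kh = kw = 4, so the type-level shape `kh + 1` recovers 5, and the `Div h 4` factors arise because each of the two conv+maxPool layers halves the spatial dimensions:

```haskell
-- The flattened feature-vector length fed to the first dense layer,
-- matching the (c_out * Div h 4) * Div w 4 factor in the shaped type:
-- two 2x2 max-poolings divide h and w by 4 (integer division, as Div does).
flattenedSize :: Int -> Int -> Int -> Int
flattenedSize cOut h w = cOut * (h `div` 4) * (w `div` 4)

main :: IO ()
main = print (flattenedSize 4 28 28)  -- 4 channels on 28x28 MNIST input: 196
```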
type ADCnnMnistParameters (target :: Target) r = ((target (TKR 4 r), target (TKR 1 r)), (target (TKR 4 r), target (TKR 1 r)), (target (TKR 2 r), target (TKR 1 r)), (target (TKR 2 r), target (TKR 1 r))) Source #
The differentiable type of all trainable parameters of this nn. Ranked version.
convMnistLayerR Source #
Arguments
:: (ADReady target, GoodScalar r, Differentiable r) | |
=> target (TKR 4 r) | [c_out, c_in, kh + 1, kw + 1] |
-> target (TKR 4 r) | [batch_size, c_in, h, w] |
-> target (TKR 1 r) | [c_out] |
-> target (TKR 4 r) | [batch_size, c_out, h `Div` 2, w `Div` 2] |
A single convolutional layer with relu and maxPool.
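A minimal value-level sketch of the relu-and-maxPool part of the layer on a plain list-of-lists "image" (the convolution itself is omitted; independent of the library's tensor types). As the shape comments above indicate, 2×2 pooling halves each spatial dimension:

```haskell
-- Split a list into consecutive non-overlapping pairs, dropping a leftover.
pairs :: [a] -> [(a, a)]
pairs (x:y:rest) = (x, y) : pairs rest
pairs _          = []

relu :: Double -> Double
relu = max 0

-- Apply relu, then take the maximum of each 2x2 block, halving h and w.
reluMaxPool2 :: [[Double]] -> [[Double]]
reluMaxPool2 rows =
  [ [ maximum [relu a, relu b, relu c, relu d]
    | ((a, b), (c, d)) <- zip (pairs r1) (pairs r2) ]
  | (r1, r2) <- pairs rows ]

main :: IO ()
main = print (reluMaxPool2 [[1, -2], [-3, 4]])  -- [[4.0]]
```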
convMnistTwoR Source #
Arguments
:: (ADReady target, GoodScalar r, Differentiable r) | |
=> Int | |
-> Int | |
-> Int | |
-> PrimalOf target (TKR 4 r) | input images |
-> ADCnnMnistParameters target r | parameters |
-> target (TKR 2 r) | output classification |
Composition of two convolutional layers.
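A hypothetical shape trace through the two-layer composition (a stand-alone sketch; the batch-major ordering of the output is an assumption, and SizeMnistLabel is 10 for MNIST):

```haskell
-- Shapes after each stage, for input [batch, channels, h, w]:
-- two conv+maxPool layers, then flatten, a hidden dense layer,
-- and a final 10-way classification layer.
shapeTrace :: Int -> Int -> [Int] -> [[Int]]
shapeTrace cOut nHidden [batch, _c, h, w] =
  [ [batch, cOut, h `div` 2, w `div` 2]        -- after first conv+maxPool
  , [batch, cOut, h `div` 4, w `div` 4]        -- after second conv+maxPool
  , [batch, cOut * (h `div` 4) * (w `div` 4)]  -- flattened features
  , [batch, nHidden]                           -- hidden dense layer
  , [batch, 10]                                -- classification (assumed order)
  ]
shapeTrace _ _ s = error ("expected a rank-4 shape, got " ++ show s)

main :: IO ()
main = mapM_ print (shapeTrace 4 16 [8, 1, 28, 28])
```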
convMnistLossFusedR Source #
Arguments
:: (ADReady target, ADReady (PrimalOf target), GoodScalar r, Differentiable r) | |
=> Int | batch_size |
-> (PrimalOf target (TKR 3 r), PrimalOf target (TKR 2 r)) | data batch: (images, labels); labels of shape [batch_size, SizeMnistLabel] |
-> ADCnnMnistParameters target r | |
-> target ('TKScalar r) |
The neural network composed with the SoftMax-CrossEntropy loss function.
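The library's actual implementation fuses this loss with the network; as a stand-alone illustration of the loss itself, here is a minimal, numerically stable softmax-cross-entropy on plain lists:

```haskell
-- Cross-entropy between a one-hot target and the softmax of the logits,
-- computed via the log-sum-exp trick to avoid overflow:
--   loss = sum_i t_i * (logSumExp(z) - z_i)
softmaxCrossEntropy :: [Double] -> [Double] -> Double
softmaxCrossEntropy target logits =
  let m   = maximum logits
      lse = m + log (sum [exp (z - m) | z <- logits])  -- log-sum-exp
  in sum [t * (lse - z) | (t, z) <- zip target logits]

main :: IO ()
main = print (softmaxCrossEntropy [0, 1, 0] [1, 1, 1])
  -- uniform logits over 3 classes give loss log 3
```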
convMnistTestR :: forall (target :: TK -> Type) r. (target ~ Concrete, GoodScalar r, Differentiable r) => Int -> MnistDataBatchR r -> ADCnnMnistParameters Concrete r -> r Source #
A function testing the neural network, given a testing set of inputs and the trained parameters.
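A stand-alone sketch of the usual MNIST test metric such a function computes: the fraction of samples whose argmax prediction matches the label (hypothetical helpers, independent of the library):

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Index of the largest score in a prediction vector.
argmax :: [Double] -> Int
argmax xs = fst (maximumBy (comparing snd) (zip [0 ..] xs))

-- Fraction of predictions whose argmax equals the true class index.
accuracy :: [[Double]] -> [Int] -> Double
accuracy preds labels =
  let hits = length [() | (p, l) <- zip preds labels, argmax p == l]
  in fromIntegral hits / fromIntegral (length labels)

main :: IO ()
main = print (accuracy [[0.1, 0.9], [0.8, 0.2]] [1, 1])  -- 0.5
```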