14 Layer functions
These functions, provided by Malt, can be used either as individual target functions (see Loss Functions) or combined with each other to produce new target functions. They are collectively called layer functions because some of them are used to construct deep neural networks out of layers.
14.1 Single layer functions
These functions accept an input tensor and a θ, and return an output tensor. They are all of the type target-fn? (see Loss Functions).
Implements the linear combination given by
(+ (* (ref θ 0) t) (ref θ 1))
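As a concrete illustration of the curried shape these layer functions share, here is a hypothetical plain-Racket sketch of the formula above, using scalars and list-ref in place of tensors and Malt's ref; it is not the library's own definition.

#lang racket
;; Hypothetical sketch: the layer takes the input t and returns a
;; function that consumes θ, a list whose first two entries are the
;; slope and the intercept.
(define line-sketch
  (λ (t)
    (λ (θ)
      (+ (* (list-ref θ 0) t) (list-ref θ 1)))))

;; ((line-sketch 2.0) (list 3.0 1.0))  =>  7.0, i.e. 3.0 * 2.0 + 1.0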
Implements the polynomial combination given by
(+ (* (ref θ 0) (sqr t)) (* (ref θ 1) t) (ref θ 2))
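A hypothetical plain-Racket sketch of the same shape, again with scalars and list-ref standing in for tensors and ref; θ now holds three coefficients.

#lang racket
(require racket/math)  ; for sqr

;; Hypothetical sketch: θ holds the coefficients of t^2, t, and the
;; constant term.
(define quad-sketch
  (λ (t)
    (λ (θ)
      (+ (* (list-ref θ 0) (sqr t))
         (* (list-ref θ 1) t)
         (list-ref θ 2)))))

;; ((quad-sketch 2.0) (list 1.0 3.0 1.0))  =>  11.0, i.e. 1*4 + 3*2 + 1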
Implements the linear combination (a dot product plus a bias) given by
(+ (dot-product (ref θ 0) t) (ref θ 1))
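A hypothetical plain-Racket sketch of this form, with lists of numbers standing in for rank-1 tensors and a small helper standing in for Malt's dot-product.

#lang racket
;; Hypothetical helper standing in for Malt's dot-product on rank-1 tensors.
(define (dot-product-sketch u v)
  (for/sum ([a (in-list u)] [b (in-list v)]) (* a b)))

(define plane-sketch
  (λ (t)
    (λ (θ)
      (+ (dot-product-sketch (list-ref θ 0) t) (list-ref θ 1)))))

;; ((plane-sketch (list 3.0 4.0)) (list (list 1.0 2.0) 0.5))
;;   =>  11.5, i.e. 1*3 + 2*4 + 0.5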
Implements the softmax function given by
(let ((z (- t (max t)))) (let ((expz (exp z))) (/ expz (sum expz))))
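Subtracting (max t) first does not change the result (the common factor cancels between numerator and denominator) but keeps exp from overflowing on large inputs. A hypothetical plain-Racket sketch on a list of numbers:

#lang racket
;; Hypothetical sketch of the stabilized softmax on a list of numbers.
(define (softmax-sketch xs)
  (let* ([z (map (λ (x) (- x (apply max xs))) xs)]
         [expz (map exp z)]
         [total (apply + expz)])
    (map (λ (e) (/ e total)) expz)))

;; (softmax-sketch (list 1.0 2.0 3.0))
;;   =>  approximately '(0.090 0.245 0.665); the entries sum to 1.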
Implements the ReLU function given by
(rectify ((linear t) theta))
Implements the biased correlation function given by
(+ (correlate (ref θ 0) t) (ref θ 1))
Implements the rectified 1D-convolution function given by
(rectify ((corr t) theta))
Implements the averaging of the rank 1 elements of a tensor given by
(let ((num-segments (ref (refr (shape t) (- (rank t) 2)) 0))) (/ (sum-cols t) num-segments))
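For intuition about the indexing, here is a hypothetical plain-Racket illustration that works on a shape list directly, with list-tail and list-ref standing in for Malt's refr and ref; the reading of the dimensions is an assumption made for the example.

#lang racket
;; Hypothetical example: a tensor of shape (list 100 6 32), read here as
;; 100 signals, each made of 6 rank-1 segments of 32 values.
(define shape-of-t (list 100 6 32))
(define rank-of-t (length shape-of-t))

;; Dropping (- rank 2) leading dimensions leaves (list 6 32); its first
;; element, 6, is num-segments, the divisor applied to (sum-cols t).
(list-ref (list-tail shape-of-t (- rank-of-t 2)) 0)  ; => 6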
14.2 Deep layer functions
These functions create a stacked composition of layer functions, given the depth k of the composition. The resulting composition is itself a layer function.
Implements a composition of k ReLU functions. Can be used to
implement a neural network exclusively made up of dense layers.
Implements a composition of k rectified 1D-convolution (recu) functions. Can be used to
implement a neural network exclusively made up of recu layers.
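Both compositions follow the same stacking pattern: each layer consumes the leading entries of θ, and the remaining entries are passed on to the next layer. A hypothetical plain-Racket sketch of that pattern for the ReLU case, with scalars, list-ref, and list-tail standing in for tensors, ref, and refr; it is not necessarily Malt's own definition.

#lang racket
;; Hypothetical scalar relu layer: a linear step followed by clamping
;; negative values to zero.
(define relu-sketch
  (λ (t)
    (λ (θ)
      (max 0.0 (+ (* (list-ref θ 0) t) (list-ref θ 1))))))

;; Hypothetical k-deep composition: each layer uses the first two
;; entries of θ; list-tail drops them before recurring.
(define (k-relu-sketch k)
  (λ (t)
    (λ (θ)
      (cond
        [(zero? k) t]
        [else (((k-relu-sketch (sub1 k)) ((relu-sketch t) θ))
               (list-tail θ 2))]))))

;; Two stacked layers, each with a weight and a bias:
;; (((k-relu-sketch 2) 1.0) (list 2.0 0.5 -1.0 3.0))
;;   layer 1: max(0, 2.0*1.0 + 0.5)  =  2.5
;;   layer 2: max(0, -1.0*2.5 + 3.0) =  0.5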