SLAYER module

class slayerSNN.slayer.spikeLayer(neuronDesc, simulationDesc, fullRefKernel=False)[source]

This class defines the main engine of SLAYER. It provides the necessary functions for describing an SNN layer. The input-to-output connection can be fully-connected, convolutional, or aggregation (pool). It also defines the psp operation and the spiking mechanism of a spiking neuron in the layer.

Important: It assumes that all the tensors being processed are five-dimensional, in (Batch, Channels, Height, Width, Time) or NCHWT format. The user must make sure that an input of the correct dimension is supplied.

If the layer does not have a spatial dimension, the neurons can be distributed along the Channel, Height, or Width dimension, where Channel * Height * Width equals the number of neurons. It is recommended (for speed reasons) to define the neurons in the Channel dimension and make the Height and Width dimensions one, as sketched below.
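
For example, a minimal reshape sketch for a flat population of 512 neurons (the names flatSpikes, batch, and nTimeBins are illustrative, not part of the API):

>>> # (batch, neurons, time) -> NCHWT with the neurons in the Channel dimension
>>> input = flatSpikes.reshape((batch, 512, 1, 1, nTimeBins))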

Arguments:
  • neuronDesc (slayerParams.yamlParams): spiking neuron descriptor.
    neuron:
        type:     SRMALPHA  # neuron type
        theta:    10    # neuron threshold
        tauSr:    10.0  # neuron time constant
        tauRef:   1.0   # neuron refractory time constant
        scaleRef: 2     # neuron refractory response scaling (relative to theta)
        tauRho:   1     # spike function derivative time constant (relative to theta)
        scaleRho: 1     # spike function derivative scale factor
    
  • simulationDesc (slayerParams.yamlParams): simulation descriptor
    simulation:
        Ts: 1.0         # sampling time (ms)
        tSample: 300    # time length of sample (ms)   
    
  • fullRefKernel (bool, optional): use the high-resolution refractory kernel. It is not intended for use in practice. Default: False

Usage:

>>> snnLayer = slayer.spikeLayer(neuronDesc, simulationDesc)
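
In practice, both descriptors are usually loaded from a single YAML file. A minimal sketch, assuming a file network.yaml containing the neuron and simulation blocks shown above:

>>> import slayerSNN as snn
>>> netParams = snn.params('network.yaml')  # slayerParams.yamlParams instance
>>> snnLayer  = snn.layer(netParams['neuron'], netParams['simulation'])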
conv(inChannels, outChannels, kernelSize, stride=1, padding=0, dilation=1, groups=1, weightScale=100, preHookFx=None)[source]

Returns a function that can be called to apply a conv layer mapping to the input tensor per time instance. It behaves the same as torch.nn.Conv2d applied for each time instance.

Arguments:
  • inChannels (int): number of channels in input

  • outChannels (int): number of channels produced by the convolution

  • kernelSize (int or tuple of two ints): size of the convolving kernel

  • stride (int or tuple of two ints): stride of the convolution. Default: 1

  • padding (int or tuple of two ints): zero-padding added to both sides of the input. Default: 0

  • dilation (int or tuple of two ints): spacing between kernel elements. Default: 1

  • groups (int): number of blocked connections from input channels to output channels. Default: 1

  • weightScale: scale factor for the default-initialized weights. Default: 100

  • preHookFx: a function that operates on the weights before they are applied. It can be used for quantization, etc.

The parameters kernelSize, stride, padding, dilation can either be:

  • a single int – in which case the same value is used for the height and width dimension

  • a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension

Usage:

>>> conv = snnLayer.conv(2, 32, 5) # 32C5 filter
>>> output = conv(input)           # must have 2 channels
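
The preHookFx hook transforms the weights each time they are used. A minimal sketch with a hypothetical fixed-point quantizer (the function quantize below is illustrative, not part of the API):

>>> def quantize(weight):
...     # round the weights to a 1/64 fixed-point grid before they are used
...     return torch.round(weight * 64) / 64
>>> conv = snnLayer.conv(2, 32, 5, preHookFx=quantize)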
convTranspose(inChannels, outChannels, kernelSize, stride=1, padding=0, dilation=1, groups=1, weightScale=100, preHookFx=None)[source]

Returns a function that can be called to apply a transposed conv layer mapping to the input tensor per time instance. It behaves the same as torch.nn.ConvTranspose3d applied for each time instance.

Arguments:
  • inChannels (int): number of channels in input

  • outChannels (int): number of channels produced by transposed convolution

  • kernelSize (int or tuple of two ints): size of the transposed convolution kernel

  • stride (int or tuple of two ints): stride of the transposed convolution. Default: 1

  • padding (int or tuple of two ints): amount of implicit zero-padding added to both sides of the input. Default: 0

  • dilation (int or tuple of two ints): spacing between kernel elements. Default: 1

  • groups (int): number of blocked connections from input channels to output channels. Default: 1

  • weightScale: scale factor for the default-initialized weights. Default: 100

  • preHookFx: a function that operates on the weights before they are applied. It can be used for quantization, etc.

The parameters kernelSize, stride, padding, dilation can either be:

  • a single int – in which case the same value is used for the height and width dimension

  • a tuple of two ints – in which case, the first int is used for the height dimension, and the second is used for the width dimension

Usage:

>>> convT = snnLayer.convTranspose(32, 2, 5) # 2T5 filter, the opposite of the 32C5 filter
>>> output = convT(input)
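
A common use is spatial upsampling. A minimal sketch, assuming the standard transposed-convolution output-size rule (kernel 2, stride 2 doubles Height and Width):

>>> convT = snnLayer.convTranspose(32, 16, 2, stride=2)
>>> output = convT(input)  # (N, 32, H, W, T) -> (N, 16, 2H, 2W, T)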
delay(inputSize)[source]

Returns a function that can be called to apply a delay operation in the time dimension of the input tensor. The delay parameter is available as delay.delay and is initialized uniformly between 0ms and 1ms. The delay parameter is stored as float values; however, it is floored internally when the delay is actually applied. The delay values are not clamped to zero. To maintain the causality of the network, one should clamp the delay values explicitly to ensure positive delays.

Arguments:
  • inputSize (int or tuple of three ints): spatial shape of the input signal in CHW (Channel, Height, Width) format. If an integer value is supplied, it refers to the number of neurons in the Channel dimension; Height and Width are assumed to be 1.

Usage:

>>> delay = snnLayer.delay((C, H, W))
>>> delayedSignal = delay(input)

Always clamp the delay after optimizer.step().

>>> optimizer.step()
>>> delay.delay.data.clamp_(0)  
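
A minimal sketch of pairing a delay with a dense layer inside a torch.nn.Module (the Block class here is illustrative, not part of the API); registering both as submodules lets the optimizer update the delay, after which the clamp above applies:

>>> class Block(torch.nn.Module):
...     def __init__(self, slayer):
...         super(Block, self).__init__()
...         self.slayer = slayer
...         self.fc     = slayer.dense(512, 10)
...         self.delay  = slayer.delay(512)
...     def forward(self, spike):
...         # delay the input spikes, then synapse -> psp -> spike
...         return self.slayer.spike(self.slayer.psp(self.fc(self.delay(spike))))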
delayShift(input, delay, Ts=1)[source]

Applies a delay in the time dimension (assumed to be the last dimension) of the input tensor. The autograd backward link is established as well.

Arguments:
  • input: input Torch tensor.

  • delay (float or Torch tensor): amount of delay to apply. The same delay is applied to all inputs if delay is a float or a Torch tensor of size 1. If the Torch tensor has more than one element, its dimensions must match the dimensions of the input tensor except for the last dimension.

  • Ts: sampling time of the delay. Default is 1.

Usage:

>>> delayedInput = slayer.delayShift(input, 5)
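
A minimal sketch of a per-channel delay (illustrative shapes; per the rule above, the delay tensor matches the input shape in all but the last dimension):

>>> input = torch.rand((1, 64, 1, 1, 300))                       # NCHWT
>>> perChannelDelay = torch.randint(0, 10, (1, 64, 1, 1)).float()
>>> delayedInput = slayer.delayShift(input, perChannelDelay)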
dense(inFeatures, outFeatures, weightScale=10, preHookFx=None)[source]

Returns a function that can be called to apply a dense layer mapping to the input tensor per time instance. It behaves similarly to torch.nn.Linear applied for each time instance.

Arguments:
  • inFeatures (int, tuple of two ints, or tuple of three ints): dimension of the input features in (Width, Height, Channel) order, representing the number of input neurons.

  • outFeatures (int): number of output neurons.

  • weightScale: scale factor for the default-initialized weights. Default: 10

  • preHookFx: a function that operates on the weights before they are applied. It can be used for quantization, etc.

Usage:

>>> fcl = snnLayer.dense(2048, 512)          # takes (N, 2048, 1, 1, T) tensor
>>> fcl = snnLayer.dense((128, 128, 2), 512) # takes (N, 2, 128, 128, T) tensor
>>> output = fcl(input)                      # output will be (N, 512, 1, 1, T) tensor
dropout(p=0.5, inplace=False)[source]

Returns a function that can be called to apply a dropout layer to the input tensor. It behaves similarly to torch.nn.Dropout, except that the dropout mask is preserved over the time dimension, i.e. if a neuron is dropped, it remains dropped for the entire time duration.

Arguments:
  • p: dropout probability.

  • inplace (bool): inplace operation flag.

Usage:

>>> drop = snnLayer.dropout(0.2)
>>> output = drop(input)
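
The time-consistent mask can be checked directly (a minimal sketch with illustrative shapes): every neuron is either zero at all time steps or nonzero at all time steps.

>>> drop = snnLayer.dropout(0.5)
>>> out = drop(torch.ones((1, 100, 1, 1, 50)))
>>> ((out == 0).all(dim=-1) | (out != 0).all(dim=-1)).all()
tensor(True)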
pool(kernelSize, stride=None, padding=0, dilation=1, preHookFx=None)[source]

Returns a function that can be called to apply a pool layer mapping to the input tensor per time instance. It behaves as sum pooling applied for each time instance.

Arguments:
  • kernelSize (int or tuple of two ints): the size of the window to pool over

  • stride (int or tuple of two ints): stride of the window. Default: kernelSize

  • padding (int or tuple of two ints): implicit zero padding to be added on both sides. Default: 0

  • dilation (int or tuple of two ints): a parameter that controls the stride of elements in the window. Default: 1

  • preHookFx: a function that operates on the weights before they are applied. It can be used for quantization, etc.

The parameters kernelSize, stride, padding, dilation can either be:

  • a single int – in which case the same value is used for the height and width dimension

  • a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension

Usage:

>>> pool = snnLayer.pool(4) # 4x4 pooling
>>> output = pool(input)
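
A minimal shape sketch (illustrative sizes): a 4x4 window reduces Height and Width by a factor of 4 and leaves the batch, channel, and time dimensions untouched.

>>> pool = snnLayer.pool(4)
>>> pool(torch.zeros((1, 16, 32, 32, 100))).shape
torch.Size([1, 16, 8, 8, 100])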
psp(spike)[source]

Applies psp filtering to the spikes. The output tensor dimension is the same as the input.

Arguments:
  • spike: input spike tensor.

Usage:

>>> filteredSpike = snnLayer.psp(spike)
pspFilter(nFilter, filterLength, filterScale=1)[source]

Returns a function that can be called to apply a bank of temporal filters. The output tensor has the same dimensions as the input, except that the channel dimension is scaled by the number of filters. The filters are initialized using the default PyTorch initialization for a conv layer, and the filter bank is learnable. NOTE: the learned psp filter must be reversed in time, because PyTorch's convolution actually performs a correlation operation.

Arguments:
  • nFilter: number of filters in the filterbank.

  • filterLength: length of filter in number of time bins.

  • filterScale: initial scaling factor for filter banks. Default: 1.

Usage:

>>> pspFilter = snnLayer.pspFilter(nFilter=16, filterLength=100)
>>> filteredSpike = pspFilter(spike)
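
The channel scaling is visible in the shapes (a minimal sketch with illustrative sizes):

>>> pspFilter = snnLayer.pspFilter(nFilter=4, filterLength=50)
>>> pspFilter(torch.zeros((1, 8, 1, 1, 100))).shape  # 8 channels x 4 filters
torch.Size([1, 32, 1, 1, 100])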
pspLayer()[source]

Returns a function that can be called to apply psp filtering to the spikes. The output tensor dimension is the same as the input. The initial psp filter corresponds to the neuron's psp filter, and it is learnable. NOTE: the learned psp filter must be reversed in time, because PyTorch's convolution actually performs a correlation operation.

Usage:

>>> pspLayer = snnLayer.pspLayer()
>>> filteredSpike = pspLayer(spike)
spike(membranePotential)[source]

Applies the spike function and refractory response. The output tensor dimension is the same as the input. membranePotential is updated to reflect the spiking and refractory behaviour as well.

Arguments:
  • membranePotential: subthreshold membrane potential.

Usage:

>>> outSpike = snnLayer.spike(membranePotential)
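
spike is typically chained with psp and a synaptic mapping to form one complete SNN layer. A minimal sketch, assuming fcl is a dense mapping created by snnLayer.dense as above:

>>> membranePotential = snnLayer.psp(fcl(inputSpikes))  # synapse + psp filtering
>>> outSpike = snnLayer.spike(membranePotential)        # threshold + refractory response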
unpool(kernelSize, stride=None, padding=0, dilation=1, preHookFx=None)[source]

Returns a function that can be called to apply an unpool layer mapping to the input tensor per time instance. It behaves like the torch.nn unpooling layers applied for each time instance.

Arguments:
  • kernelSize (int or tuple of two ints): the size of the window to unpool over

  • stride (int or tuple of two ints): stride of the window. Default: kernelSize

  • padding (int or tuple of two ints): implicit zero padding to be added on both sides. Default: 0

  • dilation (int or tuple of two ints): a parameter that controls the stride of elements in the window. Default: 1

  • preHookFx: a function that operates on the weights before they are applied. It can be used for quantization, etc.

The parameters kernelSize, stride, padding, dilation can either be:

  • a single int – in which case the same value is used for the height and width dimension

  • a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension

Usage:

>>> unpool = snnLayer.unpool(2) # 2x2 unpooling
>>> output = unpool(input)
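
Putting the pieces together, a minimal end-to-end sketch of a network built from this module (the Network class, the layer sizes, and the 2-channel 32x32 input are illustrative, not prescribed by the API):

>>> import torch
>>> import slayerSNN as snn
>>>
>>> class Network(torch.nn.Module):
...     def __init__(self, netParams):
...         super(Network, self).__init__()
...         self.slayer = snn.layer(netParams['neuron'], netParams['simulation'])
...         self.conv1  = self.slayer.conv(2, 16, 5, padding=2)  # 16C5 filter
...         self.pool1  = self.slayer.pool(2)                    # 2x2 sum pooling
...         self.fc1    = self.slayer.dense((16, 16, 16), 10)    # 16x16x16 -> 10 neurons
...     def forward(self, spikeInput):
...         # each stage: synaptic mapping -> psp filtering -> spiking
...         s = self.slayer.spike(self.slayer.psp(self.conv1(spikeInput)))  # (N, 16, 32, 32, T)
...         s = self.slayer.spike(self.slayer.psp(self.pool1(s)))           # (N, 16, 16, 16, T)
...         return self.slayer.spike(self.slayer.psp(self.fc1(s)))          # (N, 10, 1, 1, T)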