SLAYER Loihi module

class slayerSNN.slayerLoihi.spikeLayer(neuronDesc, simulationDesc)[source]

This class defines the main engine of the SLAYER Loihi module. It is derived from slayer.spikeLayer with Loihi-specific implementations of the neuron model and weight quantization. All of the routines available for slayer.spikeLayer are applicable.

Arguments:
  • neuronDesc (slayerParams.yamlParams): spiking neuron descriptor.
    neuron:
        type:     LOIHI # neuron type
        vThMant:  80    # neuron threshold mantissa
        vDecay:   128   # compartment voltage decay
        iDecay:   1024  # compartment current decay
        refDelay: 1     # refractory delay
        wgtExp:   0     # weight exponent
        tauRho:   1     # spike function derivative time constant (relative to theta)
        scaleRho: 1     # spike function derivative scale factor
    
  • simulationDesc (slayerParams.yamlParams): simulation descriptor
    simulation:
        Ts: 1.0         # sampling time (ms)
        tSample: 300    # time length of sample (ms)
    

Usage:

>>> snnLayer = slayerLoihi.spikeLayer(neuronDesc, simulationDesc)
conv(inChannels, outChannels, kernelSize, stride=1, padding=0, dilation=1, groups=1, weightScale=100, preHookFx=<function spikeLayer.<lambda>>)[source]

This function behaves similarly to slayer.spikeLayer.conv(). The only difference is that the weights are quantized in steps of 2 (as is the case for signed weights in Loihi). The quantization step can, however, be skipped altogether.

Arguments:

The arguments that differ from slayer.spikeLayer.conv() are listed.

  • weightScale: scale factor for the default initialized weights. Default: 100

  • preHookFx: a function applied to the weights before they are used. It can be used for quantization and similar transformations. Default: quantizes in steps of 2.

Usage:

Same as slayer.spikeLayer.conv()
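As an illustration of what the default preHookFx does, the following is a minimal pure-Python sketch of rounding weights to steps of 2. The library's actual hook operates on PyTorch tensors (and lets gradients pass through); the function name quantize_step2 here is hypothetical, for illustration only.

```python
def quantize_step2(weights, step=2):
    """Round each weight to the nearest multiple of `step`.

    Sketch of the step-of-2 quantization the default preHookFx
    performs for Loihi's signed weights. The real hook works on
    PyTorch tensors; this version only shows the rounding itself.
    """
    return [round(w / step) * step for w in weights]
```

Passing preHookFx=lambda x: x instead would skip the quantization step altogether, as noted above.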

convTranspose(inChannels, outChannels, kernelSize, stride=1, padding=0, dilation=1, groups=1, weightScale=100, preHookFx=<function spikeLayer.<lambda>>)[source]

This function behaves similarly to slayer.spikeLayer.convTranspose(). The only difference is that the weights are quantized in steps of 2 (as is the case for signed weights in Loihi). The quantization step can, however, be skipped altogether.

Arguments:

The arguments that differ from slayer.spikeLayer.convTranspose() are listed.

  • weightScale: scale factor for the default initialized weights. Default: 100

  • preHookFx: a function applied to the weights before they are used. It can be used for quantization and similar transformations. Default: quantizes in steps of 2.

Usage:

Same as slayer.spikeLayer.convTranspose()

dense(inFeatures, outFeatures, weightScale=100, preHookFx=<function spikeLayer.<lambda>>)[source]

This function behaves similarly to slayer.spikeLayer.dense(). The only difference is that the weights are quantized in steps of 2 (as is the case for signed weights in Loihi). The quantization step can, however, be skipped altogether.

Arguments:

The arguments that differ from slayer.spikeLayer.dense() are listed.

  • weightScale: scale factor for the default initialized weights. Default: 100

  • preHookFx: a function applied to the weights before they are used. It can be used for quantization and similar transformations. Default: quantizes in steps of 2.

Usage:

Same as slayer.spikeLayer.dense()

pool(kernelSize, stride=None, padding=0, dilation=1, preHookFx=None)[source]

This function behaves similarly to slayer.spikeLayer.pool(). The only difference is that the weights are quantized in steps of 2 (as is the case for signed weights in Loihi). The quantization step can, however, be skipped altogether.

Arguments:

The argument set is the same as slayer.spikeLayer.pool().

Usage:

Same as slayer.spikeLayer.pool()

spikeLoihi(weightedSpikes)[source]

Applies Loihi neuron dynamics to weighted spike inputs and returns the output spike tensor. The output tensor has the same dimensions as the input.

NOTE: This function differs from the default spike function, which takes the membrane potential (weighted spikes with the PSP filter applied). Since the dynamics are modeled internally, this function takes the weighted spikes directly (NOT filtered with PSP) for an accurate Loihi neuron simulation.

Arguments:
  • weightedSpikes: input spikes weighted by their corresponding synaptic weights.

Usage:

>>> outSpike = snnLayer.spikeLoihi(weightedSpikes)
spikeLoihiFull(weightedSpikes)[source]

Applies Loihi neuron dynamics to weighted spike inputs and returns the output spike, voltage, and current. The output tensors have the same dimensions as the input.

NOTE: This function does not have an autograd routine in the computational graph.

Arguments:
  • weightedSpikes: input spikes weighted by their corresponding synaptic weights.

Usage:

>>> outSpike, outVoltage, outCurrent = snnLayer.spikeLoihiFull(weightedSpikes)
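The per-compartment dynamics these two functions model can be sketched in pure Python. This is a simplified fixed-point CUBA model, assuming the common Loihi decay form state * (4096 - decay) / 4096 and threshold vThMant * 2**6; the hardware's exact bit-level rounding and refractory handling may differ, and loihi_neuron is a hypothetical helper, not part of the library.

```python
def loihi_neuron(weighted_spikes, vth_mant=80, v_decay=128, i_decay=1024,
                 ref_delay=1):
    """Sketch of Loihi CUBA dynamics for a single compartment.

    weighted_spikes: per-timestep weighted input (list of numbers).
    Returns (spikes, voltages, currents), mirroring the
    (spike, voltage, current) outputs of spikeLoihiFull.
    Decay constants are 12-bit (0..4096); details are simplified.
    """
    vth = vth_mant * 64                              # threshold = vThMant * 2**6
    u = v = 0
    refractory = 0
    spikes, voltages, currents = [], [], []
    for x in weighted_spikes:
        u = u * (4096 - i_decay) // 4096 + int(x)    # current: decay + input
        if refractory > 0:
            refractory -= 1
            v = 0                                    # voltage held at reset
        else:
            v = v * (4096 - v_decay) // 4096 + u     # voltage: decay + current
        s = 0
        if v >= vth:
            s = 1
            v = 0                                    # reset on spike
            refractory = ref_delay
        spikes.append(s)
        voltages.append(v)
        currents.append(u)
    return spikes, voltages, currents
```

With the default descriptor values above (vThMant: 80, vDecay: 128, iDecay: 1024, refDelay: 1), a single large input spikes immediately, the compartment then sits out one refractory step, and the current decays by a factor of 3/4 each step.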
unpool(kernelSize, stride=None, padding=0, dilation=1, preHookFx=None)[source]

This function behaves similarly to slayer.spikeLayer.unpool(). The only difference is that the weights are quantized in steps of 2 (as is the case for signed weights in Loihi). The quantization step can, however, be skipped altogether.

Arguments:

The argument set is the same as slayer.spikeLayer.unpool().

Usage:

Same as slayer.spikeLayer.unpool()