Quantize module

slayerSNN.quantizeParams.quantize(weights, step=1)[source]

This function provides a wrapper around quantizeWeights.

Arguments:
  • weights: full precision weight tensor.

  • step: quantization step size. Default: 1

Usage:

>>> # Quantize weights in steps of 0.5
>>> stepWeights = quantize(fullWeights, step=0.5)
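
Since the function is described as a wrapper around quantizeWeights, it presumably just forwards its arguments to that autograd function's apply method. A minimal sketch of such a wrapper, assuming quantizeWeights is importable from slayerSNN.quantizeParams as documented below (not necessarily the library's exact source):

>>> from slayerSNN.quantizeParams import quantizeWeights
>>>
>>> def quantize(weights, step=1):
...     # Delegate to the autograd function: forward quantizes the weights,
...     # backward passes the gradient through unchanged
...     return quantizeWeights.apply(weights, step)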
class slayerSNN.quantizeParams.quantizeWeights[source]

This class provides a routine to quantize the weights during the forward propagation pass. The backward pass returns the gradient as it is, without any modification.

Arguments:
  • weights: full precision weight tensor.

  • step: quantization step size. Default: 1

Usage:

>>> # Quantize weights in steps of 0.5
>>> stepWeights = quantizeWeights.apply(fullWeights, 0.5)
static backward(ctx, gradOutput)[source]
static forward(ctx, weights, step=1)[source]
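
Because forward rounds the weights to the quantization grid while backward returns the incoming gradient unchanged, the class acts as a straight-through estimator: the forward pass sees quantized weights, but training updates the full-precision weights as if no quantization had occurred. A minimal sketch of such an autograd function (a plausible implementation, not necessarily the library's exact source):

>>> import torch
>>>
>>> class quantizeWeights(torch.autograd.Function):
...     @staticmethod
...     def forward(ctx, weights, step=1):
...         # Round every weight to the nearest multiple of step
...         return torch.round(weights / step) * step
...
...     @staticmethod
...     def backward(ctx, gradOutput):
...         # Straight-through estimator: pass the gradient through unmodified;
...         # the step argument receives no gradient
...         return gradOutput, None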