Design of spiking neural network architecture based on dendritic computation principles

Abstract

The paper presents the hardware architecture design of a spiking neural network (SNN) based on dendritic computation principles. Integrating active dendritic properties into the neuronal structure of the SNN is intended to minimize the number of functional blocks, such as synaptic connections and neurons, required for hardware implementation. This reduction is necessary because the memory available on a neuromorphic architecture limits what can be implemented.

As a test task for the SNN based on dendritic computations, we selected the classification of images of eight symbols: the digits one through eight. The symbols are depicted as 3×7-pixel, 1-bit images.

Active dendritic properties were analyzed using the “delay plasticity” principle [1], which introduces a mechanism for adjusting the delays of input signals at the inputs of a spiking neuron. As a proof-of-principle implementation, we designed an SNN model with complementary delay inputs, referred to as the active dendrite SNN. Input spikes arriving at the primary inputs are duplicated to the delay inputs after a modifiable time delay. For convenience, a single delay value was used for all delay inputs.
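As an illustration, the sketch below models this spike duplication; it is a minimal sketch assuming an event-list representation, and the function and variable names are hypothetical rather than taken from the paper.

```python
# Hypothetical sketch of complementary delay inputs: every spike on a
# primary input is duplicated onto a paired delay input after a
# modifiable delay (a single shared value is assumed, as in the text).

def expand_with_delay_inputs(spike_times, delay_us=5.0):
    """spike_times: dict mapping primary input index (0..n-1) to a list
    of spike times in microseconds. Returns a dict that also covers the
    paired delay inputs (n..2n-1)."""
    n = len(spike_times)
    expanded = {i: list(times) for i, times in spike_times.items()}
    for i, times in spike_times.items():
        # Delay input i + n replays input i's spikes after delay_us.
        expanded[i + n] = [t + delay_us for t in times]
    return expanded

# Example: a spike at t = 2 us on primary input 0 reappears at
# t = 7 us on delay input 3 (three primary inputs assumed).
print(expand_with_delay_inputs({0: [2.0], 1: [], 2: [4.0]}))
```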

The input images were scanned row by row. The network has six main inputs, three direct and three inverse, which receive spikes encoding the three pixels of the current row. An “on” pixel was coded with a spike arriving at a direct input, while an “off” pixel was coded with a spike arriving at the corresponding inverse input. The row scanning time was 10 μs, the input spike width was 1 μs, and the delay time was 5 μs.
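The sketch below illustrates this coding scheme; the event-list output format and the input indexing (0–2 direct, 3–5 inverse) are assumptions made for illustration.

```python
# Hedged sketch of the row-scan spike coding: each 3-pixel row of a
# 3x7 binary image is presented every 10 us; an "on" pixel produces a
# spike on its direct input, an "off" pixel on the paired inverse input.

ROW_PERIOD_US = 10.0  # row scanning time from the text

def encode_image(image_rows):
    """image_rows: 7 rows of 3 binary pixels (0 or 1). Returns a sorted
    list of (time_us, input_index) spike events, where inputs 0-2 are
    direct and 3-5 are the corresponding inverse inputs."""
    events = []
    for r, row in enumerate(image_rows):
        t = r * ROW_PERIOD_US
        for c, pixel in enumerate(row):
            # Direct input for "on" pixels, inverse input for "off".
            events.append((t, c if pixel else c + 3))
    return sorted(events)

# Example: a top row of "010" yields spikes on inverse input 3,
# direct input 1, and inverse input 5, all at t = 0 us.
print(encode_image([[0, 1, 0]] + [[0, 0, 0]] * 6)[:3])
```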

The spiking neuron parameters were optimized with a stochastic search algorithm based on simulated annealing. The optimized parameters of the Leaky Integrate-and-Fire (LIF) neurons were the leakage time constant (22.8 μs), the firing threshold (1150 arbitrary units), and the refractory period (1 μs).
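For reference, a minimal discrete-time LIF neuron with the quoted values might look as follows; the exponential-decay update, the reset-to-zero behavior, and the time step are assumptions, as the paper does not specify the neuron’s internal equations.

```python
import math

# Minimal discrete-time LIF neuron sketch using the optimized values
# quoted above. The update scheme, reset behavior, and time step are
# illustrative assumptions, not the paper's exact hardware design.

class LIFNeuron:
    def __init__(self, tau_us=22.8, threshold=1150.0, refractory_us=1.0):
        self.tau = tau_us              # leakage time constant, us
        self.threshold = threshold     # firing threshold, a.u.
        self.refractory = refractory_us
        self.v = 0.0                   # membrane potential, a.u.
        self.refract_left = 0.0        # remaining refractory time, us

    def step(self, input_current, dt_us=0.1):
        """Advance one time step; return True if the neuron fires."""
        if self.refract_left > 0.0:
            # Inputs are ignored while the neuron is refractory.
            self.refract_left -= dt_us
            return False
        # Exponential leak toward zero, then integrate the input.
        self.v = self.v * math.exp(-dt_us / self.tau) + input_current
        if self.v >= self.threshold:
            self.v = 0.0
            self.refract_left = self.refractory
            return True
        return False
```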

The active dendrite SNN was trained with the tempotron learning rule [2]. The training optimized the following parameters: the maximum change in synaptic weight on potentiation and depression (0.7 and –3 arbitrary units, respectively) and the upper bound on synaptic weight (195 arbitrary units).
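A hedged sketch of a tempotron-style update with these values is given below; the kernel-weighted contribution of each synapse at the time of maximum membrane potential is assumed to be computed elsewhere, and the clipping scheme is illustrative rather than the paper’s exact rule.

```python
# Tempotron-style weight update [2] with the parameter values quoted
# above. contrib[i] is synapse i's normalized PSP contribution at the
# time of maximum membrane potential (assumed computed elsewhere).

DW_POT = 0.7    # max weight change on potentiation, a.u.
DW_DEP = -3.0   # max weight change on depression, a.u.
W_MAX = 195.0   # upper bound on synaptic weight, a.u.

def tempotron_update(weights, contrib, fired, should_fire):
    """Update the weight list in place after one training pattern."""
    if fired == should_fire:
        return weights  # correct response: no update
    dw_max = DW_POT if should_fire else DW_DEP
    for i, c in enumerate(contrib):
        # Missed spike -> potentiate; spurious spike -> depress,
        # each scaled by the synapse's contribution and bounded above.
        weights[i] = min(weights[i] + dw_max * c, W_MAX)
    return weights
```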

The complementary delay inputs enabled the SNN neurons to learn, during training, the order in which input patterns arrived.

The paper compares an SNN architecture based on dendritic computations to our previously designed two-layer SNN with a hidden perceptron layer and an output layer consisting of LIF neurons [3].

The two-layer SNN, which used the same LIF neuron design, input image coding, and LIF layer structure as the proposed architecture, recognized 3×5 images of three symbols with 10 neurons and 63 synapses. In contrast, the active dendrite SNN recognized 3×7 images of eight symbols with only four neurons and 48 synaptic weights.

In conclusion, incorporating active dendrite properties into the SNN architecture for image recognition optimized the use of functional blocks, reducing the number of neurons from 10 to four (by 60%) and the number of synapses from 63 to 48 (by approximately 24%).


ADDITIONAL INFORMATION

Authors’ contribution. All authors made a substantial contribution to the conception of the work, acquisition, analysis, interpretation of data for the work, drafting and revising the work, final approval of the version to be published and agree to be accountable for all aspects of the work.

Funding sources. This work was supported by the Ministry of Science and Higher Education of the Russian Federation, grant No. FSEE-2020-0013.

Competing interests. The authors declare that they have no competing interests.


About the authors

I. A. Mavrin

Saint Petersburg Electrotechnical University “LETI”

Author for correspondence.
Email: iamavrin@etu.ru
Russian Federation, Saint Petersburg

E. A. Ryndin

Saint Petersburg Electrotechnical University “LETI”

Russian Federation, Saint Petersburg

N. V. Andreeva

Saint Petersburg Electrotechnical University “LETI”

Russian Federation, Saint Petersburg

V. V. Luchinin

Saint Petersburg Electrotechnical University “LETI”

Russian Federation, Saint Petersburg

References

  1. Acharya J, Basu A, Legenstein R, et al. Dendritic computing: branching deeper into machine learning. Neuroscience. 2022;489:275–289. doi: 10.1016/j.neuroscience.2021.10.001
  2. Gütig R, Sompolinsky H. The tempotron: a neuron that learns spike timing–based decisions. Nat Neurosci. 2006;9(3):420–428. doi: 10.1038/nn1643
  3. Ryndin EA, Mavrin IA, Andreeva NV, Luchinin VV. Neuromorphic electronic module, focused on the use of memristor ECB, for image recognition. Nano- and Microsystems Technology. 2022;24(6):293–303. doi: 10.17587/nmst.24.293-303


Copyright (c) 2023 Eco-Vector

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
