Video analysis is a major computer vision task that has received a lot of
attention in recent years. The current state-of-the-art performance for video
analysis is achieved with Deep Neural Networks (DNNs) that have high
computational costs and need large amounts of labeled data for training.
Spiking Neural Networks (SNNs), when implemented on neuromorphic hardware,
have computational costs thousands of times lower than regular non-spiking
networks. They have been used for video analysis with methods like
3D Convolutional Spiking Neural Networks (3D CSNNs). However, these networks
have significantly more parameters than spiking 2D CSNNs. This not only
increases the computational costs, but also makes these networks more
difficult to implement on neuromorphic hardware. In this work, we use
CSNNs trained in an unsupervised manner with the Spike Timing-Dependent
Plasticity (STDP) rule, and we introduce, for the first time, Spiking Separated
Spatial and Temporal Convolutions (S3TCs) in order to reduce the number of
parameters required for video analysis. This unsupervised learning has the
advantage of not needing large amounts of labeled data for training.
Factorizing a single spatio-temporal spiking convolution into a spatial and a
temporal spiking convolution decreases the number of parameters of the network.
We test our network with the KTH, Weizmann, and IXMAS datasets, and we show
that S3TCs successfully extract spatio-temporal information from videos,
while increasing the output spiking activity and outperforming spiking 3D
convolutions.
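The parameter savings from the factorization can be illustrated with a short sketch. The abstract does not give the paper's layer sizes, so the channel counts and kernel sizes below are illustrative assumptions; the formulas are the standard ones for a 3D convolution versus a (spatial + temporal) separated pair with an intermediate channel count.

```python
# Sketch: parameter counts of a full 3D convolution vs. a separated
# spatial + temporal pair (as in S3TC-style factorization).
# All sizes below are illustrative assumptions, not values from the paper.

def conv3d_params(c_in, c_out, kt, kh, kw):
    # A single spatio-temporal kernel spans time, height, and width.
    return c_in * c_out * kt * kh * kw

def separated_params(c_in, c_mid, c_out, kt, kh, kw):
    # Spatial convolution (1 x kh x kw) followed by
    # temporal convolution (kt x 1 x 1).
    spatial = c_in * c_mid * kh * kw
    temporal = c_mid * c_out * kt
    return spatial + temporal

# Example: 64 -> 64 channels, 3x3x3 kernel, intermediate width 64.
full = conv3d_params(64, 64, 3, 3, 3)        # 110592
sep = separated_params(64, 64, 64, 3, 3, 3)  # 36864 + 12288 = 49152
print(full, sep, sep / full)                 # factorized uses ~44% of params
```

With these (assumed) sizes, the separated pair needs fewer than half the parameters of the full 3D convolution, which is the kind of reduction that makes the network easier to map onto neuromorphic hardware.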