Deep neural networks (DNNs) are typically deployed on electronic
computational units (e.g., CPUs and GPUs). Such a design can lead to a heavy
computational burden, significant latency, and intensive power consumption,
which are critical limitations in applications such as the Internet of Things
(IoT), edge computing, and drone applications. Recent advances in optical
computational units (e.g., metamaterials) have shed light on energy-free and
light-speed neural networks. However, the digital design of the metamaterial
neural network (MNN) is fundamentally constrained by physical restrictions,
such as limited precision, noise, and bandwidth during fabrication. Moreover,
the unique advantages of MNNs (e.g., light-speed computation) are not fully
exploited by standard 3x3 convolution kernels. In this paper, we propose a novel large
kernel metamaterial neural network (LMNN) that maximizes the digital capacity
of the state-of-the-art (SOTA) MNN with model re-parametrization and network
compression, while explicitly accounting for optical limitations. The new
digital learning scheme maximizes the learning capacity of the MNN while
modeling the physical restrictions of meta-optics. With the proposed LMNN, the
computation cost of the convolutional front-end can be offloaded into
fabricated optical hardware. The experimental results on two publicly available
datasets demonstrate that the optimized hybrid design improves classification
accuracy while reducing computational latency. The development of the proposed
LMNN is a promising step towards the ultimate goal of energy-free and
light-speed AI.
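The abstract does not detail the re-parameterization scheme. As an illustration only, a common structural re-parameterization trick trains a large kernel alongside a small parallel kernel, then merges the small branch into the large kernel (zero-padded and centered) for inference, since convolution is linear in the kernel. The sketch below demonstrates this equivalence in 1-D with hypothetical kernel sizes (7-tap and 3-tap); it is not the paper's actual method.

```python
def conv1d_same(x, k):
    # Cross-correlation with zero padding so the output length equals
    # len(x); kernel length must be odd.
    r = len(k) // 2
    padded = [0.0] * r + list(x) + [0.0] * r
    return [sum(padded[i + j] * k[j] for j in range(len(k)))
            for i in range(len(x))]

def pad_kernel(k, target):
    # Zero-pad a small odd-length kernel to a larger odd length, centered.
    extra = (target - len(k)) // 2
    return [0.0] * extra + list(k) + [0.0] * extra

x = [0.5, -1.0, 2.0, 3.0, -0.5, 1.5, 0.0, 2.5]
K_large = [0.1, -0.2, 0.3, 0.5, 0.3, -0.2, 0.1]  # 7-tap "large" kernel
k_small = [0.25, 0.5, 0.25]                      # 3-tap parallel branch

# Training time: two parallel branches, outputs summed.
two_branch = [a + b for a, b in zip(conv1d_same(x, K_large),
                                    conv1d_same(x, k_small))]

# Inference time: branches merged into a single large kernel.
merged = [a + b for a, b in zip(K_large, pad_kernel(k_small, 7))]
one_branch = conv1d_same(x, merged)

# The merged single-branch output matches the two-branch output.
assert all(abs(a - b) < 1e-9 for a, b in zip(two_branch, one_branch))
```

Because the merged model is a single large-kernel convolution, that front-end can in principle be realized once in fixed optical hardware, which is the offloading the abstract describes.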