Compressing deep networks is highly desirable for practical use cases in computer vision applications. Several techniques have been explored in the literature, and research has been done on finding efficient strategies for combining them. In this project, we explore three basic compression techniques, knowledge distillation, pruning, and quantization, for small-scale recognition tasks. Along with the basic methods, we also test the efficacy of combining them sequentially. We analyze them on the MNIST and CIFAR-10 datasets and present the results along with a few observations inferred from them.
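Two of the techniques named above, magnitude pruning and post-training quantization, can be illustrated on a single weight matrix. The sketch below is not the paper's implementation; the weight values, the 50% sparsity target, and the per-tensor int8 scheme are all illustrative assumptions.

```python
import numpy as np

# Illustrative stand-in for one layer's trained weights (not from the paper).
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)

# 1) Magnitude pruning: zero out the 50% of weights with the smallest |w|.
threshold = np.quantile(np.abs(w), 0.5)
pruned = np.where(np.abs(w) >= threshold, w, 0.0)

# 2) Post-training quantization: map the surviving weights to int8 with a
#    single per-tensor scale, then dequantize as would happen at inference.
scale = np.abs(pruned).max() / 127.0
q = np.clip(np.round(pruned / scale), -127, 127).astype(np.int8)
dequant = q.astype(np.float32) * scale

sparsity = float((pruned == 0).mean())
max_err = float(np.abs(dequant - pruned).max())
print(f"sparsity: {sparsity:.2f}, max quantization error: {max_err:.4f}")
```

Applied sequentially, as the abstract describes, the pruned-and-quantized weights would then replace the originals, with the rounding error bounded by half the quantization scale.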