


Deep convolutional neural networks often carry substantial parametric and
computational redundancy in many application scenarios, and a growing number of
works have explored model pruning to obtain lightweight and efficient networks.
However, most existing pruning approaches are driven by empirical heuristics
and rarely consider the joint impact of channels, leading to unguaranteed and
suboptimal performance. In this paper, we propose a novel channel pruning
method via Class-Aware Trace Ratio Optimization (CATRO) to reduce the
computational burden and accelerate model inference. Utilizing class
information from a few samples, CATRO measures the joint impact of multiple
channels by feature-space discrimination and consolidates the layer-wise impact
of preserved channels. By formulating channel pruning as a submodular
set-function maximization problem, CATRO solves it efficiently via a two-stage
greedy iterative optimization procedure. More importantly, we present
theoretical justifications for the convergence of CATRO and the performance of
pruned networks. Experimental results demonstrate that CATRO achieves higher
accuracy at similar computational cost, or lower computational cost at similar
accuracy, than other state-of-the-art channel pruning algorithms. In addition,
because of its class-aware property, CATRO is well suited to pruning efficient
networks adaptively for various classification subtasks, facilitating
convenient deployment and use of deep networks in real-world applications.
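To make the "submodular set-function maximization solved greedily" idea concrete, here is a minimal sketch of greedy channel selection. It is not the CATRO algorithm itself: the scoring function below (a log-determinant diversity score over channel activations, which is a standard monotone submodular objective) is a hypothetical stand-in for the paper's class-aware trace-ratio criterion, and all names (`greedy_channel_select`, `features`) are illustrative.

```python
import numpy as np

def greedy_channel_select(features, k):
    """Greedily pick k channels that maximize a submodular score.

    features: (n_samples, n_channels) array of channel activations.
    Score used here: log det(I + G_S), where G_S is the Gram matrix
    of the selected channels -- a placeholder for CATRO's
    class-aware trace-ratio objective.
    """
    n, c = features.shape
    selected = []
    remaining = set(range(c))

    def score(subset):
        F = features[:, subset]                  # (n, |S|) submatrix
        G = F.T @ F / n                          # channel Gram matrix
        # slogdet returns (sign, log|det|); keep the log-determinant
        return np.linalg.slogdet(np.eye(len(subset)) + G)[1]

    for _ in range(k):
        # Greedy step: add the channel with the largest marginal gain
        best = max(remaining, key=lambda j: score(selected + [j]))
        remaining.remove(best)
        selected.append(best)
    return selected
```

Greedy maximization is attractive here because, for monotone submodular objectives, it carries a classical (1 - 1/e) approximation guarantee, which is the kind of performance assurance heuristic importance scores lack.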

ID: 33603; Views: 7
Latest Change: April 1, 2023, 7:34 a.m.
CC: No Creative Commons license