
3D object detection using point cloud (PC) data is essential for perception
pipelines of autonomous driving, where efficient encoding is key to meeting
stringent resource and latency requirements. PointPillars, a widely adopted
bird's-eye view (BEV) encoding, aggregates 3D point cloud data into 2D pillars
for fast and accurate 3D object detection. However, state-of-the-art
methods employing PointPillars overlook the inherent sparsity of pillar
encoding, in which only valid pillars are encoded with a vector of channel
elements, and thus miss opportunities for significant computational reduction.
Meanwhile, current sparse convolution accelerators are designed to handle only
element-wise activation sparsity and do not effectively address the vector
sparsity imposed by pillar encoding.
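
To make the vector-sparsity observation concrete, the sketch below scatters a
point cloud onto a BEV pillar grid and measures how many pillars are actually
occupied; every empty pillar would still contribute a full zero channel vector
to a dense PointPillars feature map. The grid size and point-cloud range are
illustrative assumptions (typical PointPillars settings), not values taken
from the paper.

```python
import numpy as np

def pillarize(points, grid=(496, 432), pc_range=((0.0, -39.68), (69.12, 39.68))):
    """Scatter a point cloud into a BEV pillar grid and report vector sparsity.

    points   : (N, 3) array of x, y, z coordinates.
    grid     : number of pillars along x and y (assumed, typical PointPillars).
    pc_range : ((x_min, y_min), (x_max, y_max)) detection range (assumed).
    Returns the set of occupied pillar indices and the fraction of pillars
    whose whole channel vector would be zero in a dense BEV feature map.
    """
    (x_min, y_min), (x_max, y_max) = pc_range
    nx, ny = grid
    # Map each point to its pillar (grid cell) index in the BEV plane.
    ix = np.floor((points[:, 0] - x_min) / (x_max - x_min) * nx).astype(int)
    iy = np.floor((points[:, 1] - y_min) / (y_max - y_min) * ny).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    occupied = set(zip(ix[valid].tolist(), iy[valid].tolist()))

    # A dense backbone convolves all nx * ny pillar vectors, occupied or not;
    # vector sparsity is the share of pillars it could skip entirely.
    vector_sparsity = 1.0 - len(occupied) / (nx * ny)
    return occupied, vector_sparsity

if __name__ == "__main__":
    # Synthetic point cloud; real LiDAR scans are far less uniform and sparser.
    pts = np.random.uniform([0.0, -39.68, -3.0], [69.12, 39.68, 1.0], size=(20000, 3))
    occ, sparsity = pillarize(pts)
    print(f"occupied pillars: {len(occ)}, vector sparsity: {sparsity:.1%}")
```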


In this paper, we propose SPADE, an algorithm-hardware co-design strategy to
maximize vector sparsity in pillar-based 3D object detection and accelerate
vector-sparse convolution commensurate with the improved sparsity. SPADE
consists of three components: (1) a dynamic vector pruning algorithm balancing
accuracy and computation savings from vector sparsity, (2) a sparse coordinate
management hardware transforming 2D systolic array into a vector-sparse
convolution accelerator, and (3) sparsity-aware dataflow optimization tailoring
sparse convolution schedules for hardware efficiency. Taped out with a
commercial technology, SPADE reduces the amount of computation by 36.3--89.2\%
for representative 3D object detection networks and benchmarks, leading to
1.3--10.9$\times$ speedup and 1.5--12.6$\times$ energy savings compared to the
ideal dense accelerator design. These sparsity-proportional performance gains
equate to 4.1--28.8$\times$ speedup and 90.2--372.3$\times$ energy savings
compared to the counterpart server and edge platforms.
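
The abstract does not spell out the dynamic vector pruning algorithm, so the
following is only a minimal sketch of the general idea behind vector
(pillar-level) pruning: rank occupied pillar feature vectors by magnitude and
zero out whole vectors below a keep-ratio cutoff, so a vector-sparse
convolution can skip the pruned pillars outright. The keep-ratio heuristic and
function names are assumptions for illustration, not SPADE's actual method,
which balances accuracy and computation savings dynamically.

```python
import numpy as np

def prune_pillar_vectors(features, keep_ratio=0.5):
    """Zero out whole pillar feature vectors (vector-pruning sketch).

    features   : (P, C) array, one C-channel vector per occupied pillar.
    keep_ratio : fraction of pillar vectors to keep, by descending L2 norm
                 (assumed heuristic; SPADE tunes this trade-off dynamically).
    Returns the pruned feature array and a boolean mask of surviving pillars.
    """
    norms = np.linalg.norm(features, axis=1)
    k = max(1, int(keep_ratio * features.shape[0]))
    # Keep the k strongest pillar vectors; a vector-sparse convolution can
    # then skip the pruned pillars instead of multiplying by blocks of zeros.
    threshold = np.partition(norms, -k)[-k]
    mask = norms >= threshold
    pruned = np.where(mask[:, None], features, 0.0)
    return pruned, mask

if __name__ == "__main__":
    feats = np.random.randn(1000, 64).astype(np.float32)
    pruned, mask = prune_pillar_vectors(feats, keep_ratio=0.4)
    print(f"kept {mask.sum()} / {mask.size} pillar vectors")
```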
