
Modern deep networks are highly complex, and their inferential outcomes are very
hard to interpret. This is a serious obstacle to their transparent deployment
in safety-critical or bias-aware applications. This work contributes to
post-hoc interpretability, and specifically to Network Dissection. Our goal is to
present a framework that makes it easier to discover the individual
functionality of each neuron in a network trained on a vision task; discovery
is performed via textual description generation. To achieve this
objective, we leverage: (i) recent advances in multimodal vision-text models,
and (ii) network layers founded upon the novel concept of stochastic local
competition between linear units. In this setting, only a small subset of layer
neurons is activated for a given input, leading to extremely high activation
sparsity (as low as $\approx 4\%$). Crucially, our proposed method infers
(sparse) neuron activation patterns that enable the neurons to
activate/specialize on inputs with specific characteristics, diversifying their
individual functionality. This capacity supercharges the
potential of the dissection process: human-understandable descriptions are
generated only for the very few active neurons, thus facilitating direct
investigation of the network's decision process. As we experimentally show, our
approach: (i) yields vision networks that retain or improve classification
performance, and (ii) realizes a principled framework for text-based
description and examination of the generated neuronal representations.
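The stochastic local competition idea can be illustrated with a minimal sketch: linear units are grouped into blocks, and within each block a single winner is sampled from a softmax over the block's pre-activations while the rest are zeroed, giving an activation sparsity of 1/block_size. All names and details below are illustrative assumptions, not the paper's actual implementation.

```python
import math
import random

def lwta_layer(x, weights, block_size, rng=random.Random(0)):
    """Sketch of a stochastic local winner-take-all (LWTA) layer.

    `weights` holds one weight vector per linear unit; units are grouped
    into consecutive blocks of `block_size`. In each block, exactly one
    unit "wins" (sampled from a softmax over the block's pre-activations)
    and all other units in the block output zero.
    """
    # Pre-activations: one linear response per unit.
    pre = [sum(wi * xi for wi, xi in zip(w, x)) for w in weights]
    out = [0.0] * len(pre)
    for start in range(0, len(pre), block_size):
        block = pre[start:start + block_size]
        # Softmax over the block's pre-activations -> sampling probabilities.
        m = max(block)
        exps = [math.exp(a - m) for a in block]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Sample the single winning unit for this block.
        winner = rng.choices(range(len(block)), weights=probs, k=1)[0]
        out[start + winner] = block[winner]
    return out
```

With, say, 32 units and `block_size=8`, at most 4 units are active for any input, which is the kind of extreme sparsity that lets dissection focus its textual descriptions on the few active neurons.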

Latest Change: Oct. 10, 2023, 7:35 a.m.