In disentangled representation learning, a model is asked to tease apart a
dataset's underlying sources of variation and represent them independently of
one another. Since the model is provided with no ground truth information about
these sources, inductive biases take a paramount role in enabling
disentanglement. In this work, we construct an inductive bias towards encoding
to and decoding from an organized latent space. Concretely, we do this by (i)
quantizing the latent space into discrete code vectors with a separate
learnable scalar codebook per dimension and (ii) applying strong model
regularization via an unusually high weight decay. Intuitively, the latent
space design forces the encoder to combinatorially construct codes from a small
number of distinct scalar values, which in turn enables the decoder to assign a
consistent meaning to each value. Regularization then serves to drive the model
towards this parsimonious strategy. We demonstrate the broad applicability of
this approach by adding it to both basic data-reconstructing (vanilla
autoencoder) and latent-reconstructing (InfoGAN) generative models. For
reliable evaluation, we also propose InfoMEC, a new set of metrics for
disentanglement that is cohesively grounded in information theory and fixes
well-established shortcomings in previous metrics. Together with
regularization, latent quantization dramatically improves the modularity and
explicitness of learned representations on a representative suite of benchmark
datasets. In particular, our quantized-latent autoencoder (QLAE) consistently
outperforms strong methods from prior work in these key disentanglement
properties without compromising data reconstruction.
