

Creating pose-driven human avatars is about modeling the mapping from the
low-frequency driving pose to high-frequency dynamic human appearances, so an
effective pose encoding method that can encode high-fidelity human details is
essential to human avatar modeling. To this end, we present PoseVocab, a novel
pose encoding method that encourages the network to discover the optimal pose
embeddings for learning the dynamic human appearance. Given multi-view RGB
videos of a character, PoseVocab constructs key poses and latent embeddings
based on the training poses. To achieve pose generalization and temporal
consistency, we sample key rotations in $so(3)$ of each joint rather than the
global pose vectors, and assign a pose embedding to each sampled key rotation.
These joint-structured pose embeddings not only encode the dynamic appearances
under different key poses, but also factorize the global pose embedding into
joint-structured ones to better learn the appearance variation related to the
motion of each joint. To improve the representation ability of the pose
embedding while maintaining memory efficiency, we introduce feature lines, a
compact yet effective 3D representation, to model more fine-grained details of
human appearances. Furthermore, given a query pose and a spatial position, a
hierarchical query strategy is introduced to interpolate pose embeddings and
acquire the conditional pose feature for dynamic human synthesis. Overall,
PoseVocab effectively encodes the dynamic details of human appearance and
enables realistic and generalized animation under novel poses. Experiments show
that our method outperforms other state-of-the-art baselines both qualitatively
and quantitatively in terms of synthesis quality. Code is available at
https://github.com/lizhe00/PoseVocab.
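The per-joint key-rotation lookup described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not the authors' implementation: the names `JointPoseVocab`, `query`, and `pose_feature` are invented here, and geodesic distance in $so(3)$ is approximated by plain Euclidean distance between rotation vectors for simplicity.

```python
import numpy as np

def rotvec_distance(r1, r2):
    """Distance between two axis-angle rotations.
    Sketch assumption: Euclidean distance in so(3), a rough proxy
    for the true geodesic distance on the rotation group."""
    return np.linalg.norm(r1 - r2)

class JointPoseVocab:
    """Per-joint vocabulary: K sampled key rotations, each paired with a
    (normally learnable) embedding vector."""
    def __init__(self, key_rotations, embeddings):
        self.key_rotations = np.asarray(key_rotations, dtype=float)  # (K, 3)
        self.embeddings = np.asarray(embeddings, dtype=float)        # (K, D)

    def query(self, rotvec, k=2):
        # Interpolate the embeddings of the k nearest key rotations,
        # weighted by inverse distance to the query rotation.
        d = np.array([rotvec_distance(rotvec, r) for r in self.key_rotations])
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + 1e-8)
        w /= w.sum()
        return (w[:, None] * self.embeddings[idx]).sum(axis=0)

def pose_feature(joint_vocabs, pose_rotvecs):
    """Factorized pose encoding: concatenate the interpolated per-joint
    embeddings into a single conditional pose feature."""
    return np.concatenate(
        [v.query(r) for v, r in zip(joint_vocabs, pose_rotvecs)]
    )
```

Because each joint owns its own vocabulary, a query pose that mixes rotations seen on different training poses still maps to a meaningful blend of embeddings, which is the pose-generalization property the abstract emphasizes.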

Latest Change: May 16, 2023, 7:32 a.m.