
In this paper, we propose a novel cross-modal distillation method, called
TinyCLIP, for large-scale language-image pre-trained models. The method
introduces two core techniques: affinity mimicking and weight inheritance.
Affinity mimicking explores the interaction between modalities during
distillation, enabling student models to mimic teachers' behavior of learning
cross-modal feature alignment in a visual-linguistic affinity space. Weight
inheritance transmits the pre-trained weights from the teacher models to their
student counterparts to improve distillation efficiency. Moreover, we extend
the method into a multi-stage progressive distillation to mitigate the loss of
informative weights during extreme compression. Comprehensive experiments
demonstrate the efficacy of TinyCLIP, showing that it can reduce the size of
the pre-trained CLIP ViT-B/32 by 50% while maintaining comparable zero-shot
performance. At that comparable level of performance, distillation with weight
inheritance speeds up training by 1.4-7.8× compared to training from scratch.
Moreover, our TinyCLIP ViT-8M/16, trained on YFCC-15M, achieves a zero-shot
top-1 accuracy of 41.1% on ImageNet, surpassing the original CLIP ViT-B/16 by
3.5% while using only 8.9% of its parameters. Finally, we demonstrate the good
transferability of TinyCLIP across various downstream tasks. Code and models
will be open-sourced at https://aka.ms/tinyclip.
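
As a rough illustration of the two core techniques named in the abstract, the sketch below shows one plausible reading of them in PyTorch. Everything here is an illustrative assumption rather than the paper's implementation: the function names, the temperature value, the KL-based mimicking loss, and the shape-slicing rule used for weight inheritance are not taken from the paper; the official release at https://aka.ms/tinyclip is the place to check the actual method.

import torch
import torch.nn.functional as F

def affinity_mimicking_loss(img_s, txt_s, img_t, txt_t, tau=0.07):
    # img_*/txt_*: L2-normalized image/text embeddings of shape (batch, dim),
    # from the student (suffix _s) and the frozen teacher (suffix _t).
    # Build the batch x batch visual-linguistic affinity matrices.
    logits_s = img_s @ txt_s.t() / tau   # student affinities
    logits_t = img_t @ txt_t.t() / tau   # teacher affinities

    # The student mimics the teacher's soft image-text alignment in both
    # the image-to-text and text-to-image directions.
    loss_i2t = F.kl_div(F.log_softmax(logits_s, dim=1),
                        F.softmax(logits_t, dim=1), reduction='batchmean')
    loss_t2i = F.kl_div(F.log_softmax(logits_s.t(), dim=1),
                        F.softmax(logits_t.t(), dim=1), reduction='batchmean')
    return 0.5 * (loss_i2t + loss_t2i)

def inherit_weights(student, teacher):
    # Crude stand-in for weight inheritance: copy each teacher tensor whose
    # name matches a student tensor, sliced down to the student's shape.
    s_state = student.state_dict()
    t_state = teacher.state_dict()
    for name, s_param in s_state.items():
        t_param = t_state.get(name)
        if t_param is None or t_param.dim() != s_param.dim():
            continue
        if all(td >= sd for td, sd in zip(t_param.shape, s_param.shape)):
            slices = tuple(slice(0, sd) for sd in s_param.shape)
            s_state[name] = t_param[slices].clone()
    student.load_state_dict(s_state)

In a training loop, a mimicking loss of this kind would presumably be combined with the standard contrastive objective on the ground-truth image-text pairs, with the teacher kept frozen, and the weight inheritance step would be applied once before distillation begins.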
