Unsupervised domain adaptation object detection (UDAOD) research on the Detection Transformer (DETR) mainly focuses on feature alignment, and existing methods fall into two categories, each with unresolved issues. One-stage feature alignment methods easily lead to performance fluctuation and training stagnation. Two-stage feature alignment methods based on the mean teacher comprise a pretraining stage followed by a self-training stage, and face problems in obtaining a reliable pretrained model and achieving consistent performance gains. The methods above have not yet explored how to utilize a third, related domain, such as a target-like domain, to assist adaptation. To address these issues, we propose a two-stage framework named MTM, i.e., Mean Teacher-DETR with Masked Feature Alignment. In the pretraining stage, we utilize labeled target-like images produced by image style transfer to avoid performance fluctuation. In the self-training stage, we leverage unlabeled target images via pseudo labels based on the mean teacher and propose a module called Object Queries Knowledge Transfer (OQKT) to ensure consistent performance gains for the student model. Most importantly, we propose masked feature alignment methods, including Masked Domain Query-based Feature Alignment (MDQFA) and Masked Token-wise Feature Alignment (MTWFA), to alleviate domain shift in a more robust way. These not only prevent training stagnation and yield a robust pretrained model in the pretraining stage, but also enhance the model's target performance in the self-training stage. Experiments on three challenging scenarios and a theoretical analysis verify the effectiveness of MTM.
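The self-training stage builds on the standard mean-teacher scheme, in which the teacher's weights are an exponential moving average (EMA) of the student's. A minimal sketch of that update, with plain Python floats standing in for model parameters (the function name and momentum value are illustrative, not taken from the paper's code):

```python
def ema_update(teacher, student, momentum=0.999):
    """Return new teacher params: momentum * teacher + (1 - momentum) * student."""
    return [momentum * t + (1.0 - momentum) * s for t, s in zip(teacher, student)]

# Toy example: the teacher drifts toward the student at rate (1 - momentum).
teacher = [0.0, 1.0]
student = [1.0, 1.0]
teacher = ema_update(teacher, student, momentum=0.5)
# teacher is now [0.5, 1.0]
```

Because the teacher averages over many student checkpoints, its pseudo labels on unlabeled target images tend to be more stable than the student's own predictions.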
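The masked alignment idea can be sketched at the token level: randomly dropping a fraction of feature tokens before they reach a token-wise domain discriminator means alignment cannot depend on any fixed token, which is one plausible reading of why the masking is more robust. Everything below (function name, mask ratio, list-of-tokens representation) is an illustrative assumption, not the paper's implementation:

```python
import random

def mask_tokens(tokens, mask_ratio=0.3, rng=None):
    """Keep each token with probability (1 - mask_ratio); drop the rest.

    The surviving tokens would then be fed to a token-wise domain
    discriminator for adversarial alignment (not shown here).
    """
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    return [tok for tok in tokens if rng.random() >= mask_ratio]
```

With `mask_ratio=0.0` all tokens survive and this reduces to ordinary token-wise alignment; larger ratios force the discriminator to align from ever sparser views of the feature map.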

Latest Change: Dec. 13, 2023, 7:31 a.m.