

Learning discriminative task-specific features simultaneously for multiple
distinct tasks is a fundamental problem in multi-task learning. Recent
state-of-the-art models consider directly decoding task-specific features from
one shared task-generic feature (e.g., feature from a backbone layer), and
utilize carefully designed decoders to produce multi-task features. However, because the input feature is fully shared and each task decoder also shares its decoding parameters across different input samples, the decoding process is static, producing less discriminative task-specific representations. To tackle
this limitation, we propose TaskExpert, a novel multi-task mixture-of-experts
model that enables learning multiple representative task-generic feature spaces
and decoding task-specific features in a dynamic manner. Specifically,
TaskExpert introduces a set of expert networks to decompose the backbone
feature into several representative task-generic features. Then, the
task-specific features are decoded by using dynamic task-specific gating
networks operating on the decomposed task-generic features. Furthermore, to
establish long-range modeling of the task-specific representations from
different layers of TaskExpert, we design a multi-task feature memory that
updates at each layer and acts as an additional feature expert for dynamic
task-specific feature decoding. Extensive experiments demonstrate that our
TaskExpert clearly outperforms previous best-performing methods on all 9
metrics of two competitive multi-task learning benchmarks for visual scene
understanding (i.e., PASCAL-Context and NYUD-v2). Code and models will be made publicly available at https://github.com/prismformore/Multi-Task-Transformer
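
The dynamic decoding described above (a set of expert networks that decompose the shared backbone feature into representative task-generic features, plus a per-task gating network that recombines them with sample-dependent weights) can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the authors' implementation: the module name TaskExpertSketch is hypothetical, gating is done per sample rather than per spatial location, and the multi-task feature memory is omitted; see the repository above for the actual code.

import torch
import torch.nn as nn

class TaskExpertSketch(nn.Module):
    # Hypothetical, simplified sketch of MoE-style task-specific decoding.
    def __init__(self, in_dim, expert_dim, num_experts, num_tasks):
        super().__init__()
        # Expert networks decompose the shared backbone feature into
        # several representative task-generic features.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, expert_dim), nn.GELU())
             for _ in range(num_experts)]
        )
        # One gating network per task yields sample-dependent weights
        # over the expert features (dynamic task-specific decoding).
        self.gates = nn.ModuleList(
            [nn.Linear(in_dim, num_experts) for _ in range(num_tasks)]
        )

    def forward(self, backbone_feat):
        # backbone_feat: (batch, in_dim) shared task-generic feature.
        expert_feats = torch.stack(
            [expert(backbone_feat) for expert in self.experts], dim=1
        )  # (batch, num_experts, expert_dim)
        task_feats = []
        for gate in self.gates:
            weights = torch.softmax(gate(backbone_feat), dim=-1)  # (batch, num_experts)
            task_feats.append((weights.unsqueeze(-1) * expert_feats).sum(dim=1))
        return task_feats  # one (batch, expert_dim) feature per task

# Example: 4 experts, 2 tasks, a batch of 8 backbone features.
model = TaskExpertSketch(in_dim=256, expert_dim=128, num_experts=4, num_tasks=2)
task_features = model(torch.randn(8, 256))
print([f.shape for f in task_features])

In the full model, the multi-task feature memory carried across layers would act as one additional expert in this weighted combination, so that task-specific representations decoded at earlier layers can inform the gating at later ones.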
