Knowledge distillation (KD) has been shown to be effective in boosting the performance of graph neural networks (GNNs), where the typical objective is to distill knowledge from a deeper teacher GNN into a shallower student GNN. However, training a satisfactory deeper GNN is often quite challenging due to the well-known over-parameterization and over-smoothing issues, leading to invalid knowledge transfer in practical applications. In this paper, we propose the first Free-direction Knowledge Distillation framework via reinforcement learning for GNNs, called FreeKD, which no longer requires a deeper, well-optimized teacher GNN. Our core idea is to collaboratively train two shallower GNNs that exchange knowledge with each other. Observing that a typical GNN model often performs better at some nodes and worse at others during training, we devise a dynamic, free-direction knowledge transfer strategy that involves two levels of actions: 1) a node-level action determines the direction of knowledge transfer between the corresponding nodes of the two networks; and then 2) a structure-level action determines which of the local structures generated by the node-level actions should be propagated. Additionally, considering that different augmented graphs can potentially capture distinct perspectives of the graph data, we propose FreeKD-Prompt, which learns undistorted and diverse augmentations based on prompt learning for exchanging varied knowledge. Furthermore, instead of confining knowledge exchange to two GNNs, we develop FreeKD++ to enable free-direction knowledge transfer among multiple GNNs. Extensive experiments on five benchmark datasets demonstrate that our approaches outperform the base GNNs by a large margin. More surprisingly, our FreeKD achieves comparable or even better performance than traditional KD algorithms that distill knowledge from a deeper and stronger teacher GNN.
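
To make the node-level action concrete, here is a minimal sketch, not the authors' implementation: FreeKD learns the transfer directions with a reinforcement-learning agent, whereas the heuristic below simply lets the network with the lower per-node loss act as the teacher for that node. The function name `node_level_transfer`, the temperature default, and the toy data are hypothetical illustrations; only plain PyTorch calls are used, and the structure-level action (deciding which local structures to propagate) is omitted.

```python
import torch
import torch.nn.functional as F

def node_level_transfer(logits_a, logits_b, labels, temperature=2.0):
    """Per-node, pick a teacher between two shallow GNNs and distill the other
    toward it. FreeKD learns this direction via RL; here we approximate it by
    comparing per-node cross-entropy losses (a simple stand-in heuristic)."""
    # Per-node supervised losses for the two networks.
    loss_a = F.cross_entropy(logits_a, labels, reduction="none")
    loss_b = F.cross_entropy(logits_b, labels, reduction="none")

    # Direction per node: True where network A currently performs better.
    a_is_teacher = loss_a < loss_b

    # Temperature-smoothed log-probabilities.
    log_p_a = F.log_softmax(logits_a / temperature, dim=-1)
    log_p_b = F.log_softmax(logits_b / temperature, dim=-1)

    # Distill B toward A on nodes where A is better, and vice versa.
    kd_b_from_a = F.kl_div(log_p_b, log_p_a.exp().detach(), reduction="none").sum(-1)
    kd_a_from_b = F.kl_div(log_p_a, log_p_b.exp().detach(), reduction="none").sum(-1)

    return torch.where(a_is_teacher, kd_b_from_a, kd_a_from_b).mean()

# Toy usage with random node logits for a 7-class task.
logits_a, logits_b = torch.randn(100, 7), torch.randn(100, 7)
labels = torch.randint(0, 7, (100,))
print(node_level_transfer(logits_a, logits_b, labels))
```

In the paper's full framework, a structure-level action would then decide, per local neighborhood, whether the node-level directions chosen above are actually propagated; that learned component is beyond the scope of this sketch.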
