Link prediction is a common task on graph-structured data that has seen
applications in a variety of domains. Classically, hand-crafted heuristics were
used for this task. Heuristic measures are chosen such that they correlate well
with the underlying factors related to link formation. In recent years, a new
class of methods has emerged that combines the advantages of message-passing
neural networks (MPNNs) and heuristic methods. These methods make
predictions by using the output of an MPNN in conjunction with a "pairwise
encoding" that captures the relationship between nodes in the candidate link.
They have been shown to achieve strong performance on numerous datasets.
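To make this MPNN-plus-pairwise-encoding setup concrete, the sketch below scores a candidate link by combining the Hadamard product of two MPNN node embeddings with a fixed heuristic pairwise encoding (here, a common-neighbor count). This is a minimal illustration under assumed names (`HeuristicPairwiseScorer`, `common_neighbors`) and is not the exact architecture of any particular method in this family.

```python
import torch
import torch.nn as nn

class HeuristicPairwiseScorer(nn.Module):
    """Minimal sketch: score a candidate link (u, v) from MPNN node
    embeddings plus a fixed pairwise encoding (common-neighbor count).
    Illustrative only; not the architecture of a specific paper."""

    def __init__(self, embed_dim: int, hidden_dim: int = 64):
        super().__init__()
        # +1 for the scalar pairwise encoding appended to the pair features
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim + 1, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, h_u, h_v, pairwise_enc):
        # Hadamard product is a common way to combine the MPNN outputs
        # of the two endpoint nodes into a single pair representation
        pair_repr = h_u * h_v
        x = torch.cat([pair_repr, pairwise_enc.unsqueeze(-1)], dim=-1)
        return self.mlp(x).squeeze(-1)  # unnormalized link score


def common_neighbors(adj: torch.Tensor, u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Classic heuristic pairwise encoding: number of shared neighbors
    of u and v, computed from a dense adjacency matrix."""
    return (adj[u] * adj[v]).sum(dim=-1)
```

Because the pairwise encoding is a fixed heuristic, every candidate link is judged by the same underlying factor, which is exactly the inductive bias discussed next.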
However, current pairwise encodings often contain a strong inductive bias,
using the same underlying factors to classify all links. This limits the
ability of existing methods to learn how to properly classify a variety of
different links that may form from different factors. To address this
limitation, we propose a new method, LPFormer, which adaptively learns the
pairwise encoding for each link. LPFormer models the factors underlying link
formation via an attention module that learns the pairwise encoding between two
nodes by attending to multiple factors integral to link prediction. Extensive
experiments demonstrate that LPFormer can achieve SOTA performance on numerous
datasets while maintaining efficiency.
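The sketch below illustrates the general idea of an adaptively learned pairwise encoding: a query built from the two endpoint embeddings attends over context nodes around the candidate pair, so different links can weight different factors. The module name `AdaptivePairwiseEncoder` and the choice of context nodes are assumptions for illustration; this is a generic attention-pooling sketch, not the exact LPFormer formulation.

```python
import torch
import torch.nn as nn

class AdaptivePairwiseEncoder(nn.Module):
    """Sketch of an attention module that produces a per-link pairwise
    encoding by attending over context nodes (e.g., nodes near the
    candidate pair). Generic illustration, not the LPFormer architecture."""

    def __init__(self, embed_dim: int):
        super().__init__()
        self.query_proj = nn.Linear(2 * embed_dim, embed_dim)
        self.key_proj = nn.Linear(embed_dim, embed_dim)
        self.value_proj = nn.Linear(embed_dim, embed_dim)
        self.scale = embed_dim ** 0.5

    def forward(self, h_u, h_v, h_context):
        # h_u, h_v: (d,) embeddings of the candidate pair
        # h_context: (n_ctx, d) embeddings of context nodes for this pair
        q = self.query_proj(torch.cat([h_u, h_v], dim=-1))  # (d,)
        k = self.key_proj(h_context)                         # (n_ctx, d)
        v = self.value_proj(h_context)                       # (n_ctx, d)
        attn = torch.softmax(k @ q / self.scale, dim=0)      # (n_ctx,)
        # The attention-weighted aggregation serves as the learned,
        # link-specific pairwise encoding
        return attn @ v                                       # (d,)
```

Because the attention weights are conditioned on the candidate pair, links that form from different factors can receive different pairwise encodings rather than sharing one hand-crafted heuristic.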