Transformer-based language models (LMs) create hidden representations of
their inputs at every layer, but only use final-layer representations for
prediction. This obscures the internal decision-making process of the model and
the utility of its intermediate representations. One way to elucidate this is
to cast the hidden representations as final representations, bypassing the
transformer computation in between. In this work, we suggest a simple method
for such casting using linear transformations. We show that our approach
produces more accurate approximations than the prevailing practice of
inspecting hidden representations from all layers in the space of the final
layer. Moreover, in the context of language modeling, our method allows
"peeking" into early layer representations of GPT-2 and BERT, showing that
often LMs already predict the final output in early layers. We then demonstrate
the practicality of our method to recent early exit strategies, showing that
when aiming, for example, at retention of 95% accuracy, our approach saves
additional 7.9% layers for GPT-2 and 5.4% layers for BERT, on top of the
savings of the original approach. Last, we extend our method to linearly
approximate sub-modules, finding that attention is the most tolerant of this
change.
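
As a rough illustration of the casting idea (a minimal sketch, not the authors' released implementation), the code below fits a linear map by least squares from layer-l hidden states of GPT-2 to its final-layer hidden states, then decodes the mapped early representation with the model's existing LM head. The Hugging Face model, the choice of layer 6, the tiny fitting corpus, and the least-squares fitting procedure are all illustrative assumptions.

```python
# Sketch: cast an early-layer hidden state into final-layer space with a
# fitted linear map, then decode it with the model's own LM head.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tok = GPT2TokenizerFast.from_pretrained("gpt2")

layer = 6  # hypothetical early layer to "peek" into

# 1) Collect paired (early, final) hidden states on some fitting text.
#    Toy corpus for illustration; a real fit would use many more tokens.
texts = ["The capital of France is", "Transformers process tokens in"]
H_l, H_L = [], []
with torch.no_grad():
    for t in texts:
        ids = tok(t, return_tensors="pt").input_ids
        hs = model(ids, output_hidden_states=True).hidden_states
        H_l.append(hs[layer].squeeze(0))  # (seq, hidden) at the early layer
        H_L.append(hs[-1].squeeze(0))     # (seq, hidden) at the final layer
H_l, H_L = torch.cat(H_l), torch.cat(H_L)

# 2) Least-squares fit of a linear map A with H_l @ A ~= H_L.
A = torch.linalg.lstsq(H_l, H_L).solution  # (hidden, hidden)

# 3) "Peek": cast the early representation of the last token to
#    final-layer space and decode it with the existing LM head.
with torch.no_grad():
    ids = tok("The capital of France is", return_tensors="pt").input_ids
    hs = model(ids, output_hidden_states=True).hidden_states
    h_early = hs[layer][0, -1]
    logits = model.lm_head(h_early @ A)
    print(tok.decode(logits.argmax().item()))
```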
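A hedged sketch of how such casted predictions could drive early exiting follows: decode the shortcut prediction at each mapped layer and stop once it is confident. The per-layer map dictionary, the softmax-confidence threshold, and the exit rule are assumptions for illustration, not the paper's exact criterion; for clarity this version computes all hidden states up front, whereas a real early-exit implementation would run blocks incrementally to actually save compute.

```python
import torch

def early_exit_logits(model, ids, maps, threshold=0.9):
    """maps: {layer_index: (hidden, hidden) linear map to final-layer space}."""
    with torch.no_grad():
        hs = model(ids, output_hidden_states=True).hidden_states
        for layer, A in sorted(maps.items()):
            # Cast the early representation and decode it.
            logits = model.lm_head(hs[layer][0, -1] @ A)
            probs = torch.softmax(logits, dim=-1)
            if probs.max() >= threshold:  # confident enough: exit here
                return logits, layer
        # No early layer was confident: fall back to the final layer.
        return model.lm_head(hs[-1][0, -1]), len(hs) - 1
```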