Outlier Dimensions that Disrupt Transformers are Driven by Frequency
Research output: Contribution to conference › Paper › Research › peer-review
Standard
Outlier Dimensions that Disrupt Transformers are Driven by Frequency. / Puccetti, Giovanni; Rogers, Anna; Drozd, Aleksandr; Dell'Orletta, Felice.
2022. 1286-1304 Paper presented at 2022 Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates.
RIS
TY - CONF
T1 - Outlier Dimensions that Disrupt Transformers are Driven by Frequency
AU - Puccetti, Giovanni
AU - Rogers, Anna
AU - Drozd, Aleksandr
AU - Dell'Orletta, Felice
N1 - Publisher Copyright: © 2022 Association for Computational Linguistics.
PY - 2022
Y1 - 2022
AB - While Transformer-based language models are generally very robust to pruning, there is the recently discovered outlier phenomenon: disabling only 48 out of 110M parameters in BERT-base drops its performance by nearly 30% on MNLI. We replicate the original evidence for the outlier phenomenon and we link it to the geometry of the embedding space. We find that in both BERT and RoBERTa the magnitude of hidden state coefficients corresponding to outlier dimensions correlates with the frequency of encoded tokens in pre-training data, and it also contributes to the “vertical” self-attention pattern enabling the model to focus on the special tokens. This explains the drop in performance from disabling the outliers, and it suggests that to decrease anisotropicity in future models we need pre-training schemas that would better take into account the skewed token distributions.
UR - http://www.scopus.com/inward/record.url?scp=85144872662&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85144872662
SP - 1286
EP - 1304
T2 - 2022 Findings of the Association for Computational Linguistics: EMNLP 2022
Y2 - 7 December 2022 through 11 December 2022
ER -
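For readers who want to probe the effect described in the abstract, the following is a minimal, hypothetical sketch (not the authors' code): it zeroes a single hidden-state dimension in every BERT layer through PyTorch forward hooks, which is one way to "disable" an outlier dimension. The dimension index and checkpoint name are illustrative assumptions; observing the reported ~30% MNLI drop would additionally require an MNLI-fine-tuned checkpoint and the specific outlier dimensions identified in the paper.

# Sketch only: zero out one hidden dimension in every BERT encoder layer.
# OUTLIER_DIM and MODEL_NAME are illustrative assumptions, not values from the paper.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "bert-base-uncased"   # assumption: swap in an MNLI-fine-tuned checkpoint to measure the drop
OUTLIER_DIM = 308                  # hypothetical outlier dimension index

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def zero_outlier(module, inputs, output):
    # Each BertLayer returns a tuple whose first element is the hidden states;
    # set the chosen dimension to zero and pass the modified output onward.
    hidden = output[0]
    hidden[..., OUTLIER_DIM] = 0.0
    return (hidden,) + output[1:]

# Register the hook on every encoder layer so the dimension is disabled throughout the model.
hooks = [layer.register_forward_hook(zero_outlier) for layer in model.bert.encoder.layer]

batch = tokenizer("The movie was great.", return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
print(logits)

# Remove the hooks to restore the unmodified model.
for h in hooks:
    h.remove()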