Outlier Dimensions that Disrupt Transformers are Driven by Frequency

Research output: Contribution to conference › Paper › Research › peer-review

Standard

Outlier Dimensions that Disrupt Transformers are Driven by Frequency. / Puccetti, Giovanni; Rogers, Anna; Drozd, Aleksandr; Dell'Orletta, Felice.

2022. 1286-1304. Paper presented at 2022 Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates.

Research output: Contribution to conference › Paper › Research › peer-review

Harvard

Puccetti, G, Rogers, A, Drozd, A & Dell'Orletta, F 2022, 'Outlier Dimensions that Disrupt Transformers are Driven by Frequency', Paper presented at 2022 Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, 07/12/2022 - 11/12/2022, pp. 1286-1304.

APA

Puccetti, G., Rogers, A., Drozd, A., & Dell'Orletta, F. (2022). Outlier Dimensions that Disrupt Transformers are Driven by Frequency. 1286-1304. Paper presented at 2022 Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates.

Vancouver

Puccetti G, Rogers A, Drozd A, Dell'Orletta F. Outlier Dimensions that Disrupt Transformers are Driven by Frequency. 2022. Paper presented at 2022 Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates.

Author

Puccetti, Giovanni ; Rogers, Anna ; Drozd, Aleksandr ; Dell'Orletta, Felice. / Outlier Dimensions that Disrupt Transformers are Driven by Frequency. Paper presented at 2022 Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates. 19 p.

Bibtex

@conference{441127ed55674b84b20fab6a1b51f416,
title = "Outlier Dimensions that Disrupt Transformers are Driven by Frequency",
abstract = "While Transformer-based language models are generally very robust to pruning, there is the recently discovered outlier phenomenon: disabling only 48 out of 110M parameters in BERT-base drops its performance by nearly 30% on MNLI. We replicate the original evidence for the outlier phenomenon and we link it to the geometry of the embedding space. We find that in both BERT and RoBERTa the magnitude of hidden state coefficients corresponding to outlier dimensions correlates with the frequency of encoded tokens in pre-training data, and it also contributes to the “vertical” self-attention pattern enabling the model to focus on the special tokens. This explains the drop in performance from disabling the outliers, and it suggests that to decrease anisotropicity in future models we need pre-training schemas that would better take into account the skewed token distributions.",
author = "Giovanni Puccetti and Anna Rogers and Aleksandr Drozd and Felice Dell'Orletta",
note = "Publisher Copyright: {\textcopyright} 2022 Association for Computational Linguistics.; 2022 Findings of the Association for Computational Linguistics: EMNLP 2022 ; Conference date: 07-12-2022 Through 11-12-2022",
year = "2022",
language = "English",
pages = "1286--1304",

}

RIS

TY - CONF

T1 - Outlier Dimensions that Disrupt Transformers are Driven by Frequency

AU - Puccetti, Giovanni

AU - Rogers, Anna

AU - Drozd, Aleksandr

AU - Dell'Orletta, Felice

N1 - Publisher Copyright: © 2022 Association for Computational Linguistics.

PY - 2022

Y1 - 2022

N2 - While Transformer-based language models are generally very robust to pruning, there is the recently discovered outlier phenomenon: disabling only 48 out of 110M parameters in BERT-base drops its performance by nearly 30% on MNLI. We replicate the original evidence for the outlier phenomenon and we link it to the geometry of the embedding space. We find that in both BERT and RoBERTa the magnitude of hidden state coefficients corresponding to outlier dimensions correlates with the frequency of encoded tokens in pre-training data, and it also contributes to the “vertical” self-attention pattern enabling the model to focus on the special tokens. This explains the drop in performance from disabling the outliers, and it suggests that to decrease anisotropicity in future models we need pre-training schemas that would better take into account the skewed token distributions.

AB - While Transformer-based language models are generally very robust to pruning, there is the recently discovered outlier phenomenon: disabling only 48 out of 110M parameters in BERT-base drops its performance by nearly 30% on MNLI. We replicate the original evidence for the outlier phenomenon and we link it to the geometry of the embedding space. We find that in both BERT and RoBERTa the magnitude of hidden state coefficients corresponding to outlier dimensions correlates with the frequency of encoded tokens in pre-training data, and it also contributes to the “vertical” self-attention pattern enabling the model to focus on the special tokens. This explains the drop in performance from disabling the outliers, and it suggests that to decrease anisotropicity in future models we need pre-training schemas that would better take into account the skewed token distributions.

UR - http://www.scopus.com/inward/record.url?scp=85144872662&partnerID=8YFLogxK

M3 - Paper

AN - SCOPUS:85144872662

SP - 1286

EP - 1304

T2 - 2022 Findings of the Association for Computational Linguistics: EMNLP 2022

Y2 - 7 December 2022 through 11 December 2022

ER -

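Code sketch

The abstract describes two measurements: disabling the hidden-state coefficients at a single outlier dimension, and correlating the magnitude of that dimension with token frequency in the pre-training data. The sketch below is not the authors' code; it is a minimal illustration using a Hugging Face BERT checkpoint. The model name, the outlier index OUTLIER_DIM, and the toy frequency counts are illustrative assumptions (the paper identifies outlier dimensions empirically and uses counts from the actual pre-training corpus).

import torch
from collections import Counter
from transformers import AutoTokenizer, AutoModel

MODEL = "bert-base-uncased"   # assumption: any BERT/RoBERTa checkpoint
OUTLIER_DIM = 308             # assumption: an illustrative outlier index

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL).eval()

sentences = [
    "The cat sat on the mat.",
    "Transformers are surprisingly robust to pruning.",
]

# Crude stand-in for pre-training frequency: token counts over the sample itself.
freq = Counter()
for s in sentences:
    freq.update(tok.tokenize(s))

magnitudes, frequencies = [], []
with torch.no_grad():
    for s in sentences:
        enc = tok(s, return_tensors="pt")
        hidden = model(**enc).last_hidden_state[0]   # (seq_len, hidden_size)
        for i, tid in enumerate(enc["input_ids"][0]):
            token = tok.convert_ids_to_tokens(tid.item())
            magnitudes.append(hidden[i, OUTLIER_DIM].abs().item())
            frequencies.append(float(freq.get(token, 0)))

        # (1) "Disabling" the outlier dimension: zero its coefficient in the
        # hidden states. In a full experiment this would be applied throughout
        # the model before re-measuring downstream (e.g. MNLI) accuracy.
        ablated = hidden.clone()
        ablated[:, OUTLIER_DIM] = 0.0

# (2) Correlation between outlier-dimension magnitude and token frequency.
mag = torch.tensor(magnitudes)
frq = torch.tensor(frequencies)
corr = torch.corrcoef(torch.stack([mag, frq]))[0, 1]
print(f"Pearson r between |h[:, {OUTLIER_DIM}]| and token frequency: {corr.item():.3f}")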