Author manuscript; available in PMC: 2021 Nov 5.
Published in final edited form as: Proc AAAI Conf Artif Intell. 2021 May 18;35(16):14138–14148.

Table 2:

Results on natural language understanding tasks. We report F1 score for MRPC and QQP, and accuracy for the remaining tasks. Our Nyströmformer performs competitively with BERT-base.

Model          SST-2  MRPC  QNLI  QQP   MNLI (m/mm)  IMDB
BERT-base      90.0   88.4  90.3  87.3  82.4/82.4    93.3
Nyströmformer  91.4   88.1  88.7  86.3  80.9/82.2    93.2
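For reference, a minimal sketch of the two metrics reported in the table, accuracy and binary F1, written in plain Python (the toy labels below are illustrative, not from any of the benchmark datasets):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the gold labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_binary(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# Toy example: 4 sentence pairs, binary labels.
y_true = [1, 0, 1, 1]
y_pred = [1, 0, 0, 1]
print(accuracy(y_true, y_pred))   # 0.75
print(f1_binary(y_true, y_pred))  # 0.8
```

F1 is the standard choice for MRPC and QQP because their positive and negative classes are imbalanced, where raw accuracy can be misleading.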