Six fine-tuned Transformer-based models for the downstream task of extractive question answering in Slovenian. The fine-tuned models are: bert-base-cased-squad2-SLO, bert-base-multilingual-cased-squad2-SLO, electra-base-squad2-SLO, roberta-base-squad2-SLO, sloberta-squad2-SLO and xlm-roberta-base-squad2-SLO. The models were trained and evaluated on the Slovene translation of the SQuAD 2.0 dataset (https://www.clarin.si/repository/xmlui/handle/11356/1756).
The models achieve the following exact match (EM) and token-level F1 scores:
sloberta-squad2-SLO: EM=67.1, F1=73.56
xlm-roberta-base-squad2-SLO: EM=62.52, F1=69.51
bert-base-multilingual-cased-squad2-SLO: EM=61.37, F1=68.1
roberta-base-squad2-SLO: EM=58.23, F1=64.62
bert-base-cased-squad2-SLO: EM=55.12, F1=60.52
electra-base-squad2-SLO: EM=53.69, F1=60.85
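A minimal usage sketch with the Hugging Face transformers library. It assumes the model weights are available locally or on the Hugging Face Hub; the MODEL_PATH value below is a placeholder (adjust it to the actual location or Hub identifier of the chosen model), and the question/context pair is purely illustrative.

```python
# Minimal sketch: extractive QA in Slovenian with one of the fine-tuned models.
# MODEL_PATH is a placeholder; point it at the downloaded weights or a Hub ID.
from transformers import pipeline

MODEL_PATH = "sloberta-squad2-SLO"  # assumed path; adjust as needed

qa = pipeline("question-answering", model=MODEL_PATH, tokenizer=MODEL_PATH)

# Illustrative example: the model extracts the answer span from the context.
result = qa(
    question="Kdo je napisal Krst pri Savici?",
    context="Krst pri Savici je pesnitev, ki jo je leta 1835 napisal France Prešeren.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'France Prešeren'}
```

Because the models were fine-tuned on SQuAD 2.0-style data, they also handle unanswerable questions; with `handle_impossible_answer=True` the pipeline can return an empty answer when the context does not contain one.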