5 datasets found

Keywords: RoBERTa

  • Slovenian RoBERTa contextual embeddings model: SloBERTa 2.0

    The monolingual Slovene RoBERTa (Robustly Optimized BERT Pretraining Approach, where BERT is Bidirectional Encoder Representations from Transformers) model is a state-of-the-art model representing words/tokens as...
  • Slovenian RoBERTa contextual embeddings model: SloBERTa 1.0

    The monolingual Slovene RoBERTa (Robustly Optimized BERT Pretraining Approach, where BERT is Bidirectional Encoder Representations from Transformers) model is a state-of-the-art model representing words/tokens as...
  • PyTorch model for Slovenian Named Entity Recognition SloNER 1.0

    SloNER is a model for Slovenian Named Entity Recognition. It is a PyTorch neural network model, intended for use with the HuggingFace transformers library (see the loading sketch after this list)...
  • LitLat BERT

    Trilingual BERT-like (Bidirectional Encoder Representations from Transformers) model, trained on Lithuanian, Latvian, and English data. A state-of-the-art tool representing...
  • RobeCzech Base

    RobeCzech is a monolingual RoBERTa language representation model trained on Czech data. RoBERTa is a robustly optimized Transformer-based pretraining approach. We show that...
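
The SloNER entry above notes that the model is intended for use with the HuggingFace transformers library. A minimal loading sketch under that assumption follows; the model path is a placeholder for the checkpoint downloaded from this registry, not a confirmed identifier, and the example sentence is illustrative only.

    # Minimal sketch: Slovenian NER with a registry checkpoint via the
    # HuggingFace transformers library. "path/to/sloner" is a PLACEHOLDER
    # for the unpacked model directory (or a Hub repository ID), not a
    # confirmed name.
    from transformers import pipeline

    ner = pipeline(
        "token-classification",
        model="path/to/sloner",          # placeholder checkpoint location
        aggregation_strategy="simple",   # merge sub-word pieces into entity spans
    )

    print(ner("Janez Novak je obiskal Ljubljano."))  # illustrative Slovene sentence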
You can also access this registry using the API (see API Docs).
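
The wording of that footer matches CKAN's standard dataset-search page, so the API is presumably CKAN's action API. The sketch below rests on that assumption; the base URL is a placeholder to be replaced with the host given in the API Docs.

    # Sketch: reproduce the "RoBERTa" keyword search over a CKAN-style
    # action API. BASE_URL is a PLACEHOLDER; take the real host from the
    # registry's API Docs.
    import requests

    BASE_URL = "https://example-registry.org"  # placeholder host

    resp = requests.get(
        f"{BASE_URL}/api/3/action/package_search",
        params={"q": "RoBERTa"},
        timeout=30,
    )
    resp.raise_for_status()
    result = resp.json()["result"]  # CKAN wraps payloads in a "result" object

    print(result["count"], "datasets found")
    for pkg in result["results"]:
        print("-", pkg["title"])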