Search: "XLM-R"

Showing results 1 - 5 of 10 theses containing the word XLM-R.

  1. Monolingual and Cross-Lingual Survey Response Annotation

    Master's thesis, Uppsala University / Department of Linguistics and Philology

    Author: Yahui Zhao; [2023]
    Keywords: transfer learning; zero-shot cross-lingual transfer; model-based transfer; multilingual pre-trained language models; sequence labeling; open-ended questions; democracy;

    Abstract: Multilingual natural language processing (NLP) is increasingly recognized for its potential in processing diverse types of text data, including text from social media, reviews, and technical reports. Multilingual language models like mBERT and XLM-RoBERTa (XLM-R) play a pivotal role in multilingual NLP.
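    (An illustrative code sketch of this kind of XLM-R setup follows the result list below.)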

  2. BERTie Bott’s Every Flavor Labels: A Tasty Guide to Developing a Semantic Role Labeling Model for Galician

    Master's thesis, Uppsala University / Department of Linguistics and Philology

    Author: Micaella Bruton; [2023]
    Keywords: natural language processing; NLP; Galician; low-resource language; low resource language; semantic role labeling; SRL; mBERT; XLM-R; transfer-learning; transfer learning; Spanish; verbal indexing; procesamento de linguaxe natural; NLP; Galego; lingua de recursos limitados; etiquetado de papeis semánticos; SRL; mBERT; XLM-R; aprendizaxe por transferencia; Español; indexación verbal; språkteknologiska verktyg; NLP; naturlig språkbehandling; galiciska; språk med begränsade resurser; semantisk rollmärkning; SRL; mBERT; XLM-R; överföringsinlärning; spanska; verbal indexering; verbalindexering; procesamiento del lenguaje natural; NLP; Gallego; idioma de bajos recursos; etiquetado de roles semánticos; SRL; mBERT; XLM-R; aprendizaje por transferencia; Español; indexación verbal;

    Abstract: For the vast majority of languages, Natural Language Processing (NLP) tools are either absent entirely or leave much to be desired in their final performance. One such low-resource language is Galician, despite its nearly 4 million speakers.

  3. Cross-Lingual and Genre-Supervised Parsing and Tagging for Low-Resource Spoken Data

    Master's thesis, Uppsala University / Department of Linguistics and Philology

    Author: Iliana Fosteri; [2023]
    Keywords: dependency parsing; part-of-speech tagging; low-resource languages; transcribed speech; large language models; cross-lingual learning; transfer learning; multi-task learning; Universal Dependencies;

    Abstract: Dealing with low-resource languages is challenging because of the absence of sufficient data to train machine-learning models to make predictions on these languages. One way to deal with this problem is to use data from higher-resource languages, which enables the transfer of learning from these languages to the low-resource target ones.

  4. Multilingual Transformer Models for Maltese Named Entity Recognition

    Master's thesis, Uppsala University / Department of Linguistics and Philology

    Author: Kris Farrugia; [2022]
    Keywords: low-resource; named-entity; information extraction; Maltese;

    Abstract: The recently developed state-of-the-art models for Named Entity Recognition are heavily dependent upon huge amounts of available annotated data. Consequently, it is extremely challenging for data-scarce languages to obtain significant results.

  5. Neural Dependency Parsing of Low-resource Languages: A Case Study on Marathi

    Master's thesis, Uppsala University / Department of Linguistics and Philology

    Author: Wenwen Zhang; [2022]
    Keywords: Dependency Parsing; Low-resource Languages; BERT;

    Abstract: Cross-lingual transfer has been shown to be effective for dependency parsing of some low-resource languages, but it typically requires closely related high-resource languages. Pre-trained deep language models significantly improve model performance in cross-lingual tasks.
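
As a rough illustration of the setup these theses share (zero-shot cross-lingual transfer with XLM-R for token-level labeling), the following minimal Python sketch shows how the model is commonly loaded with the Hugging Face transformers library. It is not taken from any of the works above; the model size and the tag set are illustrative assumptions.

    # Minimal sketch: load XLM-R for a token-classification (sequence labeling) task.
    # The tag set below is a hypothetical placeholder, not from any listed thesis.
    from transformers import AutoTokenizer, AutoModelForTokenClassification

    model_name = "xlm-roberta-base"   # multilingual encoder referenced in the results above
    labels = ["O", "B-ENT", "I-ENT"]  # illustrative tag set

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForTokenClassification.from_pretrained(
        model_name, num_labels=len(labels)
    )

    # Typical zero-shot model transfer: fine-tune on annotated data in a
    # high-resource source language, then run inference directly on
    # target-language text without further fine-tuning.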