Search: "Cross Lingual"

Showing results 1 - 5 of 37 theses containing the words Cross Lingual.

  1. Monolingual and Cross-Lingual Survey Response Annotation

    Master's thesis, Uppsala universitet/Institutionen för lingvistik och filologi

    Author: Yahui Zhao; [2023]
    Keywords: transfer learning; zero-shot cross-lingual transfer; model-based transfer; multilingual pre-trained language models; sequence labeling; open-ended questions; democracy;

    Abstract: Multilingual natural language processing (NLP) is increasingly recognized for its potential in processing diverse text-type data, including text from social media, reviews, and technical reports. Multilingual language models like mBERT and XLM-RoBERTa (XLM-R) play a pivotal role in multilingual NLP.

  2. Cross-Lingual and Genre-Supervised Parsing and Tagging for Low-Resource Spoken Data

    Master's thesis, Uppsala universitet/Institutionen för lingvistik och filologi

    Author: Iliana Fosteri; [2023]
    Keywords: dependency parsing; part-of-speech tagging; low-resource languages; transcribed speech; large language models; cross-lingual learning; transfer learning; multi-task learning; Universal Dependencies;

    Abstract: Dealing with low-resource languages is a challenging task because of the absence of sufficient data to train machine-learning models to make predictions on these languages. One way to deal with this problem is to use data from higher-resource languages, which enables the transfer of learning from those languages to the low-resource target ones.

  3. Exploring Cross-Lingual Transfer Learning for Swedish Named Entity Recognition : Fine-tuning of English and Multilingual Pre-trained Models

    Bachelor's thesis, KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Authors: Daniel Lai Wikström; Axel Sparr; [2023]
    Keywords: NER; named entity recognition; cross-lingual transfer; Transformer; BERT; deep learning;

    Abstract: Named Entity Recognition (NER) is a critical task in Natural Language Processing (NLP), and recent advancements in language model pre-training have significantly improved its performance. However, this improvement is not universally applicable, due to a lack of large pre-training datasets or computational budget for smaller languages.

  4. Multilingual Transformer Models for Maltese Named Entity Recognition

    Master's thesis, Uppsala universitet/Institutionen för lingvistik och filologi

    Author: Kris Farrugia; [2022]
    Keywords: low-resource; named-entity; information extraction; Maltese;

    Abstract: The recently developed state-of-the-art models for Named Entity Recognition are heavily dependent upon huge amounts of available annotated data. Consequently, it is extremely challenging for data-scarce languages to obtain significant results.

  5. Distilling Multilingual Transformer Models for Efficient Document Retrieval : Distilling multi-Transformer models with distillation losses involving multi-Transformer interactions

    Master's thesis, KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author: Xuecong Liu; [2022]
    Keywords: dense passage retrieval; knowledge distillation; multilingual Transformer; document retrieval; open domain question answering;

    Abstract: Open Domain Question Answering (OpenQA) is the task of automatically finding answers to a query from a given set of documents. Language-agnostic OpenQA is an increasingly important research area in the globalised world, where the answers can be in a different language from the question.
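    As background to the distillation losses this thesis title refers to, the classic soft-label objective (KL divergence between temperature-softened teacher and student distributions, as in Hinton et al.) can be sketched as follows. This is a generic illustration in plain Python, not the thesis's specific multi-Transformer loss:

    ```python
    import math

    def softmax(logits, temperature=1.0):
        """Temperature-scaled, numerically stable softmax over a list of logits."""
        scaled = [x / temperature for x in logits]
        m = max(scaled)
        exps = [math.exp(x - m) for x in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    def distillation_loss(teacher_logits, student_logits, temperature=2.0):
        """KL(teacher || student) on softened distributions, scaled by T^2
        so gradients keep a comparable magnitude across temperatures."""
        p = softmax(teacher_logits, temperature)  # teacher soft targets
        q = softmax(student_logits, temperature)  # student predictions
        kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
        return temperature ** 2 * kl

    # Identical logits give (near) zero loss; diverging logits give a positive loss.
    print(distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
    print(distillation_loss([0.0, 5.0], [5.0, 0.0]))
    ```

    In practice the teacher is a large multilingual Transformer and the student a smaller model, with this term typically mixed with a standard task loss on hard labels.
    
    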