Search: "multilingual pre-trained language models"
Showing results 1 - 5 of 16 theses containing the words multilingual pre-trained language models.
1. Ensuring Brand Safety by Using Contextual Text Features: A Study of Text Classification with BERT
Master's thesis, Uppsala universitet / Institutionen för lingvistik och filologi. Abstract: When advertisements are placed on web pages, the context in which they appear matters. For example, a manufacturer of kitchen knives may not want its advertisement to appear in a news article about a knife-wielding murderer.
2. Monolingual and Cross-Lingual Survey Response Annotation
Master's thesis, Uppsala universitet / Institutionen för lingvistik och filologi. Abstract: Multilingual natural language processing (NLP) is increasingly recognized for its potential to process diverse text types, including text from social media, reviews, and technical reports. Multilingual language models such as mBERT and XLM-RoBERTa (XLM-R) play a pivotal role in multilingual NLP.
3. Exploring Cross-Lingual Transfer Learning for Swedish Named Entity Recognition : Fine-tuning of English and Multilingual Pre-trained Models
Bachelor's thesis, KTH / Skolan för elektroteknik och datavetenskap (EECS). Abstract: Named Entity Recognition (NER) is a critical task in Natural Language Processing (NLP), and recent advancements in language model pre-training have significantly improved its performance. However, this improvement does not extend to all languages, as smaller languages often lack the large pre-training datasets or the computational budget required.
4. Can Wizards be Polyglots: Towards a Multilingual Knowledge-grounded Dialogue System
Master's thesis, Uppsala universitet / Institutionen för lingvistik och filologi. Abstract: Research on open-domain, knowledge-grounded dialogue systems has been advancing rapidly thanks to the paradigm shift introduced by large language models (LLMs). While these strides have improved dialogue system performance, the scope remains mostly monolingual and English-centric.
5. Task-agnostic knowledge distillation of mBERT to Swedish
Master's thesis, KTH / Skolan för elektroteknik och datavetenskap (EECS). Abstract: Large transformer models have shown great performance on multiple natural language processing tasks. However, slow inference, a strong dependence on powerful hardware, and high energy consumption limit their availability.
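The distillation setup in entry 5 rests on a standard soft-label objective: a smaller student model is trained to match the teacher's temperature-softened output distribution. A minimal sketch of that loss, assuming the common KL-divergence formulation with a T² scaling factor (function names are illustrative, not taken from the thesis):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of raw logits."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2.

    A higher temperature flattens both distributions, exposing the
    teacher's 'dark knowledge' about near-miss classes.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's soft predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return kl * temperature * temperature

# The loss vanishes when the student already matches the teacher
# and grows as the two output distributions diverge.
same = distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
diff = distillation_loss([1.0, 2.0, 3.0], [3.0, 2.0, 1.0])
```

In practice this term is computed over transformer output logits and often combined with a hard-label cross-entropy term; the sketch above only illustrates the soft-target component.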