Search: "LaBSE"
Found 5 theses containing the word LaBSE.
1. Improving BERTScore for Machine Translation Evaluation Through Contrastive Learning
Master's thesis, Uppsala University / Department of Linguistics and Philology. Abstract: Since the advent of automatic evaluation, tasks within Natural Language Processing (NLP), including Machine Translation, have been able to make better use of both time and labor resources. More recently, multilingual pre-trained models (MLMs) have expanded many languages' capacity to participate in NLP research.
2. Optimizing a Chatbot for the Swedish Language
Bachelor's thesis, KTH / Health Informatics and Logistics. Abstract: Chatbot developers at Softronic currently use the Rasa framework and its default components to process user input. This is problematic because the default components are not optimized for the Swedish language.
3. Comparing Text Classification Libraries in Scala and Python : A comparison of precision and recall
Bachelor's thesis, KTH / School of Electrical Engineering and Computer Science (EECS). Abstract: In today's internet era, more text than ever is being uploaded online. The text comes in many forms, such as social media posts and business reviews. For various reasons, there is interest in analyzing this uploaded text. For instance, an airline could ask its customers to review the service they have received.
4. QPLaBSE: Quantized and Pruned Language-Agnostic BERT Sentence Embedding Model : Production-ready compression for multilingual transformers
Master's thesis, KTH / School of Electrical Engineering and Computer Science (EECS). Abstract: Transformer models perform well on Natural Language Processing and Natural Language Understanding tasks. Training and fine-tuning these models consumes large amounts of data and computing resources, and fast inference likewise requires high-end hardware for user-facing products.
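To illustrate the compression idea behind QPLaBSE: post-training quantization maps float weights onto low-bit integers plus a scale factor, shrinking the model and speeding up inference. A framework-free sketch of symmetric per-tensor int8 quantization (illustrative only; the thesis's actual quantization and pruning pipeline is not reproduced here):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q."""
    scale = np.abs(w).max() / 127.0                    # largest magnitude maps to ±127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(q.dtype, float(np.abs(w - w_hat).max()))  # int8 storage, small reconstruction error
```

Each weight now occupies one byte instead of four, at the cost of a bounded rounding error per element; production frameworks add per-channel scales, calibration, and quantized kernels on top of this basic scheme.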
5. DistillaBSE: Task-agnostic distillation of multilingual sentence embeddings : Exploring deep self-attention distillation with switch transformers
Master's thesis, KTH / School of Electrical Engineering and Computer Science (EECS). Abstract: The recent development of massive multilingual transformer networks has led to drastic improvements in model performance. These models, however, are so large that they suffer from high inference latency and consume vast computing resources, which hinders their widespread adoption in industry and in some academic settings.
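For background on the distillation family this thesis belongs to: a small student model is trained to reproduce a large teacher's outputs. One common ingredient is a KL-divergence loss between temperature-softened output distributions, sketched below in plain NumPy (a generic logit-matching loss, not the deep self-attention distillation objective the thesis explores):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = z / T
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def distillation_kl(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) between softened distributions."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * np.log(p / q)))

t = np.array([3.0, 1.0, 0.2])
print(distillation_kl(t, t))             # → 0.0 (student matches teacher exactly)
print(distillation_kl(t, t[::-1]) > 0)   # → True (mismatch gives positive loss)
```

Self-attention distillation replaces (or supplements) this output-level signal with losses on the teacher's attention distributions, which is what lets the student stay task-agnostic.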