Search: "sentence embeddings"
Showing results 21 - 25 of 37 theses containing the words sentence embeddings.
21. Analyzing the Anisotropy Phenomenon in Transformer-based Masked Language Models
Master's thesis, Uppsala universitet/Institutionen för lingvistik och filologi. Abstract: In this thesis, we examine in detail the anisotropy phenomenon in the popular masked language models BERT and RoBERTa. We propose a possible explanation for this unreasonable phenomenon. READ MORE
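The anisotropy that entry 21 studies is commonly quantified as the average pairwise cosine similarity between embedding vectors: isotropic embeddings average near zero, while anisotropic ones cluster in a narrow cone and score close to one. The sketch below illustrates that diagnostic on synthetic data (the function name and the synthetic vectors are illustrative assumptions, not taken from the thesis); it assumes only NumPy.

```python
import numpy as np

def avg_pairwise_cosine(embeddings: np.ndarray) -> float:
    """Average cosine similarity over all distinct pairs of row vectors."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    n = len(embeddings)
    # Exclude the n self-similarities on the diagonal (each equal to 1).
    return (sims.sum() - n) / (n * (n - 1))

rng = np.random.default_rng(0)
# Isotropic baseline: directions spread uniformly, so similarities cancel out.
isotropic = rng.standard_normal((500, 64))
# Anisotropic case: a large shared offset pushes every vector into one cone,
# mimicking the common-direction effect reported for contextual embeddings.
anisotropic = isotropic + 5.0
print(f"isotropic:   {avg_pairwise_cosine(isotropic):.3f}")
print(f"anisotropic: {avg_pairwise_cosine(anisotropic):.3f}")
```

The isotropic set scores near 0 while the shifted set scores near 1, which is the gap such theses measure for real model embeddings.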
22. Exploring Transformer-Based Contextual Knowledge Graph Embeddings : How the Design of the Attention Mask and the Input Structure Affect Learning in Transformer Models
Master's thesis, Linköpings universitet/Artificiell intelligens och integrerade datorsystem. Abstract: Knowledge graphs have become a commonplace, compact way to store information and look up facts. However, their discrete representation makes them unavailable for tasks that need a continuous representation, such as predicting relationships between entities, where the most probable relationship needs to be found. READ MORE
23. Multilingual Zero-Shot and Few-Shot Causality Detection
Master's thesis, Uppsala universitet/Institutionen för lingvistik och filologi. Abstract: Relations that hold between causes and their effects are fundamental for a wide range of sectors. Automatically finding sentences that express such relations may, for example, be of great interest to economic or political institutions. READ MORE
24. Automatic Question Paraphrasing in Swedish with Deep Generative Models
Master's thesis, KTH/Skolan för elektroteknik och datavetenskap (EECS). Abstract: Paraphrase generation refers to the task of automatically generating a paraphrase of a given input sentence or text. It is a fundamental yet challenging natural language processing (NLP) task, used in a variety of applications such as question answering, information retrieval, and conversational systems. READ MORE
25. DistillaBSE: Task-agnostic distillation of multilingual sentence embeddings : Exploring deep self-attention distillation with switch transformers
Master's thesis, KTH/Skolan för elektroteknik och datavetenskap (EECS). Abstract: The recent development of massive multilingual transformer networks has resulted in drastic improvements in model performance. These models, however, are so large that they suffer from high inference latency and consume vast computing resources. Such costs hinder widespread adoption of the models in industry and in some academic settings. READ MORE