Search: "Transformer-XL"

Found 2 theses containing the word Transformer-XL.

  1. Large-Context Question Answering with Cross-Lingual Transfer

    Master's thesis, Uppsala universitet/Institutionen för informationsteknologi

    Author: Markus Sagen; [2021]
    Keywords: Long-Context Multilingual Model; Longformer XLM-R; Longformer; Long-term Context; Extending Context; Extend Context; Large-Context; Long-Context; Large Context; Long Context; Cross-Lingual; Multi-Lingual; Cross Lingual; Multi Lingual; QA; Question-Answering; Question Answering; Transformer model; Machine Learning; Transfer Learning; SQuAD; Memory; Efficient; Monolingual; Multilingual; QA model; Language Model; Huggingface; BERT; RoBERTa; XLM-R; mBERT; Multilingual BERT; Efficient Transformers; Reformer; Linformer; Performer; Transformer-XL; Wikitext-103; TriviaQA; HotpotQA; WikiHopQA; VINNOVA; Peltarion; AI; LM; MLM; Deep Learning; Natural Language Processing; NLP; Attention; Transformers; Datasets;

    Abstract: Models based on the transformer architecture have become among the most prominent for solving a multitude of natural language processing (NLP) tasks since the architecture's introduction in 2017. However, much research on the transformer model has focused primarily on achieving high performance, and many problems remain unsolved.

  2. Using Bidirectional Encoder Representations from Transformers for Conversational Machine Comprehension

    Master's thesis, KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author: Evangelina Gogoulou; [2019]
    Keywords: conversational machine comprehension; question answering; transformers; self-attention; language modelling;

    Abstract: Bidirectional Encoder Representations from Transformers (BERT) is a recently proposed language representation model, designed to pre-train deep bidirectional representations with the goal of extracting context-sensitive features from an input text [1]. One of the challenging problems in the field of Natural Language Processing is Conversational Machine Comprehension (CMC).