Search: "Pre-trained Language Model"

Showing results 1 - 5 of 105 theses containing the words Pre-trained Language Model.

  1. Comparison of VADER and Pre-Trained RoBERTa: A Sentiment Analysis Application

    Bachelor's thesis, Lunds universitet/Statistiska institutionen

    Authors: Linda Erwe; Xin Wang; [2024]
    Keywords: sentiment analysis; natural language processing; BERT; VADER; sustainability report; Mathematics and Statistics;

    Abstract: Purpose: The purpose of this study is to examine how the overall sentiment results from VADER and a pre-trained RoBERTa model differ. The study investigates potential differences in the median and shape of the two score distributions. Data: The sustainability reports of 50 randomly selected, independent companies serve as the sample.
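    A minimal sketch of this kind of comparison is given below. The specific RoBERTa checkpoint and the example sentences are illustrative assumptions; the abstract does not name them.

    ```python
    # Hedged sketch: score the same sentences with VADER (lexicon-based) and a
    # pre-trained RoBERTa sentiment classifier. The checkpoint below is an
    # assumption for illustration; the thesis abstract does not name one.
    from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
    from transformers import pipeline

    sentences = [
        "We reduced emissions by 30 percent this year.",
        "The company failed to meet its sustainability targets.",
    ]

    vader = SentimentIntensityAnalyzer()
    vader_scores = [vader.polarity_scores(s)["compound"] for s in sentences]

    roberta = pipeline(
        "sentiment-analysis",
        model="cardiffnlp/twitter-roberta-base-sentiment-latest",
    )
    roberta_preds = roberta(sentences)

    # Compare the two outputs sentence by sentence; over a whole corpus, the
    # distributions of such scores are what the study compares (median, shape).
    for s, v, r in zip(sentences, vader_scores, roberta_preds):
        print(f"VADER {v:+.3f} | RoBERTa {r['label']:>8} ({r['score']:.2f}) | {s}")
    ```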

  2. An In-Depth Study on the Utilization of Large Language Models for Test Case Generation

    Master's thesis, Umeå universitet/Institutionen för datavetenskap

    Author: Nicole Johnsson; [2024]
    Keywords: Large Language Models; Test Case Generation; Retrieval Augmented Generation; Machine Learning; Generative AI;

    Abstract: This study investigates the use of Large Language Models for test case generation. The study uses the large language model and embedding model provided by Llama, specifically Llama 2 of size 7B, to generate test cases given a defined input.
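    Since the keywords mention Retrieval Augmented Generation, a minimal sketch of the retrieve-then-prompt pattern follows. The embedding model, documents, and prompt template are illustrative assumptions, not taken from the thesis.

    ```python
    # Hedged sketch of retrieval-augmented prompting for test case generation:
    # embed the code under test, retrieve the most relevant snippet for the
    # request, and build a prompt for the language model (e.g. Llama 2 7B).
    # All concrete names below are illustrative assumptions.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    documents = [
        "def add(a, b):\n    return a + b",
        "def divide(a, b):\n    return a / b  # may raise ZeroDivisionError",
    ]
    query = "Write unit tests for the divide function."

    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = embedder.encode(documents, normalize_embeddings=True)
    query_vec = embedder.encode([query], normalize_embeddings=True)[0]

    # Vectors are L2-normalized, so a dot product equals cosine similarity.
    best_doc = documents[int(np.argmax(doc_vecs @ query_vec))]

    prompt = (
        "You are a test engineer.\n"
        f"Context:\n{best_doc}\n\n"
        f"Task: {query}\nReturn pytest test cases."
    )
    # The prompt would then be passed to a Llama 2 7B model (e.g. via Hugging
    # Face transformers or llama.cpp) to produce the test cases.
    print(prompt)
    ```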

  3. Self-Supervised Learning for Tabular Data: Analysing VIME and Introducing Mix Encoder

    Bachelor's thesis, Lunds universitet/Fysiska institutionen

    Author: Max Svensson; [2024]
    Keywords: Machine Learning; Self-supervised learning; AI; Physics; Medicine; Physics and Astronomy;

    Abstract: We introduce Mix Encoder, a novel self-supervised learning framework for deep tabular data models based on Mixup [1]. Mix Encoder uses linear interpolations of samples with associated pretext tasks to form useful pre-trained representations.
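    A minimal sketch of the Mixup-style interpolation behind such pretext tasks is shown below, assuming the mixing coefficient is drawn from a Beta distribution and that recovering it serves as the pretext task; the thesis's exact formulation may differ.

    ```python
    # Hedged sketch of Mixup-style interpolation for tabular pre-training.
    # Predicting the mixing coefficient lambda is shown as one possible
    # pretext task; it is an assumption, not necessarily the thesis's choice.
    import numpy as np

    def mixup_batch(x, alpha=0.4, rng=None):
        """Linearly interpolate each row with a randomly chosen partner row."""
        if rng is None:
            rng = np.random.default_rng(0)
        lam = rng.beta(alpha, alpha, size=(x.shape[0], 1))  # lambda ~ Beta(alpha, alpha)
        partners = rng.permutation(x.shape[0])
        x_mixed = lam * x + (1.0 - lam) * x[partners]
        return x_mixed, lam

    # Toy tabular batch: 8 samples, 10 numeric features.
    x = np.random.default_rng(1).normal(size=(8, 10))
    x_mixed, lam = mixup_batch(x)

    # During pre-training, an encoder would embed x_mixed and a small head
    # would be trained to recover lam, providing the self-supervised signal.
    print(x_mixed.shape, lam.ravel().round(2))
    ```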

  4. Nested Noun Phrase Detection in English Text with BERT

    Master's thesis, KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author: Shweta Misra; [2023]
    Keywords: Phrase detection; nested noun phrase identification; phrase structure identification; sentence parsing; transformer models; machine learning; natural language processing;

    Abstract: In this project, we address the task of nested noun phrase identification in English sentences, where a phrase is defined as a group of words functioning as one unit in a sentence. Prior research has extensively explored the identification of various phrases for language understanding and text generation tasks.

  5. Information Extraction for Test Identification in Repair Reports in the Automotive Domain

    Master's thesis, Uppsala universitet/Institutionen för lingvistik och filologi

    Author: Huang Jie; [2023]
    Keywords: text classification; information retrieval; contrastive learning; prompt-based fine-tuning; large language models;

    Abstract: Knowledge of the tests conducted on a problematic vehicle is essential for enhancing the efficiency of mechanics. Therefore, identifying the tests performed in each repair case is of utmost importance. This thesis explores techniques for extracting data from unstructured repair reports to identify component tests.