Sökning: "Pre-trained model"

Showing results 1 - 5 of 221 theses containing the words Pre-trained model.

  1. Using Synthetic Data For Object Detection on the edge in Hazardous Environments

    Professional degree thesis, advanced level, Lunds universitet/Institutionen för reglerteknik

    Authors: Faraz Azarnoush; Damil Sabotic; [2024]
    Keywords: Technology and Engineering;

    Abstract: This thesis aims to evaluate which aspects are important when generating synthetic data for training a lightweight object detection model that runs on an edge device. The task we constructed was to detect Canisters and whether or not they feature a protective valve called a Cap (called a No-Cap when absent).

  2. Comparison of VADER and Pre-Trained RoBERTa: A Sentiment Analysis Application

    Bachelor's thesis, Lunds universitet/Statistiska institutionen

    Authors: Linda Erwe; Xin Wang; [2024]
    Keywords: sentiment analysis; natural language processing; BERT; VADER; sustainability report; Mathematics and Statistics;

    Abstract: Purpose: The purpose of this study is to examine how the overall sentiment results from VADER and a pre-trained RoBERTa model differ. The study investigates potential differences in the median and shape of the two score distributions. Data: The sustainability reports of 50 independently and randomly selected companies form the sample.
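
A minimal sketch of how such a comparison could be set up: score the same sentences with rule-based VADER and a pre-trained RoBERTa sentiment classifier, then compare the outputs. The specific RoBERTa checkpoint and the example sentences are illustrative assumptions, not the thesis setup.

```python
# Compare VADER compound scores with a pre-trained RoBERTa sentiment classifier.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from transformers import pipeline

sentences = [
    "We reduced our carbon emissions by 20% this year.",
    "The company failed to meet its sustainability targets.",
]

# VADER: rule-based, returns a compound score in [-1, 1]
vader = SentimentIntensityAnalyzer()
vader_scores = [vader.polarity_scores(s)["compound"] for s in sentences]

# RoBERTa: a pre-trained transformer fine-tuned for sentiment (assumed checkpoint)
roberta = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-roberta-base-sentiment-latest",
)
roberta_scores = roberta(sentences)

for s, v, r in zip(sentences, vader_scores, roberta_scores):
    print(f"{s!r}\n  VADER compound: {v:+.3f}  RoBERTa: {r['label']} ({r['score']:.3f})")
```

Collecting such per-sentence scores over many reports would yield the two distributions whose median and shape the study compares.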

  3. An In-Depth Study on the Utilization of Large Language Models for Test Case Generation

    Master's thesis, Umeå universitet/Institutionen för datavetenskap

    Author: Nicole Johnsson; [2024]
    Keywords: Large Language Models; Test Case Generation; Retrieval Augmented Generation; Machine Learning; Generative AI;

    Abstract: This study investigates the use of Large Language Models for test case generation. It uses the large language model and embedding model provided by Llama, specifically Llama2 at the 7B size, to generate test cases from a defined input.
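
A hypothetical sketch of the retrieval-augmented part of such a pipeline: embed candidate code snippets, retrieve the one most relevant to a requirement, and assemble a prompt that a model such as Llama2 7B would then complete. The embedding model name, corpus, and prompt wording are assumptions for illustration only.

```python
# Retrieval-augmented prompt construction for test case generation (sketch).
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

code_corpus = [
    "def add(a, b): return a + b",
    "def divide(a, b): return a / b",
    "def parse_date(text): ...",
]
requirement = "Division should raise an error when the divisor is zero."

# Retrieve the snippet most similar to the requirement
corpus_emb = embedder.encode(code_corpus, convert_to_tensor=True)
query_emb = embedder.encode(requirement, convert_to_tensor=True)
best = int(util.cos_sim(query_emb, corpus_emb).argmax())

prompt = (
    "Write a pytest test case for the following function.\n"
    f"Function:\n{code_corpus[best]}\n"
    f"Requirement: {requirement}\n"
)
print(prompt)  # this prompt would then be sent to the generation model, e.g. Llama2 7B
```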

  4. Self-Supervised Learning for Tabular Data: Analysing VIME and introducing Mix Encoder

    Bachelor's thesis, Lunds universitet/Fysiska institutionen

    Author: Max Svensson; [2024]
    Keywords: Machine Learning; Self-supervised learning; AI; Physics; Medicine; Physics and Astronomy;

    Abstract: We introduce Mix Encoder, a novel self-supervised learning framework for deep tabular data models based on Mixup [1]. Mix Encoder uses linear interpolations of samples, together with associated pretext tasks, to form useful pre-trained representations.
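
A minimal sketch of the Mixup-style interpolation that such pretext tasks can be built on: mix pairs of tabular samples with a Beta-distributed coefficient. The suggested pretext target (recovering the mixing coefficient) is an assumption about how such a framework might be arranged, not the thesis implementation.

```python
import torch

def mixup(x: torch.Tensor, alpha: float = 0.4):
    """Return convex combinations of samples and the mixing coefficients."""
    lam = torch.distributions.Beta(alpha, alpha).sample((x.size(0), 1))
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    return x_mixed, lam.squeeze(1), perm

# Example: a batch of 8 tabular samples with 5 numeric features
batch = torch.randn(8, 5)
mixed, lam, perm = mixup(batch)
# A pretext task could ask an encoder to recover `lam`, or the identity of the
# partner sample given by `perm`, from `mixed`, yielding pre-trained representations.
```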

  5. Exploring the Depth-Performance Trade-Off: Applying Torch Pruning to YOLOv8 Models for Semantic Segmentation Tasks

    Master's thesis, KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author: Xinchen Wang; [2024]
    Keywords: Deep Learning; Semantic segmentation; Network optimization; Network pruning; Torch Pruning; YOLOv8; Network Depth;

    Abstract: To comprehend environments from different aspects, a wide variety of computer vision methods have been developed to detect objects, classify them, or even segment them semantically. Semantic segmentation is growing in significance due to its broad applications in fields such as robotics, environmental understanding for virtual or augmented reality, and autonomous driving.
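
An illustrative sketch of structured channel pruning. The thesis applies the Torch-Pruning library to YOLOv8; this example instead uses PyTorch's built-in torch.nn.utils.prune on a toy convolutional layer, purely to show the idea of removing whole filters to trade capacity for speed.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, padding=1)

# Zero out 25% of the output filters with the smallest L2 norm
# (structured pruning along dim=0, i.e. whole output channels).
prune.ln_structured(conv, name="weight", amount=0.25, n=2, dim=0)
prune.remove(conv, "weight")  # make the pruning permanent

# Count how many filters were zeroed out
zeroed = (conv.weight.abs().sum(dim=(1, 2, 3)) == 0).sum().item()
print(f"{zeroed} of {conv.out_channels} filters pruned")
```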