Search: "Modellkomprimering"

Found 5 theses containing the word "Modellkomprimering" (model compression).

  1. Using Quantization and Serialization to Improve AI Super-Resolution Inference Time on Cloud Platform

    Master's thesis, KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author: Wai-Hong Anton Fu [2023]

    Abstract: AI Super-Resolution is a branch of Artificial Intelligence whose goal is to upscale a low-resolution image into a high-resolution one. These models are usually deep learning models based on Convolutional Neural Networks (CNNs) and/or Transformers.
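The quantization mentioned in this title can be illustrated with a minimal sketch of symmetric int8 post-training quantization. This is a generic example under assumed conventions (per-tensor scale, range [-127, 127]), not the thesis's actual pipeline:

```python
def quantize_int8(weights):
    # Symmetric per-tensor quantization: real value ~ q * scale, q in [-127, 127].
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the int8 codes.
    return [qi * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.25, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Rounding error per weight is bounded by half a quantization step (scale / 2).
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Storing 8-bit codes plus one scale instead of 32-bit floats is what shrinks the model and speeds up inference on hardware with int8 support.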

  2. Representation and Efficient Computation of Sparse Matrix for Neural Networks in Customized Hardware

    Master's thesis, KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author: Lihao Yan [2022]
    Keywords: Convolutional neural networks; Sparse matrix representation; Model compression; Algorithm-hardware co-design; AlexNet

    Abstract: Deep neural networks are now widely applied across many fields. However, hundreds of thousands of neurons in each layer lead to intensive memory requirements and a massive number of operations, making it difficult to deploy deep neural networks on mobile devices, where hardware resources are limited.
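A standard way to exploit sparsity like the kind this thesis targets is the Compressed Sparse Row (CSR) format, which stores only the non-zero entries. A minimal sketch (a generic CSR example, not the thesis's hardware design):

```python
def to_csr(dense):
    # Convert a dense 2-D list into CSR arrays: non-zero values, their column
    # indices, and row pointers delimiting each row's slice of the value array.
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    # y = A @ x, touching only the stored non-zeros.
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

A = [[0, 2, 0],
     [1, 0, 3]]
vals, cols, ptrs = to_csr(A)
y = csr_matvec(vals, cols, ptrs, [1, 1, 1])
```

For pruned network layers with high sparsity, this representation saves both memory and multiply-accumulate operations, which is what makes hardware co-design attractive.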

  3. Exploration of Knowledge Distillation Methods on Transformer Language Models for Sentiment Analysis

    Master's thesis, KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author: Haonan Liu [2022]
    Keywords: Natural Language Processing; Sentiment Analysis; Language Model; Transformers; Knowledge Distillation

    Abstract: Despite the outstanding performance of large Transformer-based language models, compressing them for deployment in industrial environments remains a challenge. This degree project explores knowledge distillation, a family of model compression methods, for the sentiment classification task on Transformer models.
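The core of knowledge distillation is training a small student to match a large teacher's softened output distribution. A minimal sketch of the soft-label loss term in Hinton-style distillation (a generic illustration; the thesis's exact loss and temperature are not given here):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T yields a softer distribution.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between the temperature-softened teacher and student
    # distributions, scaled by T^2 to keep gradient magnitudes comparable.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student)) * T * T

teacher = [2.0, 0.5, -1.0]
loss_match = distillation_loss(teacher, teacher)            # student mimics teacher
loss_diverge = distillation_loss([-1.0, 0.5, 2.0], teacher) # student disagrees
```

The loss is minimized when the student reproduces the teacher's distribution; in practice it is combined with the ordinary cross-entropy on the hard labels.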

  4. Distributed Intelligence for Multi-Robot Environment: Model Compression for Mobile Devices with Constrained Computing Resources

    Master's thesis, KTH/Skolan för elektroteknik och datavetenskap (EECS)

    Author: Timotheos Souroulla [2021]
    Keywords: Human-Robot Collaboration (HRC); Model Compression; Pruning; Knowledge Distillation; Object Detection

    Abstract: Human-Robot Collaboration (HRC), where humans and robots work simultaneously in the same environment, is an emerging field that has grown massively over the past decade. For this collaboration to be feasible and safe, robots need to perform a proper safety analysis to avoid hazardous situations.
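Of the techniques in this entry's keywords, magnitude pruning is the simplest to sketch: weights with the smallest absolute values are zeroed out so the remaining network fits constrained devices. A minimal unstructured-pruning example (illustrative only; not the thesis's actual method or sparsity level):

```python
def magnitude_prune(weights, sparsity):
    # Zero out roughly the smallest-magnitude fraction `sparsity` of weights.
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    # Ties at the threshold may prune slightly more than the requested fraction.
    return [0.0 if abs(w) <= threshold else w for w in weights]

weights = [0.1, -0.9, 0.05, 0.4, -0.3]
pruned = magnitude_prune(weights, 0.4)  # drop the 2 smallest of 5 weights
```

After pruning, the model is usually fine-tuned for a few epochs so the surviving weights can compensate for the removed ones.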

  5. Classifying hand-drawn documents in mobile settings, using transfer learning and model compression

    Master's thesis, KTH/Skolan för datavetenskap och kommunikation (CSC)

    Author: Axel Riese [2017]
    Keywords: computer vision; deep learning; model compression; transfer learning

    Abstract: In recent years, the state of the art in computer vision has improved immensely due to the increased use of convolutional neural networks (CNNs). However, the best-performing models are typically complex and too slow or too large for mobile use.
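Transfer learning, as named in this title, typically means freezing a pretrained feature extractor and training only a small task-specific head. A toy sketch with a fixed linear "backbone" and a freshly trained linear head (every name and number here is illustrative, not taken from the thesis):

```python
def extract_features(x, frozen_w):
    # "Backbone": a fixed linear map standing in for pretrained layers that
    # stay frozen during fine-tuning.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in frozen_w]

def train_head(data, frozen_w, lr=0.1, epochs=100):
    # Fit only a new linear head on top of the frozen features with SGD;
    # the backbone weights are never updated.
    w = [0.0] * len(frozen_w)
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            f = extract_features(x, frozen_w)
            err = sum(wi * fi for wi, fi in zip(w, f)) + b - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

frozen_w = [[1.0, 0.0], [0.0, 1.0]]            # toy pretrained extractor
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]  # tiny labeled target task
w, b = train_head(data, frozen_w)
pred_pos = sum(wi * fi for wi, fi in zip(w, extract_features([1.0, 0.0], frozen_w))) + b
pred_neg = sum(wi * fi for wi, fi in zip(w, extract_features([0.0, 1.0], frozen_w))) + b
```

Because only the head is trained, far less labeled data and compute are needed, which is why the approach suits the small hand-drawn-document datasets and mobile constraints described here.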