Search: "deep multimodal fusion"

Showing results 1 - 5 of 11 theses containing the words deep multimodal fusion.

  1. Classifying femur fractures using federated learning

    Master's thesis, Linköpings universitet / Statistics and Machine Learning

    Author: Hong Zhang; [2024]
    Keywords: Atypical femur fracture; Federated Learning; Neural Network; Classification;

    Abstract: The rarity and subtle radiographic features of atypical femoral fractures (AFF) make them difficult to distinguish radiologically from normal femoral fractures (NFF). Compared with NFF, AFF shows subtle radiological features and is associated with long-term use of bisphosphonates for the treatment of osteoporosis.
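Federated learning lets hospitals train a shared fracture classifier without pooling sensitive X-ray data. A minimal sketch of the standard FedAvg aggregation step (an illustrative assumption; the thesis's actual training pipeline is not shown here): each site trains locally, and a server averages the resulting parameters weighted by local dataset size.

```python
# FedAvg aggregation sketch (hypothetical; names and shapes are illustrative).
# Each client sends its locally trained parameter vector and dataset size;
# the server returns the size-weighted average, so raw images never leave a site.
def fedavg(client_weights, client_sizes):
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Three simulated sites; the third holds twice as much data, so it
# contributes twice the weight to the global model.
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 10, 20]
global_weights = fedavg(weights, sizes)  # [3.5, 4.5]
```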

  2. Robust Multi-Modal Fusion for 3D Object Detection: Using multiple sensors of different types to robustly detect, classify, and position objects in three dimensions.

    Master's thesis, KTH / School of Electrical Engineering and Computer Science (EECS)

    Author: Viktor Kårefjärd; [2023]
    Keywords: Computer Vision; 3D Object Detection; Multi-Modal Fusion; Deep Learning;

    Abstract: The computer vision task of 3D object detection is fundamental to autonomous driving perception systems. These vehicles typically feature a multitude of sensors, such as cameras, radars, and light detection and ranging (lidar) sensors.
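One way to combine detections from sensors of different types is decision-level (late) fusion: match candidate boxes from two sensors by 3D center distance and merge the matches, so an object missed by one degraded sensor can still be reported by the other. A hypothetical sketch (the matching threshold, dictionary fields, and averaging rule are illustrative assumptions, not the thesis's method):

```python
# Late-fusion sketch for camera and lidar detections (hypothetical).
# Detections are dicts with a 3D "center" tuple and a confidence "score".
def fuse_detections(camera, lidar, max_dist=1.0):
    fused, used = [], set()
    for c in camera:
        match = None
        for j, l in enumerate(lidar):
            if j in used:
                continue
            d = sum((a - b) ** 2 for a, b in zip(c["center"], l["center"])) ** 0.5
            if d <= max_dist:
                match = j
                break
        if match is not None:
            used.add(match)
            l = lidar[match]
            fused.append({
                "center": tuple((a + b) / 2 for a, b in zip(c["center"], l["center"])),
                "score": (c["score"] + l["score"]) / 2,
            })
        else:
            fused.append(c)  # camera-only detection survives
    # lidar-only detections survive too
    fused.extend(l for j, l in enumerate(lidar) if j not in used)
    return fused
```

With one camera detection near one lidar detection plus a distant lidar-only detection, this yields two fused objects: a merged box with averaged center and score, and the unmatched lidar box.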

  3. A real-time multi-modal fusion model for visible and infrared images: A lightweight, real-time CNN-based fusion model for visible and infrared images in surveillance

    Master's thesis, KTH / School of Electrical Engineering and Computer Science (EECS)

    Author: Jin Wanqi; [2023]
    Keywords: Image fusion; deep learning; surveillance; CNN; real time;

    Abstract: Infrared images highlight semantic areas such as pedestrians and are robust to luminance changes, while visible images provide abundant background detail and good visual quality. Multi-modal image fusion for surveillance aims to generate an informative fused image from the two source images in real time, facilitating surveillance observation and object detection tasks.
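At its core, visible/infrared fusion produces one output pixel from two aligned input pixels. In a CNN-based model the blending weights are learned per pixel; the sketch below substitutes a fixed weighted average (an illustrative assumption, not the thesis's network) to show the fusion step on grayscale frames represented as nested lists in [0, 1]:

```python
# Pixel-level fusion sketch (hypothetical): a trained CNN would predict
# spatially varying weights; here a single fixed infrared weight stands in.
def fuse_images(visible, infrared, w_ir=0.6):
    return [
        [w_ir * ir + (1 - w_ir) * vis for vis, ir in zip(vis_row, ir_row)]
        for vis_row, ir_row in zip(visible, infrared)
    ]

# A dark visible pixel with a hot infrared pixel yields a bright fused pixel,
# preserving the pedestrian-like infrared signal.
fused = fuse_images([[0.0, 0.5]], [[1.0, 0.5]])
```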

  4. Land Use/Land Cover Classification From Satellite Remote Sensing Images Over Urban Areas in Sweden: An Investigative Multiclass, Multimodal and Spectral Transformation, Deep Learning Semantic Image Segmentation Study

    Master's thesis, Linköpings universitet / Department of Computer Science

    Authors: Oskar Aidantausta; Patrick Asman; [2023]
    Keywords: data fusion; deep learning; land use land cover classification; multiclass; multimodal; remote sensing; semantic segmentation; Sentinel satellite; spectral index; U-Net; Urban Atlas;

    Abstract: Remote Sensing (RS) technology provides valuable information about Earth by enabling an overview of the planet from above, making it a much-needed resource for many applications. Given the abundance of RS data and continued urbanisation, there is a need for efficient approaches that leverage RS data and its unique characteristics for the assessment and management of urban areas.
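The keywords mention spectral indices as one of the transformations fed into the segmentation model. A common example is NDVI, (NIR - Red) / (NIR + Red), which separates vegetation from built-up urban surfaces; a minimal sketch, assuming per-pixel band reflectances in [0, 1] (the band values below are illustrative, not from the thesis):

```python
# NDVI sketch (hypothetical inputs): computed per pixel from the
# near-infrared (NIR) and red bands; eps guards against division by zero.
def ndvi(nir, red, eps=1e-9):
    return [(n - r) / (n + r + eps) for n, r in zip(nir, red)]

# Healthy vegetation reflects strongly in NIR, so its NDVI is high;
# concrete reflects both bands similarly, so its NDVI is near zero.
values = ndvi([0.5, 0.3], [0.1, 0.3])
```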

  5. Hierarchical Fusion Approaches for Enhancing Multimodal Emotion Recognition in Dialogue-Based Systems: A Systematic Study of Multimodal Emotion Recognition Fusion Strategies

    Master's thesis, KTH / School of Electrical Engineering and Computer Science (EECS)

    Author: Yuqi Liu; [2023]
    Keywords: Multimodal; emotion recognition; hierarchical fusion; context modeling; attention mechanism; feature-level fusion; decision-level fusion; machine learning; deep learning;

    Abstract: Multimodal Emotion Recognition (MER) has gained increasing attention due to its strong performance. In this thesis, we evaluate feature-level fusion, decision-level fusion, and two proposed hierarchical fusion methods for MER systems on a dialogue-based dataset.
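The two baseline strategies the abstract names differ in where modalities meet: feature-level fusion concatenates per-modality feature vectors before a single classifier, while decision-level fusion averages each modality's class probabilities after separate classifiers. A hypothetical sketch of both (the feature vectors and probabilities are illustrative; the thesis's hierarchical variants combine elements of both and are not shown):

```python
# Feature-level fusion (hypothetical): join audio and text features into one
# vector, which a single downstream classifier would consume.
def feature_level_fusion(audio_feats, text_feats):
    return audio_feats + text_feats

# Decision-level fusion (hypothetical): average per-class probabilities
# produced by independent audio and text classifiers.
def decision_level_fusion(audio_probs, text_probs):
    return [(a + t) / 2 for a, t in zip(audio_probs, text_probs)]

combined_feats = feature_level_fusion([0.1, 0.2], [0.7])        # one 3-d vector
combined_probs = decision_level_fusion([0.2, 0.8], [0.4, 0.6])  # ~[0.3, 0.7]
```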