Search: "Svenska språkmodeller"

Showing results 1 - 5 of 19 theses containing the words Svenska språkmodeller.

  1. Speech Classification using Acoustic Embedding and Large Language Models Applied on Alzheimer’s Disease Prediction Task

    Master’s thesis, KTH School of Electrical Engineering and Computer Science (EECS)

    Author: Maryam Kheirkhahzadeh; [2023]
    Keywords: Speech classification; Alzheimer’s disease detection; GPT-3; BERT; Text embedding; Dementia; wav2vec2.0

    Abstract: Alzheimer’s disease is a neurodegenerative disease that leads to dementia. It can begin silently in the early stages and progress over the years to a severe and incurable phase. Language impairments often appear as one of the early symptoms and can eventually lead to complete mutism in the advanced stages of the disease.

  2. Context-aware Swedish Lexical Simplification: Using pre-trained language models to propose contextually fitting synonyms

    Bachelor’s thesis, Linköping University, Department of Computer Science

    Author: Emil Graichen; [2023]
    Keywords: automatic text simplification; lexical simplification; Swedish; BERT; GPT-3; evaluation dataset; synonymy

    Abstract: This thesis presents the development and evaluation of context-aware Lexical Simplification (LS) systems for the Swedish language. In total, three versions of LS models, LäsBERT, LäsBERT-baseline, and LäsGPT, were created and evaluated on a newly constructed Swedish LS evaluation dataset.

  3. Document Expansion for Swedish Information Retrieval Systems

    Master’s thesis, KTH School of Electrical Engineering and Computer Science (EECS)

    Author: Tobias Hagström; [2023]
    Keywords: Information retrieval; Natural language processing; Deep learning

    Abstract: Information retrieval systems have come to change how users interact with computerized systems and locate information. A major challenge when designing these systems is how to handle the vocabulary mismatch problem, i.e. that users, when formulating queries, pick different words than those present in the relevant documents that should be retrieved.

  4. A Prompting Framework for Natural Language Processing in the Medical Field: Assessing the Potential of Large Language Models for Swedish Healthcare

    Master’s thesis, KTH, Biomedical Engineering and Health Systems

    Author: Anim Mondal; [2023]
    Keywords: Healthcare; NLP; GPT; framework; medical; AI

    Abstract: The increasing digitisation of healthcare through the use of technology and artificial intelligence has affected the medical field in a multitude of ways. Generative Pre-trained Transformers (GPTs) are a family of language models that have been trained on extensive data sets to generate human-like text and have been shown to achieve a strong understanding of natural language.

  5. Exploring Cross-Lingual Transfer Learning for Swedish Named Entity Recognition: Fine-tuning of English and Multilingual Pre-trained Models

    Bachelor’s thesis, KTH School of Electrical Engineering and Computer Science (EECS)

    Authors: Daniel Lai Wikström; Axel Sparr; [2023]
    Keywords: NER; Cross-lingual transfer; Transformer; BERT; Deep learning

    Abstract: Named Entity Recognition (NER) is a critical task in Natural Language Processing (NLP), and recent advancements in language model pre-training have significantly improved its performance. However, this improvement is not universally applicable, as smaller languages often lack large pre-training datasets or the computational budget required.