Search: "Coreference"
Showing results 1–5 of 10 theses containing the word Coreference.
1. Domain-specific knowledge graph construction from Swedish and English news articles
Master's thesis, Uppsala universitet / Institutionen för lingvistik och filologi. Abstract: In the current age, where new textual information emerges constantly, processing and structuring it poses a challenge. Moreover, this information is often expressed in many different languages, yet the discourse tends to be dominated by English, which may lead to overlooking important, specific knowledge in less well-resourced languages.
2. Coreference Resolution for Swedish
Master's thesis, KTH / Skolan för elektroteknik och datavetenskap (EECS). Abstract: This report explores possible avenues for developing coreference resolution methods for Swedish. Coreference resolution is an important topic within natural language processing, as it is used as a preprocessing step in various information extraction tasks.
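As an illustration of the preprocessing step described above (not taken from the thesis itself), the sketch below shows what a coreference resolver produces: links from pronoun mentions back to their antecedents. The toy rule "link each third-person pronoun to the most recent capitalized mention" is a deliberately naive assumption; real resolvers are far more sophisticated.

```python
# Toy coreference sketch: link pronouns to the nearest preceding
# capitalized mention. Purely illustrative of the task's input/output.

PRONOUNS = {"he", "she", "it", "they"}

def resolve(tokens):
    """Return {pronoun index: antecedent index} for a token list."""
    links = {}
    last_mention = None
    for i, tok in enumerate(tokens):
        if tok.lower() in PRONOUNS and last_mention is not None:
            links[i] = last_mention  # pronoun points back to the mention
        elif tok[:1].isupper() and tok.lower() not in PRONOUNS:
            last_mention = i  # treat capitalized tokens as mentions
    return links

tokens = "Anna finished early because she was prepared".split()
print(resolve(tokens))  # {4: 0} — "she" links back to "Anna"
```

A downstream information extraction system would then substitute "Anna" for "she" before extracting relations, which is why coreference resolution is valuable as a preprocessing step.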
3. Icke-absolut ablativus absolutus: Om en avvikande användning av ablativus absolutus hos Caesar (Non-absolute ablativus absolutus: on a deviant use of the ablative absolute in Caesar)
Bachelor's thesis, Göteborgs universitet / Institutionen för språk och litteraturer. Abstract: Ablative absolutes are not always as absolute as their name suggests. When a coreference exists between the ablative absolute and another noun phrase in the clause, it is non-absolute. This student thesis analyzes the non-absolute ablative absolutes in Caesar's Bellum Gallicum and Bellum civile.
4. Prerequisites for Extracting Entity Relations from Swedish Texts
Bachelor's thesis, KTH / Skolan för elektroteknik och datavetenskap (EECS). Abstract: Natural language processing (NLP) is a vibrant area of research with many practical applications today, such as sentiment analysis, text labeling, question answering, machine translation and automatic text summarization. At the moment, research is mainly focused on the English language, although many other languages are trying to catch up.
5. Using Bidirectional Encoder Representations from Transformers for Conversational Machine Comprehension
Master's thesis, KTH / Skolan för elektroteknik och datavetenskap (EECS). Abstract: Bidirectional Encoder Representations from Transformers (BERT) is a recently proposed language representation model, designed to pre-train deep bidirectional representations with the goal of extracting context-sensitive features from an input text [1]. One of the challenging problems in the field of Natural Language Processing is Conversational Machine Comprehension (CMC).