A semi-supervised approach to dialogue act classification using K-Means+HMM

This is a Master's thesis from KTH/School of Computer Science and Communication (CSC)

Abstract: Dialogue act (DA) classification is an important step in the process of developing dialogue systems. DA classification is usually solved by supervised machine learning (ML) approaches, all of which require hand-labeled data. Since hand-labeling data is a resource-intensive task, many have proposed focusing on unsupervised or semi-supervised ML approaches to DA classification. This master's thesis explores a novel semi-supervised approach to DA classification: K-Means+HMM. The method combines K-Means clustering and Hidden Markov Model (HMM) modeling, abstracting the words in the utterances to their part-of-speech (POS) tags and the utterances themselves to the cluster labels produced by K-Means prior to HMM training. The focus is on the following hypotheses: H1) incorporating the context of the utterances leads to better results (HMM is a method designed for sequential data and thus incorporates context, while K-Means does not); H2) increasing the number of clusters in K-Means+HMM leads to better results; H3) increasing the number of example pairs of cluster labels and hand-labeled DAs in K-Means+HMM leads to better results (the example pairs are used to create the emission probabilities that define the HMM). One conclusion is that K-Means performs better than K-Means+HMM given 14 clusters and one example pair: the one-to-one accuracy for K-Means is 35.0%, while the result for K-Means+HMM is 31.6%. However, when the number of examples is increased to 15, the result for K-Means+HMM rises to 40.5%; the biggest improvement comes when the number of examples is increased to 20, resulting in 44% one-to-one accuracy. That is, K-Means+HMM outperforms K-Means provided that a certain number of examples is given. Another conclusion is that the number of examples has a much larger impact on the results than the number of clusters, perhaps supporting the statement that "there is no data like labeled data".
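
To make the pipeline concrete, the following is a minimal sketch of the K-Means+HMM idea in Python. It is not the thesis implementation: the random feature vectors standing in for POS-tag representations of utterances, the dialogue act set, the example pairs, the smoothing, and the uniform start/transition probabilities are all illustrative assumptions. Only the overall structure follows the abstract: cluster utterances with K-Means, estimate HMM emission probabilities from a few hand-labeled (cluster label, DA) pairs, and decode the DA sequence with the HMM.

```python
# Sketch of K-Means+HMM: hidden states are dialogue acts (DAs),
# observations are the K-Means cluster labels of the utterances.
import numpy as np
from sklearn.cluster import KMeans

# --- Step 1: cluster utterance representations with K-Means -----------------
# The thesis abstracts utterances to POS tags before clustering; random
# features are used here purely as a placeholder.
rng = np.random.default_rng(0)
utterance_features = rng.normal(size=(200, 10))   # 200 utterances, 10-dim features
n_clusters = 14                                   # cluster count mentioned in the abstract
cluster_labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(utterance_features)

# --- Step 2: emission probabilities from a few labeled example pairs --------
da_tags = ["statement", "question", "backchannel", "other"]   # hypothetical DA set
n_states = len(da_tags)
example_pairs = [(0, 0), (3, 1), (7, 2), (11, 3), (5, 0)]      # toy (cluster, DA) pairs

emission = np.full((n_states, n_clusters), 1.0)                # add-one smoothing (assumption)
for cluster, da in example_pairs:
    emission[da, cluster] += 1.0
emission /= emission.sum(axis=1, keepdims=True)

# Uniform start and transition probabilities as simple placeholders.
start = np.full(n_states, 1.0 / n_states)
trans = np.full((n_states, n_states), 1.0 / n_states)

# --- Step 3: Viterbi decoding over the sequence of cluster labels -----------
def viterbi(obs, start, trans, emission):
    """Most likely hidden-state sequence for an observation sequence."""
    T, S = len(obs), len(start)
    logdelta = np.zeros((T, S))
    backptr = np.zeros((T, S), dtype=int)
    logdelta[0] = np.log(start) + np.log(emission[:, obs[0]])
    for t in range(1, T):
        scores = logdelta[t - 1][:, None] + np.log(trans)      # (prev state, next state)
        backptr[t] = scores.argmax(axis=0)
        logdelta[t] = scores.max(axis=0) + np.log(emission[:, obs[t]])
    path = [int(logdelta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

predicted_das = viterbi(cluster_labels, start, trans, emission)
print([da_tags[i] for i in predicted_das[:10]])
```

In this framing, increasing the number of example pairs (hypothesis H3) directly sharpens the emission matrix, while K-Means alone would assign each cluster a DA without using the sequential context that the HMM transitions provide.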
