Keeping tabs on GPT-SWE: Classifying toxic output from generative language models for Swedish text generation

This is a Master's thesis from KTH/School of Electrical Engineering and Computer Science (EECS)

Abstract: Disclaimer: This paper contains content that may be perceived as offensive or upsetting. Considerable progress has been made in artificial intelligence (AI) and natural language processing (NLP) in recent years. Neural language models (LMs) like Generative Pre-trained Transformer 3 (GPT-3) show impressive results, generating high-quality text that appears to be written by a human. Neural language models are already applied in society, for example in chatbots or as assistance for writing documents. Because generative LMs are trained on large amounts of data from all kinds of sources, they can pick up toxic traits. GPT-3, for instance, has been shown to generate text with social biases, racism, sexism, and toxic language. Filtering for toxic content is therefore necessary to safely deploy models like GPT-3. GPT-3 is trained on and generates English text, but similar models for smaller languages have recently emerged. GPT-SWE is a novel model based on the same technical principles as GPT-3, able to generate Swedish text. Much like GPT-3, GPT-SWE has issues with generating toxic text. A promising approach to this problem is to train a separate toxicity classification model that labels the generated text as either toxic or safe. However, there is a substantial need for more research on toxicity classification for lower-resource languages, and previous studies for Swedish are sparse. This study explores the use of toxicity classifiers to filter Swedish text generated by GPT-SWE. This is investigated by creating and annotating a small Swedish toxicity dataset, which is used to fine-tune a Swedish BERT model. The best-performing toxicity classifier created in this work cannot be considered useful in an applied scenario. Nevertheless, the results encourage continued studies on BERT models that are pre-trained and fine-tuned in Swedish for building toxicity classifiers. The results also highlight the importance of high-quality datasets for fine-tuning and demonstrate the difficulties of toxicity annotation. Expert annotators, clear and well-defined annotation guidelines, and fine-grained labels are recommended. The study also provides insights into the potential of active learning methods for creating datasets in lower-resource languages. Implications of, and potential solutions to, toxicity in generative LMs are also discussed.
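To make the approach concrete, below is a minimal sketch (not the thesis code) of how a Swedish BERT model could be fine-tuned as a binary toxic/safe classifier and then used to filter generated text, using the Hugging Face transformers library. The checkpoint name KB/bert-base-swedish-cased, the toy training examples, and the hyperparameters are illustrative assumptions, not details taken from the thesis.

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Assumed Swedish BERT checkpoint; the thesis does not name a specific one.
    MODEL_NAME = "KB/bert-base-swedish-cased"
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

    # Hypothetical annotated examples: label 1 = toxic, label 0 = safe.
    texts = ["hypotetiskt giftigt exempel", "hypotetiskt ofarligt exempel"]
    labels = torch.tensor([1, 0])

    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    # One illustrative gradient step; a real run would loop over the dataset.
    model.train()
    loss = model(**batch, labels=labels).loss  # cross-entropy over the two classes
    loss.backward()
    optimizer.step()

    # Filtering: keep a generated text only if the classifier predicts "safe".
    model.eval()
    with torch.no_grad():
        enc = tokenizer(["genererad text från GPT-SWE"], return_tensors="pt")
        pred = model(**enc).logits.argmax(dim=-1).item()
    is_safe = pred == 0

In an applied filter of this kind, the classifier would run on each candidate generation before it reaches a user, and a threshold on the toxic-class probability could be tuned to trade missed toxic outputs against over-filtering of safe text.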
