A General Approach to Inaudible Adversarial Perturbations in a Black-box Setting

This is a Master's thesis from KTH/Skolan för elektroteknik och datavetenskap (EECS)

Author: Johan Sörell; [2021]


Abstract: Deep learning is currently deployed in many speech recognition systems. While these systems can achieve state-of-the-art performance, they are known to be susceptible to adversarial perturbations: minor perturbations to the input data, crafted specifically to cause erroneous behavior from the system. Some previous work has placed the perturbations in accordance with psychoacoustics, i.e. in regions of a signal where human perception is limited. In this work, a general method for optimizing perturbations according to psychoacoustics is presented. The formulation allows a non-gradient-based optimization strategy to be implemented, and two greedy optimization algorithms are developed using the proposed method. Inaudible perturbations are shown to be ineffective, which conforms with the current academic understanding. However, when the perturbations are allowed to be 18 dB stronger than the psychoacoustically defined perceptual limit, a targeted success rate of 64% and an untargeted success rate of 87% are achieved on a keyword spotting task.
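To make the idea concrete, the following is a minimal sketch of a greedy, gradient-free attack whose spectral perturbation is capped at a psychoacoustic masking threshold plus a dB margin. It is not the thesis' actual algorithm: the black-box score function score_fn, the per-bin masking threshold mask_threshold_db, and all parameter values (iters, step, the 18 dB margin default) are illustrative assumptions.

```python
import numpy as np

def clip_to_threshold(delta_spec, mask_threshold_db, margin_db=18.0):
    """Clip each perturbation bin so its magnitude stays below the
    (assumed, per-bin) masking threshold plus a margin, both in dB."""
    limit = 10.0 ** ((mask_threshold_db + margin_db) / 20.0)   # dB -> linear magnitude
    mag = np.abs(delta_spec)
    scale = np.minimum(1.0, limit / np.maximum(mag, 1e-12))
    return delta_spec * scale

def greedy_attack(spec, mask_threshold_db, score_fn, margin_db=18.0,
                  iters=500, step=0.05, seed=None):
    """Greedy black-box search over a (real-valued) spectrogram:
    propose a random change to one time-frequency bin, project it back
    under the threshold, and keep it only if the black-box score improves
    (e.g. the model's probability of a hypothetical target keyword)."""
    rng = np.random.default_rng(seed)
    delta = np.zeros_like(spec)
    best = score_fn(spec + delta)
    for _ in range(iters):
        idx = tuple(rng.integers(0, s) for s in spec.shape)    # pick one bin
        proposal = delta.copy()
        proposal[idx] += step * rng.standard_normal()
        proposal = clip_to_threshold(proposal, mask_threshold_db, margin_db)
        score = score_fn(spec + proposal)
        if score > best:                                       # greedy acceptance
            delta, best = proposal, score
    return delta, best
```

Setting margin_db to 0 in this sketch corresponds to keeping the perturbation within the perceptual limit; the abstract's result suggests that only the relaxed (+18 dB) budget yields high attack success.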
