Adversarial attacks on intrusion detection systems using LSTM classifier
Authors:
Abstract:
This paper discusses adversarial attacks on machine learning models and their classification. Methods for assessing the robustness of an LSTM classifier to adversarial attacks are investigated. The JSMA and FGSM attacks, chosen because of the transferability of adversarial examples between machine learning models, are discussed in detail. A "poisoning" attack on the LSTM classifier is proposed, and methods of protection against the considered adversarial attacks are formulated.