Detecting adversarial samples in intrusion detection systems using machine learning models

Machine learning and knowledge control systems
Authors:
Abstract:

The problem of protecting machine learning models used in intrusion detection systems from adversarial attacks is considered. Possible methods of protection against adversarial samples, based on data anomaly detectors and an autoencoder, are analyzed. The results of an experimental study of these protective mechanisms are presented, demonstrating high efficiency in detecting distorted (adversarial) data when a Random Forest model is used as the classifier.
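As a rough illustration of the autoencoder-based defense summarized above, the following minimal sketch flags adversarial samples by their reconstruction error before they reach a Random Forest intrusion-detection classifier. The synthetic data, network sizes, and 99th-percentile threshold are assumptions for demonstration only and do not reflect the authors' actual experimental setup.

```python
# Hedged sketch: an autoencoder trained on benign traffic features; samples whose
# reconstruction error exceeds a threshold are rejected as likely adversarial,
# the rest are passed to a Random Forest IDS classifier.
# Dataset, feature count, and threshold percentile are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder benign network-traffic features and labels (assumption).
X_benign = rng.normal(size=(1000, 20))
y_benign = rng.integers(0, 2, size=1000)

# Autoencoder realized as an MLP trained to reconstruct its own input.
autoencoder = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
autoencoder.fit(X_benign, X_benign)

# Reconstruction-error threshold estimated from clean data (assumed percentile).
reconstruction = autoencoder.predict(X_benign)
errors = np.mean((X_benign - reconstruction) ** 2, axis=1)
threshold = np.percentile(errors, 99)

# Downstream intrusion-detection classifier.
ids_model = RandomForestClassifier(n_estimators=100, random_state=0)
ids_model.fit(X_benign, y_benign)

def classify_with_defense(x):
    """Reject a sample as likely adversarial if its reconstruction error
    exceeds the threshold; otherwise pass it to the Random Forest."""
    err = np.mean((x - autoencoder.predict(x.reshape(1, -1))[0]) ** 2)
    if err > threshold:
        return "adversarial"
    return ids_model.predict(x.reshape(1, -1))[0]

# Example: a heavily perturbed sample should trip the detector.
print(classify_with_defense(X_benign[0] + 5.0))
```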