Adversarial attacks against a machine learning-based intrusion detection system
The paper analyzes relevant sources in the field of mounting modern adversarial attacks against a network intrusion detection system whose analyzer is based on machine learning methods. The process of building such a system is summarized, and common mistakes made by developers at each stage, which attackers can exploit, are identified. A classification of adversarial attacks against machine learning models is given, and the best-known adversarial attacks against neural networks and decision-tree ensembles are analyzed. The existing limitations on applying adversarial attacks to "random forest" intrusion detection models are noted, and poisoning and evasion attacks against the system under study are implemented in practice. Possible defense strategies are considered, and the effectiveness of the most common approach, adversarial training, is assessed experimentally. It is concluded that the robustness of the machine learning model in use against adversarial attacks cannot be guaranteed, and that defense strategies providing such guarantees need to be sought.
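To make the experimental part of the abstract concrete, the sketch below shows one way such experiments can be prototyped. Everything in it is an assumption rather than the paper's actual setup: scikit-learn's RandomForestClassifier, a synthetic dataset standing in for network-flow features, a label-flipping poisoning attack, a random-search evasion attack (tree ensembles expose no gradients), and adversarial training by augmenting the training set with correctly labelled adversarial examples.

```python
# Illustrative sketch only: label-flipping poisoning, random-search evasion,
# and adversarial training against a random-forest classifier. The dataset,
# attack parameters, and library choices are assumptions, not the paper's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for network-flow features (benign = 0, attack = 1).
X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("clean accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# --- Poisoning: flip the labels of a fraction of the training set. ---
def poison_labels(y, fraction, rng):
    y_p = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_p[idx] = 1 - y_p[idx]
    return y_p

clf_poisoned = RandomForestClassifier(n_estimators=100, random_state=0)
clf_poisoned.fit(X_tr, poison_labels(y_tr, 0.3, rng))
print("accuracy after poisoning:",
      accuracy_score(y_te, clf_poisoned.predict(X_te)))

# --- Evasion: random-search perturbations within a small budget. ---
def evade(model, x, true_label, budget=0.5, tries=100):
    """Return a perturbed copy of x that flips the prediction, or None."""
    for _ in range(tries):
        x_adv = x + rng.uniform(-budget, budget, size=x.shape)
        if model.predict(x_adv.reshape(1, -1))[0] != true_label:
            return x_adv
    return None

adv_X, adv_y = [], []
for xi, yi in zip(X_te[:100], y_te[:100]):
    x_adv = evade(clf, xi, yi)
    if x_adv is not None:
        adv_X.append(x_adv)
        adv_y.append(yi)
print("evasion success rate:", len(adv_X) / 100)

# --- Adversarial training: retrain on data augmented with correctly
# labelled adversarial examples generated against the original model. ---
aug_X, aug_y = [], []
for xi, yi in zip(X_tr[:500], y_tr[:500]):
    x_adv = evade(clf, xi, yi)
    if x_adv is not None:
        aug_X.append(x_adv)
        aug_y.append(yi)

X_tr_aug = np.vstack([X_tr, np.array(aug_X)]) if aug_X else X_tr
y_tr_aug = np.concatenate([y_tr, aug_y]) if aug_y else y_tr
clf_robust = RandomForestClassifier(n_estimators=100, random_state=0)
clf_robust.fit(X_tr_aug, y_tr_aug)

if adv_X:
    print("robust model accuracy on adversarial examples:",
          accuracy_score(adv_y, clf_robust.predict(np.array(adv_X))))
```

Note that such adversarial training only hardens the model against the specific perturbations seen during retraining; as the abstract concludes, it offers no formal robustness guarantee.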