The impact of adversarial attacks on deep learning models

Machine learning and knowledge control systems
Authors:
Abstract:

This study presents a comparative analysis of the robustness of modern deep learning architectures against adversarial attacks. It focuses on three representative models, EfficientNet-B0, MobileNetV2, and Vision Transformer (ViT-B16), which together illustrate the evolution of architectures from convolutional networks to transformer-based approaches. The experimental evaluation was conducted on the ISIC-2019 medical dataset of dermoscopic images of skin lesions. To assess model robustness, a comprehensive set of digital and physical attacks was employed, including DeepFool, Carlini-Wagner, AutoAttack, Boundary Attack, and Patch Attack. The analysis demonstrated that all evaluated models exhibit significant vulnerability to targeted perturbations: optimization-based attacks reduce classification accuracy by more than 55 percentage points, while physical attacks can disrupt model predictions even without access to internal parameters. The Vision Transformer (ViT-B16) showed relative resilience to minor perturbations, indicating the potential of attention-based architectures for improving robustness, though complete protection remains unattainable. The results underscore the need for integrated approaches to adversarial robustness that combine architectural modifications, regularization techniques, and adaptive training, a direction of particular importance for critical domains such as medicine, transportation, and security systems.
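
For illustration, the evaluation protocol summarized above can be reproduced in outline with off-the-shelf attack implementations. The following sketch is not the authors' code: it runs two of the named attacks (DeepFool and Carlini-Wagner) against a pretrained EfficientNet-B0 via the torchattacks library, and the model configuration, class count, and attack hyperparameters are assumptions chosen for demonstration.

```python
# Minimal robustness-evaluation sketch (illustrative, not the paper's code).
# Assumptions: timm provides the backbone, torchattacks provides the attacks,
# and `test_loader` yields ISIC-2019 images scaled to [0, 1], as torchattacks expects.
import torch
import timm
import torchattacks

device = "cuda" if torch.cuda.is_available() else "cpu"

# EfficientNet-B0 backbone; 8 output classes, matching the ISIC-2019 training categories.
model = timm.create_model("efficientnet_b0", pretrained=True, num_classes=8)
model = model.eval().to(device)

# Two of the optimization-based attacks named in the abstract.
attacks = {
    "DeepFool": torchattacks.DeepFool(model, steps=50, overshoot=0.02),
    "C&W-L2":   torchattacks.CW(model, c=1.0, kappa=0, steps=100, lr=0.01),
}

def adversarial_accuracy(attack, loader):
    """Fraction of examples still classified correctly after the attack."""
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv = attack(images, labels)          # adversarially perturbed batch
        preds = model(adv).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Usage (given an ISIC-2019 evaluation DataLoader named `test_loader`):
# for name, atk in attacks.items():
#     print(f"{name}: accuracy under attack = {adversarial_accuracy(atk, test_loader):.3f}")
```

Comparing clean accuracy against the accuracy returned under each attack yields the percentage-point drop reported in the abstract; the same loop extends directly to AutoAttack, Boundary Attack, and patch-based attacks.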