Latest issues
- 2025, Issue 3
- 2025, Special Issue
- 2025, Issue 2
- 2025, Issue 1
Articles from the section "Machine learning and knowledge control systems"
A protection method for the global model of federated learning systems based on a trust model
- Year: 2024
- Issue: 4
- Pages: 94-108
Authorship identification and verification using machine and deep learning methods
- Year: 2024
- Issue: 2
- Pages: 178-193
Detection of artificially synthesized audio files using graph neural networks
- Year: 2024
- Issue: 2
- Pages: 169-177
Automatic synthesis of 3D gas turbine blade shapes using machine learning
- Year: 2024
- Issue: 2
- Pages: 152-168
Intelligent mechanisms for extracting features of file modification in dynamic virus analysis
- Year: 2024
- Issue: 1
- Pages: 153-167
Protection of machine learning models from training data membership inference
- Year: 2024
- Issue: 1
- Pages: 142-152
Protection against adversarial attacks on image recognition systems using an autoencoder
- Year: 2023
- Issue: 1
- Pages: 119-127
A hybrid method for detecting evasion attacks in machine learning systems
- Year: 2023
- Issue: 1
- Pages: 104-110
Application of a genetic algorithm for the selection of neural network hyperparameters
- Year: 2025
- Issue: 2
- Pages: 112-120
Query processing in a Data Lake Management System based on a universal data model
- Year: 2025
- Issue: 2
- Pages: 96-111
A study of adversarial attacks on classical machine learning models in the context of network threat detection
- Year: 2025
- Issue: 3
- Pages: 147-164
From exploitation to protection: analysis of methods for defending against attacks on LLMs
- Year: 2025
- Issue: 3
- Pages: 110-120
Detecting adversarial samples in intrusion detection systems using machine learning models
- Year: 2025
- Issue: 1
- Pages: 59-68
From exploitation to protection: a deep dive into adversarial attacks on LLMs
- Year: 2025
- Issue: 1
- Pages: 43-58
Protecting neural network models from privacy violation threats in federated learning using optimization methods
- Year: 2025
- Issue: 1
- Pages: 21-29