A protection method for the global model of federated learning systems based on a trust model

Machine learning and knowledge control systems
Authors:
Abstract:

The paper addresses the problem of ensuring the security of the global model in federated learning systems. The proposed protection method, based on update verification by a trusted group of nodes, ensures that only correct updates are taken into account during aggregation of the global model. An experimental study demonstrates that the developed method rapidly identifies and isolates adversarial clients performing label-flipping and noise-injection attacks.
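The idea of aggregating only verified updates can be illustrated with a minimal sketch. Note that the function and parameter names below (`aggregate_verified`, `verify`, `threshold`) are hypothetical and not taken from the paper; the `verify` callable stands in for whatever trust score the trusted group of nodes assigns to each client update.

```python
import numpy as np

def aggregate_verified(updates, weights, verify, threshold=0.5):
    """Aggregate only client updates approved by a verification step.

    updates   : list of 1-D numpy arrays (one model update per client)
    weights   : list of client weights (e.g. local dataset sizes)
    verify    : callable mapping an update to a trust score in [0, 1]
              (a stand-in for the trusted nodes' verification)
    threshold : minimum trust score required for inclusion
    """
    accepted = [(u, w) for u, w in zip(updates, weights)
                if verify(u) >= threshold]
    if not accepted:
        raise ValueError("no updates passed verification")
    total = sum(w for _, w in accepted)
    # FedAvg-style weighted average over the verified updates only;
    # rejected (adversarial) updates never influence the global model.
    return sum(u * (w / total) for u, w in accepted)
```

For example, a norm-based `verify` would reject an update whose magnitude is far outside the range of honest updates, which is one simple way a noise-injection attack could be screened out.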