Protecting neural network models from privacy violation threats in federated learning using optimization methods
Authors:
Abstract:
This paper proposes an approach to countering privacy-violation threats in federated learning. The approach uses optimization methods to transform the weights of local neural network models, producing new weights for transmission to the node that performs joint gradient descent; this prevents an attacker from intercepting the original local model weights. Experimental studies confirm the effectiveness of the proposed approach.