Federated learning has been spotlighted as a way to train neural networks
on distributed data without requiring individual nodes to share their data.
Unfortunately, it has also been shown that adversaries may be able to extract
local data from the model parameters transmitted during federated learning.
A recent solution based on the secure aggregation primitive enabled
privacy-preserving federated learning, but at the expense of significant extra
communication/computational resources. In this paper, we propose a
low-complexity scheme that provides data privacy using substantially reduced
communication/computational resources relative to the existing secure solution.
The key idea behind the suggested scheme is to design the topology of
secret-sharing nodes as a sparse random graph instead of the complete graph
used in the existing solution. We first derive the necessary and
sufficient condition on the graph to guarantee both reliability and privacy. We
then suggest using the Erdős-Rényi graph in particular and provide
theoretical guarantees on the reliability/privacy of the proposed scheme.
Through extensive real-world experiments, we demonstrate that our scheme, using
only $20 \sim 30\%$ of the resources required in the conventional scheme,
maintains virtually the same levels of reliability and data privacy in
practical federated learning systems.
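
To illustrate the topology change at the heart of the scheme, the sketch below contrasts the number of pairwise secret-share exchanges under a sparse Erdős-Rényi topology with that of the complete graph used by conventional secure aggregation. This is a minimal illustration, not the paper's implementation: the function name `build_er_topology`, the seed handling, and the choice of edge probability $p$ on the order of $\log n / n$ are assumptions made for the example.

```python
import math
import random

def build_er_topology(num_clients: int, edge_prob: float, seed: int = 0):
    """Sample an Erdos-Renyi graph G(n, p): each pair of clients is connected
    (i.e., exchanges secret shares) independently with probability edge_prob.
    This is an illustrative sketch, not the authors' construction."""
    rng = random.Random(seed)
    neighbors = {i: set() for i in range(num_clients)}
    for i in range(num_clients):
        for j in range(i + 1, num_clients):
            if rng.random() < edge_prob:
                neighbors[i].add(j)
                neighbors[j].add(i)
    return neighbors

n = 100
# Assumption: p a few multiples of log(n)/n, a standard regime in which G(n, p)
# is connected with high probability; the exact threshold in the paper may differ.
p = 3 * math.log(n) / n
topology = build_er_topology(n, p)

sparse_edges = sum(len(v) for v in topology.values()) // 2
complete_edges = n * (n - 1) // 2
print(f"pairwise share exchanges: {sparse_edges} (sparse) vs {complete_edges} (complete)")
```

The expected number of edges is $p \cdot n(n-1)/2$, so the per-round secret-sharing communication and computation scale directly with the chosen edge probability rather than with the full $O(n^2)$ pairwise exchanges of the complete-graph scheme.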