Decentralized Federated Learning (DFL) has been proposed as an advancement over centralized Machine Learning that can mitigate privacy concerns when building data-driven systems from sensitive data. This benefit has been discussed by both academic [1] and industry [2] voices. However, research has identified and demonstrated several attacks that lead to varying degrees of privacy impact [3]. Several of these risks are of particular relevance to this thesis topic.
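To make the setting concrete, the following minimal sketch shows one parameter-averaging round in a small serverless DFL network. The node names, ring topology, and toy parameter vectors are hypothetical and not taken from the cited works; they only illustrate how peers exchange and aggregate model updates without a central server.

```python
# Illustrative sketch of one decentralized averaging round (hypothetical setup).
from statistics import mean

def average_params(param_sets):
    """Coordinate-wise average of several parameter vectors."""
    return [mean(values) for values in zip(*param_sets)]

# Hypothetical topology: each node only knows its direct neighbours.
topology = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
local_models = {"A": [0.1, 0.9], "B": [0.4, 0.6], "C": [0.7, 0.3]}

# One DFL round: every node collects its neighbours' parameters and averages
# them with its own; no central server is involved.
new_models = {
    node: average_params([local_models[node]] + [local_models[n] for n in neighbours])
    for node, neighbours in topology.items()
}
print(new_models)
```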
This thesis explores the integration of mix nets into DFL platforms. By routing model updates through a chain of mixes, the mix net provides unlinkability and anonymity for user data and thereby strengthens the communication channels between participants. Obfuscating the information flow in this way mitigates adversarial threats and makes the DFL network more robust, including against node failures. In addition, the decentralized nature of mix nets reduces reliance on central servers, fostering a more resilient and trustworthy collaborative learning environment. The research contributes to privacy-preserving machine learning by examining how mix nets and DFL platforms can complement each other towards a more secure, private, and efficient federated learning ecosystem.
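As a first intuition for how a mix net could sit between DFL participants, the sketch below wraps each model update in one encryption layer per mix; every mix then strips its layer, batches, and shuffles the messages so that an observer cannot link an update to the node that sent it. The node names and toy updates are hypothetical, this is not the concrete protocol developed in the thesis, and the example assumes the third-party `cryptography` package for the Fernet cipher.

```python
# Illustrative mix-net sketch (hypothetical names; not the thesis protocol).
import random
from cryptography.fernet import Fernet  # pip install cryptography

MIX_KEYS = [Fernet.generate_key() for _ in range(3)]  # one key per mix node
MIXES = [Fernet(k) for k in MIX_KEYS]

def wrap(update: bytes) -> bytes:
    """Sender side: apply the mixes' layers in reverse order (onion encryption)."""
    for mix in reversed(MIXES):
        update = mix.encrypt(update)
    return update

def run_mixnet(batch: list[bytes]) -> list[bytes]:
    """Each mix removes its layer, then shuffles the batch before forwarding."""
    for mix in MIXES:
        batch = [mix.decrypt(msg) for msg in batch]
        random.shuffle(batch)  # breaks the link between sender and message order
    return batch

senders = {f"node_{i}": f"update_{i}".encode() for i in range(4)}
wrapped = [wrap(update) for update in senders.values()]
print(run_mixnet(wrapped))  # updates arrive unlinkable to the nodes that sent them
```

In an actual deployment the mixes would be separate machines and the batching and shuffling would happen per hop over the network; the single-process loop here only demonstrates the layered-encryption and shuffling idea.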
[1] S. Niknam, H. S. Dhillon, and J. H. Reed: "Federated Learning for Wireless Communications: Motivation, Opportunities, and Challenges," IEEE Communications Magazine, vol. 58, no. 6, pp. 46-51, June 2020.
[2] B. McMahan and A. Thakurta (Google Research): "Federated Learning with Formal Differential Privacy Guarantees," available online, last visited March 12, 2024.
[3] D. Pasquini, M. Raynal, and C. Troncoso: "On the (In)security of Peer-to-Peer Decentralized Machine Learning," 2023 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 2023, pp. 418-436.
[4] E. Hallaji, R. Razavi-Far, M. Saif, B. Wang, and Q. Yang: "Decentralized Federated Learning: A Survey on Security and Privacy," available online, last visited March 12, 2024.
Supervisor: Jan von der Assen