FLiP enhances privacy in Federated Learning through local-global dataset distillation: following the Principle of Least Privilege, it shares only the information essential for model training, thereby mitigating privacy risks.
DPFedBank is a novel framework designed to enable financial institutions to collaboratively train machine learning models without sharing raw data, ensuring robust data privacy through Local Differential Privacy (LDP) and comprehensive policy enforcement.
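DPFedBank's exact LDP mechanism is not detailed here; the following is a minimal sketch of the generic clip-and-noise pattern an institution could apply to its local model update before sharing it (the function name, clipping bound, and Laplace mechanism are illustrative assumptions, not the paper's specification):

```python
import numpy as np

def ldp_perturb(update, epsilon, clip_norm=1.0, rng=None):
    """Clip a local model update and add Laplace noise (generic
    epsilon-LDP sketch; hypothetical, not DPFedBank's exact mechanism)."""
    rng = rng or np.random.default_rng(0)
    update = np.asarray(update, dtype=float)
    # Bound the L1 sensitivity by clipping the update.
    norm = np.abs(update).sum()
    if norm > clip_norm:
        update = update * (clip_norm / norm)
    # Laplace scale = sensitivity / epsilon for epsilon-LDP.
    noise = rng.laplace(scale=clip_norm / epsilon, size=update.shape)
    return update + noise
```

Because the noise is added on the client before anything leaves the institution, the server never observes the raw update.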
FheFL is a new federated learning algorithm that uses fully homomorphic encryption (FHE) and a novel aggregation scheme based on users' non-poisoning rates to address both privacy and security concerns in federated learning environments.
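The aggregation idea can be sketched in plaintext as a weighted average over client updates, with weights given by each user's estimated non-poisoning rate; in FheFL the corresponding weighted sum is evaluated homomorphically over FHE ciphertexts, which this sketch omits (names and the normalization rule are illustrative assumptions):

```python
import numpy as np

def weighted_aggregate(updates, non_poisoning_rates):
    """Aggregate client updates weighted by estimated non-poisoning rates
    (plaintext sketch; FheFL performs this under fully homomorphic
    encryption)."""
    rates = np.asarray(non_poisoning_rates, dtype=float)
    weights = rates / rates.sum()  # normalize to a convex combination
    stacked = np.stack([np.asarray(u, dtype=float) for u in updates])
    return np.tensordot(weights, stacked, axes=1)  # weighted average
```

Clients suspected of poisoning receive low rates and therefore contribute little to the global model.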
Upcycled-FL, a novel federated learning strategy that applies first-order approximation at every even round of model update, can significantly reduce information leakage and computational cost while maintaining model performance.
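The even-round idea can be sketched as follows: odd rounds run genuine local training, while even rounds replace it with a first-order extrapolation from the last two global models, so no fresh client data is touched that round. The extrapolation form and the `beta` coefficient below are illustrative assumptions, not Upcycled-FL's exact update rule:

```python
def upcycled_step(w_prev, w_curr, round_idx, local_train, beta=1.0):
    """One Upcycled-FL-style round (sketch, not the paper's exact rule).

    Odd rounds perform real local training; even rounds reuse previous
    progress via the first-order step w + beta * (w - w_prev), saving
    computation and limiting information leakage from client data.
    """
    if round_idx % 2 == 1:  # odd round: genuine local update
        return local_train(w_curr)
    # even round: extrapolate from the last two global models
    return [c + beta * (c - p) for c, p in zip(w_curr, w_prev)]
```

Since even rounds consume no new client gradients, roughly half the rounds incur neither local compute nor additional data exposure.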
A privacy-preserving federated learning framework is proposed that uses random coding and system immersion tools to protect the privacy of local and global models without compromising model performance or system efficiency.
Federated learning and differential privacy can be combined to enable large-scale machine learning over distributed datasets while providing rigorous privacy guarantees.
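A standard way to combine the two is the DP-FedAvg recipe: clip each client update to a fixed L2 norm, average, and add Gaussian noise calibrated to the clipping bound. The sketch below illustrates that generic pattern (parameter names are illustrative):

```python
import numpy as np

def dp_fedavg_round(client_updates, clip, noise_multiplier, rng=None):
    """One round of differentially private federated averaging
    (generic DP-FedAvg sketch)."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for u in client_updates:
        u = np.asarray(u, dtype=float)
        norm = np.linalg.norm(u)
        # Scale down any update whose L2 norm exceeds the clip bound.
        clipped.append(u * min(1.0, clip / max(norm, 1e-12)))
    mean = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the per-client sensitivity clip / n.
    sigma = noise_multiplier * clip / len(client_updates)
    return mean + rng.normal(scale=sigma, size=mean.shape)
```

The `noise_multiplier` trades accuracy against the strength of the privacy guarantee, which is then tracked across rounds with a privacy accountant.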
Collaborative federated learning protocols must balance privacy guarantees and model accuracy to be mutually beneficial for all participants.
AerisAI is a proposed framework for secure decentralized AI collaboration that combines differential privacy with homomorphic encryption.
The ALI-DPFL algorithm improves differentially private federated learning in resource-constrained scenarios by adaptively choosing the number of local iterations.
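As a rough illustration of the adaptive-local-iteration idea (this heuristic is an assumption for exposition, not ALI-DPFL's actual schedule), a client might spend more local iterations while plenty of privacy budget remains and fewer as it runs out:

```python
def adaptive_local_iters(remaining_budget, total_budget,
                         max_iters=10, min_iters=1):
    """Pick the local iteration count for the next round, scaled by the
    fraction of privacy budget left (illustrative heuristic only)."""
    frac = max(0.0, min(1.0, remaining_budget / total_budget))
    return max(min_iters, round(min_iters + frac * (max_iters - min_iters)))
```

Tapering local work as the budget depletes keeps resource-constrained clients from spending privacy budget on iterations that no longer improve the model.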
Coupled Tensor Train Decomposition (CTT) is a novel approach proposed for privacy-preserving federated learning networks.
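The building block is the tensor-train (TT) decomposition itself; the sketch below shows the generic TT-SVD procedure for a single tensor, not CTT's coupled variant, which additionally links factors across parties:

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Decompose a tensor into tensor-train cores via sequential
    truncated SVDs (generic TT-SVD sketch)."""
    shape = tensor.shape
    mat = np.asarray(tensor, dtype=float)
    cores, rank = [], 1
    for n in shape[:-1]:
        mat = mat.reshape(rank * n, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(rank, n, r))  # core (r_prev, n, r)
        mat = s[:r, None] * vt[:r]                  # carry remainder forward
        rank = r
    cores.append(mat.reshape(rank, shape[-1], 1))   # final core
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into the full tensor (for checking)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])
```

Truncating at `max_rank` compresses the model parameters that clients exchange, which is what makes TT formats attractive for communication-efficient, privacy-aware federated learning.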