Fake Traffic Injection Attacks and Global-Local Inconsistency Detection for Securing Federated Learning-based Wireless Traffic Prediction
Federated learning (FL) offers a distributed framework for training a global model across multiple base stations without compromising the privacy of their local network data. However, the security of FL-based distributed wireless systems, particularly for regression-based wireless traffic prediction (WTP), remains inadequately investigated. This work introduces a novel fake traffic injection (FTI) attack that undermines an FL-based WTP system by injecting fabricated traffic distributions while requiring only minimal knowledge of the target. It also proposes a defense mechanism, termed global-local inconsistency detection (GLID), which strategically removes abnormal model parameters to mitigate the impact of model poisoning attacks on WTP.
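To make the defense idea concrete, the following is a minimal, hypothetical sketch of a global-local inconsistency check: each base station's submitted update is scored by its distance from the current global model, updates whose distances are robust outliers are discarded, and the remainder are averaged. The function name `glid_aggregate`, the distance metric, and the median/MAD threshold are illustrative assumptions, not the paper's exact GLID algorithm.

```python
import numpy as np

def glid_aggregate(global_params, local_updates, tau=2.0):
    """Hypothetical global-local inconsistency filter (illustrative only).

    Scores each local update by its Euclidean distance from the global
    model, flags robust outliers via median absolute deviation (MAD),
    and averages only the updates deemed consistent.
    """
    # Distance of each base station's update from the global model.
    dists = np.array([np.linalg.norm(u - global_params) for u in local_updates])
    # Robust center and scale of the distance distribution.
    med = np.median(dists)
    mad = np.median(np.abs(dists - med)) + 1e-12  # avoid division by zero
    # Keep updates within tau robust deviations of the median distance.
    keep = np.abs(dists - med) / mad <= tau
    aggregated = np.mean([u for u, k in zip(local_updates, keep) if k], axis=0)
    return aggregated, keep
```

Under this sketch, a poisoned update injected far from the benign cluster (as an FTI-style attacker would submit) yields a distance that is a clear outlier and is excluded before aggregation.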