The showcase for publications and contributions by ÉTS researchers

Fedchallenger: A robust challenge-response and aggregation strategy to defend poisoning attacks in federated learning

Moyeen, M. A., Kaur, Kuljeet, Agarwal, Anjali, Manzano, S. Ricardo, Zaman, Marzia and Goel, Nishith. 2025. "FedChallenger: A robust challenge-response and aggregation strategy to defend poisoning attacks in federated learning". IEEE Access.
(In press)

PDF: Kaur-K-2025-31489.pdf - Published version
License: Creative Commons CC BY.
Download (2MB)

Abstract

Growing data privacy concerns in smart applications have spurred the development of Federated Learning (FL), a novel approach enabling heterogeneous clients to jointly train a global model without exchanging private data. However, FL faces significant challenges in aggregating model updates from different client devices, as malicious participants can poison the data and model updates to corrupt the global model. To enhance the global model's accuracy, many state-of-the-art defence strategies in FL rely on aggregation-based security mechanisms. However, the global model can be more accurate if an attacker is excluded from the training. Therefore, this research proposes a dual-layer defence mechanism called FedChallenger to detect and prevent malicious client participation in the FL training process. The defence mechanism incorporates zero-trust challenge-response-based trusted exchange in the first layer, whereas, in the second layer, it uses a variant of the Trimmed-Mean aggregation strategy that uses pairwise cosine similarity along with Median Absolute Deviation (MAD) for aggregation to mitigate the malicious model parameters. Extensive evaluation using MNIST, FMNIST, EMNIST, and CIFAR-10 datasets demonstrates that the proposed FedChallenger outperforms state-of-the-art approaches, including Stake, Shap, Cluster, Trimmed-Mean, Krum, FedAvg, and DUEL, across both attack and non-attack scenarios. Under adversarial conditions with model and data poisoning attacks, FedChallenger achieves a 3-10% improvement in global model accuracy over the closest contender, along with 1.1-2.2 times faster convergence. Additionally, it attains a 2-3% higher F1-Score than the best competing technique while maintaining robustness against varying attack intensities across different dataset complexities.
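The second-layer strategy described above (a Trimmed-Mean variant that screens client updates with pairwise cosine similarity and MAD before averaging) can be illustrated with a minimal NumPy sketch. Note this is an assumed reconstruction from the abstract alone, not the authors' implementation: the function name `robust_aggregate`, the similarity scoring, and the thresholds are hypothetical choices.

```python
import numpy as np

def robust_aggregate(updates, trim_ratio=0.1, mad_threshold=2.5):
    """Hypothetical sketch of a cosine-similarity + MAD filtered
    trimmed-mean aggregation over flattened client model updates."""
    U = np.stack(updates)                          # (n_clients, n_params)
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    V = U / np.clip(norms, 1e-12, None)            # unit-norm updates
    sim = V @ V.T                                  # pairwise cosine similarity
    n = len(updates)
    # score each client by its mean similarity to every other client
    scores = (sim.sum(axis=1) - 1.0) / (n - 1)
    med = np.median(scores)
    mad = np.median(np.abs(scores - med))          # Median Absolute Deviation
    # keep clients whose score is not a MAD outlier (likely honest)
    keep = np.abs(scores - med) <= mad_threshold * max(mad, 1e-12)
    kept = U[keep]
    # coordinate-wise trimmed mean over the surviving updates
    k = int(trim_ratio * len(kept))
    S = np.sort(kept, axis=0)
    if k > 0 and len(kept) > 2 * k:
        S = S[k:len(kept) - k]
    return S.mean(axis=0)
```

With four identical honest updates and one sign-flipped poisoned update, the attacker's similarity score falls far below the median and is excluded, so the aggregate matches the honest updates. The abstract's first layer (the zero-trust challenge-response exchange) would run before this step, so only authenticated clients reach the aggregator at all.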

Document type: Peer-reviewed journal article
Professor: Kaur, Kuljeet
Affiliation: Electrical Engineering
Deposited on: 21 August 2025 14:19
Last modified: 24 September 2025 22:40
URI: https://espace2.etsmtl.ca/id/eprint/31489
