Proceedings of the International Conference on Intelligent Systems and Digital Transformation (ICISD 2025)

DistillFed: Enhancing Personalized LLM Performance through Federated Learning

Authors
S. Prathap1, *, M. Thenmozhi2, Dusabimana Jean de Dieu3
1Department of Artificial Intelligence, SRM Institute of Science and Technology, Chennai, Kattankulathur, Tamil Nadu, India
2Department of Networking and Communications, SRM Institute of Science and Technology, Chennai, Kattankulathur, Tamil Nadu, India
3MSc in IoT-WISENET, ACEIOT-UR-CST, Kigali, Rwanda
*Corresponding author. Email: ps4363@srmist.edu.in
Available Online 31 October 2025.
DOI
10.2991/978-94-6463-866-0_70
Keywords
Federated Learning; Differential Privacy; Personalized Large Language Models (LLMs); Knowledge Distillation; Parameter-Efficient Fine-Tuning (PEFT); LoRA (Low-Rank Adaptation)
Abstract

The deployment of large language models (LLMs) on edge devices faces three major challenges: high computational resource requirements, threats to user data privacy, and performance degradation caused by model compression in resource-constrained environments. To mitigate these issues, we introduce DistillFed, a framework that integrates federated learning with chain-of-thought distillation to enable efficient, personalized LLMs while preserving data privacy. Our method proceeds in two stages. First, a global teacher model is collaboratively trained on decentralized devices via a differentially private federated learning protocol with secure aggregation and adaptive gradient clipping. In the second stage, each client device trains a lightweight student model with multi-task learning, concurrently predicting task labels and generating rationales, thereby transferring both the knowledge and the reasoning behavior of the global model. Notably, our rationale-based distillation achieved a 14.2% accuracy improvement over baseline federated fine-tuning on highly non-IID data distributions for client-side models. Moreover, the framework reduces the communication overhead of traditional federated learning by transmitting compressed rationales. DistillFed offers a practical solution for deploying privacy-preserving, personalized LLMs in edge computing contexts, especially in sectors such as healthcare and finance, where data sensitivity and model efficiency are critical.
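The stage-two objective described in the abstract, client-side multi-task distillation that jointly predicts task labels and generates the teacher's rationales, can be summarized as a weighted combination of two losses. The sketch below is a minimal illustration, not the authors' implementation; the model heads, the mixing weight `alpha`, and the tensor shapes are assumptions.

```python
# Minimal sketch (not the paper's code) of a stage-2 multi-task
# distillation loss: the student predicts task labels and reproduces
# the teacher's chain-of-thought rationale tokens.
import torch
import torch.nn.functional as F

def multitask_distill_loss(label_logits, labels,
                           rationale_logits, rationale_token_ids,
                           alpha=0.5, pad_id=0):
    """Combine the task-label loss with the rationale-generation loss.

    label_logits:        (batch, num_classes)  student label head output
    labels:              (batch,)              ground-truth task labels
    rationale_logits:    (batch, seq, vocab)   student decoder output
    rationale_token_ids: (batch, seq)          teacher-generated rationale tokens
    """
    # Standard supervised loss on the task labels.
    label_loss = F.cross_entropy(label_logits, labels)

    # Token-level loss on the teacher's rationale, ignoring padding.
    rationale_loss = F.cross_entropy(
        rationale_logits.reshape(-1, rationale_logits.size(-1)),
        rationale_token_ids.reshape(-1),
        ignore_index=pad_id,
    )

    # alpha (assumed hyperparameter) balances label prediction
    # against rationale generation.
    return alpha * label_loss + (1.0 - alpha) * rationale_loss
```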

Copyright
© 2025 The Author(s)
Open Access
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.


Volume Title
Proceedings of the International Conference on Intelligent Systems and Digital Transformation (ICISD 2025)
Series
Atlantis Highlights in Intelligent Systems
Publication Date
31 October 2025
ISBN
978-94-6463-866-0
ISSN
2589-4919
DOI
10.2991/978-94-6463-866-0_70

Cite this article

TY  - CONF
AU  - S. Prathap
AU  - M. Thenmozhi
AU  - Dusabimana Jean de Dieu
PY  - 2025
DA  - 2025/10/31
TI  - DistillFed: Enhancing Personalized LLM Performance through Federated Learning
BT  - Proceedings of the International Conference on Intelligent Systems and Digital Transformation (ICISD 2025)
PB  - Atlantis Press
SP  - 861
EP  - 874
SN  - 2589-4919
UR  - https://doi.org/10.2991/978-94-6463-866-0_70
DO  - 10.2991/978-94-6463-866-0_70
ID  - Prathap2025
ER  -