Wu, J., Chen, S., Yang, Y., Li, Y., Hou, S., Jing, R., Wang, Z., Chen, W., & Tian, Z. (2024). FedDTPT: Federated Discrete and Transferable Prompt Tuning for Black-Box Large Language Models. arXiv preprint arXiv:2411.00985.
This paper introduces FedDTPT, a federated learning framework designed to address the challenges of privacy and efficiency in fine-tuning large language models (LLMs) for specific downstream tasks. The research aims to enable the learning of transferable and interpretable prompts while safeguarding both the server's model parameters and the clients' data.
FedDTPT employs a token-level discrete prompt tuning strategy on the client side, using a feedback loop based on prediction accuracy to drive gradient-free prompt optimization through a masked language model (MLM) API. On the server side, an attention mechanism based on semantic similarity filters prompt tokens from all clients, with embedding-distance elbow detection and DBSCAN clustering used to improve the selection.
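To make the server-side aggregation step more concrete, the sketch below illustrates the general idea under simplifying assumptions: prompt tokens gathered from clients are embedded, grouped with DBSCAN, and one representative token per cluster is kept by cosine similarity to the cluster centroid (a stand-in for the paper's semantic-similarity attention; the elbow-based choice of the DBSCAN radius is omitted). The function name `aggregate_prompt_tokens`, the random placeholder embeddings, and the specific `eps`/`min_samples` values are illustrative, not taken from the paper.

```python
# Minimal sketch of clustering-based prompt-token aggregation on the server.
# Embeddings here are random stand-ins; a real system would use the frozen
# LLM's token embeddings for the candidate prompt tokens sent by clients.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics.pairwise import cosine_similarity


def aggregate_prompt_tokens(tokens, embeddings, eps=0.5, min_samples=2):
    """Cluster client prompt tokens and keep one representative per cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(embeddings)
    selected = []
    for cluster_id in set(labels):
        if cluster_id == -1:  # DBSCAN marks noise points with -1; drop them
            continue
        idx = np.where(labels == cluster_id)[0]
        centroid = embeddings[idx].mean(axis=0, keepdims=True)
        # Simplified "semantic attention": keep the token closest to the centroid.
        sims = cosine_similarity(embeddings[idx], centroid).ravel()
        selected.append(tokens[idx[np.argmax(sims)]])
    return selected


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    client_tokens = ["classify", "categorize", "sentiment", "label", "emotion"]
    fake_embeddings = rng.normal(size=(len(client_tokens), 16))
    print(aggregate_prompt_tokens(client_tokens, fake_embeddings, eps=5.0))
```

The appeal of this kind of aggregation is that the server only ever sees discrete tokens and their embeddings, never client data or gradients, which is consistent with the black-box, privacy-preserving setting the paper targets.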
Experimental results demonstrate that FedDTPT outperforms state-of-the-art methods in accuracy, communication overhead, and robustness to non-IID data in a black-box setting. The optimized prompts are also transferable, allowing them to be applied to other LLMs.
FedDTPT offers a practical and effective solution for privacy-preserving and efficient fine-tuning of LLMs in federated learning scenarios. The use of discrete and transferable prompts addresses limitations associated with continuous prompts, enabling wider applicability and knowledge sharing among clients.
This research contributes significantly to the field of federated learning by introducing a novel approach for prompt tuning that prioritizes both privacy and efficiency. The proposed framework has the potential to facilitate collaborative LLM training across multiple devices while mitigating privacy concerns and reducing computational demands.
While FedDTPT demonstrates promising results, further exploration is needed to investigate its performance on a wider range of downstream tasks and with larger LLMs. Additionally, future research could explore the integration of more sophisticated clustering algorithms and the development of adaptive strategies for prompt length optimization.