The Learning-to-Cache (L2C) method accelerates the inference of diffusion transformers by learning, without retraining the model weights, which layers can be cached and reused across timesteps, leading to significant speedups with minimal impact on image quality.
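A minimal sketch of the caching idea, assuming a frozen stack of transformer blocks and a learned per-layer binary mask (`cache_mask`); the class and attribute names are illustrative, and the actual method decides caching at a finer granularity than whole layers:

```python
import torch
import torch.nn as nn

class CachingDiT(nn.Module):
    """Wraps frozen transformer blocks and, on designated timesteps, reuses
    each flagged layer's residual from the previous timestep instead of
    recomputing it (an illustrative simplification of layer caching)."""

    def __init__(self, blocks: nn.ModuleList, cache_mask: torch.Tensor):
        super().__init__()
        self.blocks = blocks                   # frozen DiT blocks
        self.cache_mask = cache_mask           # [num_layers] booleans: True = reuse cache
        self._residuals = [None] * len(blocks)

    def forward(self, x: torch.Tensor, reuse_step: bool) -> torch.Tensor:
        for i, block in enumerate(self.blocks):
            if reuse_step and self.cache_mask[i] and self._residuals[i] is not None:
                x = x + self._residuals[i]     # skip the block, reuse its cached contribution
            else:
                out = block(x)
                self._residuals[i] = out - x   # cache this layer's contribution
                x = out
        return x
```

A sampler built on this could alternate between a full step that refreshes the cache and one or more cached steps that skip the flagged layers.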
This paper proposes AAggFF, a novel framework that achieves client-level fairness in federated learning by casting it as a sequential decision-making problem at the central server.
This paper proposes AAggFF, a novel framework leveraging online convex optimization (OCO) to improve client-level fairness in federated learning by adaptively adjusting mixing coefficients based on client performance feedback.
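A minimal sketch of the OCO idea under simplifying assumptions: a single exponentiated-gradient step on the probability simplex that up-weights clients reporting higher losses. This generic online update stands in for AAggFF's actual decision rule, and the function and parameter names are illustrative:

```python
import numpy as np

def update_mixing_coefficients(p: np.ndarray, client_losses: np.ndarray,
                               eta: float = 0.1) -> np.ndarray:
    """One exponentiated-gradient step on the simplex of mixing coefficients.

    Clients with higher reported losses receive larger aggregation weights in
    the next round, pushing the global model toward uniformly good performance
    across clients. A generic OCO update, not AAggFF's specific rule."""
    # Normalize losses so the step size is scale-free (illustrative choice).
    losses = client_losses / (np.abs(client_losses).max() + 1e-12)
    p_new = p * np.exp(eta * losses)   # up-weight under-performing clients
    return p_new / p_new.sum()

# Example round: 4 clients, starting from uniform mixing coefficients.
p = np.full(4, 0.25)
p = update_mixing_coefficients(p, np.array([0.9, 0.3, 0.5, 1.2]))
print(p)  # coefficients shift toward clients 0 and 3
```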
Deep neural networks (DNNs) may develop abstract internal representations, termed "symbols," which can be extracted and used to understand, improve, and safeguard DNN decision-making.
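One generic way to probe for such representations, not the paper's procedure: collect a hidden layer's activations with a forward hook and cluster them, treating cluster assignments as candidate "symbols"; the layer choice, clustering method, and `n_symbols` are assumptions made for illustration:

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def extract_candidate_symbols(model: nn.Module, layer: nn.Module,
                              inputs: torch.Tensor, n_symbols: int = 8):
    """Cluster a hidden layer's activations over a batch; each cluster id is
    treated as a candidate discrete 'symbol' (a generic probing recipe)."""
    feats = []
    handle = layer.register_forward_hook(
        lambda module, inp, out: feats.append(out.detach().flatten(1)))
    with torch.no_grad():
        model(inputs)                       # run a forward pass to capture activations
    handle.remove()
    acts = torch.cat(feats).cpu().numpy()
    return KMeans(n_clusters=n_symbols, n_init=10).fit_predict(acts)
```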
This paper presents Partial Information Decomposition of Features (PIDF), a new paradigm that goes beyond conventional feature-importance measures by using per-feature synergy, redundancy, and mutual information to perform data interpretation and feature selection simultaneously.
This paper introduces Partial Information Decomposition of Features (PIDF), a novel method that leverages information-theoretic concepts of synergy and redundancy to provide a more comprehensive understanding of feature importance for both data interpretability and feature selection.
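A rough illustration of the synergy/redundancy intuition on discrete data, using a plug-in interaction-information estimate rather than the full decomposition PIDF computes; the XOR target below is a standard example where neither feature is informative on its own:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def interaction_information(x1, x2, y):
    """I(X1; X2; Y) = I(X1, X2; Y) - I(X1; Y) - I(X2; Y).

    A positive value is a crude indicator of synergy between the two features
    with respect to the target, a negative value of redundancy. This is only
    a stand-in for the partial information decomposition used by PIDF."""
    joint = [f"{a}|{b}" for a, b in zip(x1, x2)]   # encode the feature pair as one variable
    return (mutual_info_score(joint, y)
            - mutual_info_score(x1, y)
            - mutual_info_score(x2, y))

# XOR example: each feature alone carries no information about y,
# but together they determine it exactly, so the estimate is clearly positive.
rng = np.random.default_rng(0)
x1 = rng.integers(0, 2, 10_000)
x2 = rng.integers(0, 2, 10_000)
y = x1 ^ x2
print(interaction_information(x1, x2, y))
```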
This paper introduces VRCP, a novel framework that leverages conformal prediction and neural network verification to construct prediction sets that maintain coverage guarantees for machine learning models, even in the presence of adversarial attacks.
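A minimal split-conformal sketch of the construction: the conformal quantile is computed from ordinary calibration scores, and robustness enters by admitting every label whose verified worst-case (lower-bounded) score still clears the threshold. The verification step itself is assumed to come from an external bound-propagation tool and is not implemented here:

```python
import numpy as np

def conformal_quantile(cal_scores: np.ndarray, alpha: float) -> float:
    """Finite-sample-corrected (1 - alpha) quantile of calibration nonconformity scores."""
    n = len(cal_scores)
    level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(level, 1.0), method="higher")

def prediction_set(lower_score_bounds: np.ndarray, qhat: float) -> np.ndarray:
    """Keep every label whose verified *lower* bound on its softmax score still
    clears the conformal threshold. Computing `lower_score_bounds` under an
    adversarial perturbation budget is the job of a neural-network verifier
    (e.g. interval bound propagation); here it is taken as given."""
    return np.nonzero(lower_score_bounds >= 1.0 - qhat)[0]

# Toy usage: calibration scores are 1 - softmax probability of the true class.
cal_scores = np.array([0.05, 0.12, 0.30, 0.08, 0.22, 0.15, 0.41, 0.09])
qhat = conformal_quantile(cal_scores, alpha=0.1)
print(prediction_set(np.array([0.95, 0.40, 0.88, 0.10]), qhat))  # labels kept in the set
```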
This study examines how large language models (LLMs) improve themselves through a self-correction process analogous to in-context alignment, and shows that Transformer models can use self-correction samples to learn in context and generate higher-quality responses.
Large language models (LLMs) can exhibit self-correction through in-context alignment, and self-correction performs better as the accuracy of the critique increases.
Large language models (LLMs) can leverage self-correction to improve their alignment and performance on tasks like mitigating social bias and defending against jailbreak attacks, particularly when equipped with accurate self-criticism mechanisms.
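A generic critique-and-revise loop illustrating the self-correction setup; `generate` stands for any text-completion callable, and the prompts and stopping rule are illustrative assumptions rather than the paper's protocol:

```python
def self_correct(prompt: str, generate, max_rounds: int = 2) -> str:
    """Draft an answer, ask for a critique, and revise; repeat up to max_rounds.
    `generate` is any text-completion callable (API client or local model)."""
    response = generate(prompt)
    for _ in range(max_rounds):
        critique = generate(
            f"Question:\n{prompt}\n\nDraft answer:\n{response}\n\n"
            "Point out factual errors, biased or unsafe content, and unsupported claims."
        )
        if "no issues" in critique.lower():   # naive stopping rule, illustrative only
            break
        response = generate(
            f"Question:\n{prompt}\n\nDraft answer:\n{response}\n\n"
            f"Critique:\n{critique}\n\nRewrite the answer, addressing the critique."
        )
    return response
```

The loop's benefit hinges on the critique step being accurate, matching the observation above that self-correction improves with more accurate self-criticism.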