Core Concepts
PCBM outperforms CBM in terms of Bayesian generalization error.
Abstract
The article discusses the Concept Bottleneck Model (CBM) and its partial variant, the Partial Concept Bottleneck Model (PCBM), which improves generalization by using only partially observed concepts. It analyzes the theoretical behavior of the Bayesian generalization error in PCBM and shows that PCBM outperforms CBM. The real log canonical threshold (RLCT) of PCBM is derived for a three-layered linear architecture, which provides an upper bound on the Bayesian generalization error. The study also considers the impact of data types on the results and potential applications to transfer learning.
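For context (a standard result from Watanabe's singular learning theory, not a claim specific to this article), the RLCT lambda determines the leading term of the expected Bayesian generalization error at sample size n:

    \mathbb{E}[G_n] = \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right)

so any upper bound on the RLCT of PCBM immediately gives an upper bound on the leading term of its Bayesian generalization error.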
Stats
Published in Transactions on Machine Learning Research (MM/YYYY)
Reviewed on OpenReview: https://openreview.net/forum?id=XXXX
Neural networks widely applied in research and practical areas (Goodfellow et al., 2016; Dong et al., 2021)
RLCTs studied for various singular models, including mixture models, neural networks, and Boltzmann machines
RLCT of CBM clarified for a three-layered linear architecture (Hayashi & Sawada, 2023)
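As a concrete illustration of the three-layered linear bottleneck named above, here is a minimal numpy sketch of a PCBM-style model in which only a slice of the bottleneck receives concept supervision. All dimensions, variable names (W1, W2, n_observed), and the squared-error losses are illustrative assumptions, not the paper's implementation:

    import numpy as np

    # Minimal sketch (not the paper's code) of a three-layered *linear*
    # network with a partially observed concept bottleneck (PCBM-style).
    # Dimensions and variable names are illustrative assumptions.

    rng = np.random.default_rng(0)

    n_in, n_concepts, n_out = 8, 5, 3   # input, bottleneck, output sizes
    n_observed = 3                      # only the first 3 concepts are supervised

    # Three-layered linear architecture: input -> concepts -> output.
    W1 = rng.normal(size=(n_concepts, n_in)) * 0.1   # input-to-concept weights
    W2 = rng.normal(size=(n_out, n_concepts)) * 0.1  # concept-to-output weights

    def forward(x):
        """Linear forward pass through the concept bottleneck."""
        c = W1 @ x        # all concepts (observed + unobserved), no nonlinearity
        y = W2 @ c        # task prediction from the full concept vector
        return c, y

    def pcbm_loss(x, c_obs, y_target):
        """Squared-error loss: supervise only the observed concept slice,
        while the label loss flows through every concept (the PCBM idea)."""
        c, y = forward(x)
        concept_loss = np.sum((c[:n_observed] - c_obs) ** 2)
        label_loss = np.sum((y - y_target) ** 2)
        return concept_loss + label_loss

    # Toy usage: one sample with annotations for the observed concepts only.
    x = rng.normal(size=n_in)
    c_obs = rng.normal(size=n_observed)   # labels exist for 3 of 5 concepts
    y_target = rng.normal(size=n_out)
    print(pcbm_loss(x, c_obs, y_target))

The only difference from a plain CBM sketch is that the concept loss runs over c[:n_observed] rather than the whole bottleneck, leaving the remaining units as unobserved concepts.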
Quotes
"PCBM outperforms the original CBM in terms of generalization." - Li et al., 2022
"The structure of partially observed concepts decreases the Bayesian generalization error compared with that of CBM." - Sawada & Nakamura, 2022