Core Concepts
A new method is proposed for learning transforms with optimal conditioning.
Abstract
This paper introduces a new sparsifying transform model that enforces explicit control over both the data representation quality and the condition number of the learned transforms. Numerical experiments confirm that it outperforms existing approaches. An alternating minimization algorithm and efficient numerical optimization procedures are also analyzed in detail. In the experiments, the proposed method reduces the representation error on image data more effectively than the existing methods.
Stats
κ ≤ e^r + √(e^(2r) − 1), where r = −log det(W*) + c‖W*‖_F² − n/2 − (n/2) log(2c).
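A minimal numerical check of this bound (a sketch only; the helper name and the test transform are illustrative, and c is the regularization weight from the objective above):

    import numpy as np

    # Evaluate r (the gap between the regularizer value at W and its
    # minimum n/2 + (n/2) log(2c)) and the implied condition-number bound.
    def bound_from_regularizer(W, c):
        n = W.shape[0]
        _, logdet = np.linalg.slogdet(W)  # log |det W|
        r = (-logdet + c * np.linalg.norm(W, "fro") ** 2
             - n / 2 - (n / 2) * np.log(2 * c))
        return np.exp(r) + np.sqrt(np.exp(2 * r) - 1)

    rng = np.random.default_rng(0)
    n, c = 8, 0.5
    # Perturb an orthogonal matrix scaled so all singular values sit at
    # the regularizer's minimizer 1/sqrt(2c); kappa(W) stays below the bound.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    W = Q / np.sqrt(2 * c) + 0.05 * rng.standard_normal((n, n))
    print(np.linalg.cond(W), "<=", bound_from_regularizer(W, c))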
The penalty strategy trades off conditioning against representation quality: additional penalty functions control the condition number of the optimal transform.
The objective follows that of (4), with a regularization term added to control the distance between the noisy and clean matrices through the parameter β.
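The exact form of (4) is not reproduced in these notes; as a structural sketch only, the objective below combines the usual transform-learning residual with the log-det/Frobenius conditioning regularizer and the β fidelity term between the noisy matrix Y and the clean estimate Z (all names and weights here are illustrative assumptions, not the paper's exact equation):

    import numpy as np

    def penalized_objective(W, X, Y, Z, c, lam, beta):
        # Sparsification residual on the (estimated) clean data:
        # how well W @ Z is approximated by the sparse codes X.
        residual = np.linalg.norm(W @ Z - X, "fro") ** 2
        # Conditioning penalty: -log det W + c ||W||_F^2 keeps the
        # condition number of W under control (see the bound above).
        _, logdet = np.linalg.slogdet(W)
        conditioning = -logdet + c * np.linalg.norm(W, "fro") ** 2
        # beta controls the distance between the noisy matrix Y and
        # the clean estimate Z.
        fidelity = beta * np.linalg.norm(Y - Z, "fro") ** 2
        return residual + lam * conditioning + fidelity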
We set C = 1.15 as per [5].
All methods are initialized with the same orthogonal transformation D ⊗ D, where D is the Discrete Cosine Transform matrix of size √n × √n and ⊗ is the Kronecker product.
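This initialization is easy to reproduce; a minimal sketch (assuming 8 × 8 image patches, i.e. n = 64; the patch size is an assumption):

    import numpy as np
    from scipy.fft import dct

    m = 8                                         # sqrt(n), so n = 64
    D = dct(np.eye(m), norm="ortho", axis=0)      # orthonormal DCT-II matrix
    W0 = np.kron(D, D)                            # D (x) D: the n x n 2-D DCT
    assert np.allclose(W0 @ W0.T, np.eye(m * m))  # W0 is orthogonal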
Quotes
"Unlike the existing approaches from the literature, in our paper, we consider a new sparsifying transform model that enforces explicit control over the data representation quality and the condition number of the learned transforms."
"We confirm through numerical experiments that our model presents better numerical behavior than the state-of-the-art."
"In all experiments, we choose to show two representation errors, regular and normalized."
"Our experiments include three methods: our proposed method from Algorithm 1, bresler method from [14], and Procrustes-based orthogonal transform learning method which we will call Ortho from now on."