The authors construct efficient interactive proof systems that enable a verifier to check the results of an untrusted learner for various classes of Boolean functions, including heavy Fourier characters, AC0[2] circuits, and k-juntas, while using significantly fewer samples than the learner.
This work proposes the first learning-based algorithms that optimize both the locations and values of the non-zero entries in sketching matrices, leading to significant improvements in accuracy and efficiency over classical sketching techniques and previous learning-based approaches.
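For context, a hedged sketch of the classical baseline such learned sketches improve on: a CountSketch matrix places a single random ±1 entry in each column, whereas the learning-based approach would optimize these positions and values. The dimensions and data below are illustrative placeholders, not taken from the paper.

```python
import numpy as np

def countsketch_matrix(m, n, rng):
    """Classical CountSketch: one nonzero (+/-1) per column at a random row."""
    S = np.zeros((m, n))
    rows = rng.integers(0, m, size=n)        # random positions; learned in the paper's setting
    signs = rng.choice([-1.0, 1.0], size=n)  # random values; likewise optimized when learned
    S[rows, np.arange(n)] = signs
    return S

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 8))
S = countsketch_matrix(16, 64, rng)
SA = S @ A  # a 16x8 sketch that approximately preserves A's geometry
```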
Extreme value theory provides an effective framework to model and predict the worst-case convergence times of machine learning algorithms during both the training and inference stages.
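As a generic illustration of the extreme-value machinery (not the paper's specific method), one can fit a generalized extreme value (GEV) distribution to block maxima of observed runtimes and read off tail quantiles as worst-case predictions; the runtimes below are synthetic.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic per-iteration runtimes for 200 runs of 50 iterations each (placeholder data).
rng = np.random.default_rng(0)
runtimes = rng.lognormal(mean=0.0, sigma=0.5, size=(200, 50))

# Block maxima: the worst-case time observed within each run.
maxima = runtimes.max(axis=1)

# Fit a GEV distribution to the block maxima.
shape, loc, scale = genextreme.fit(maxima)

# Estimate the 99th-percentile worst-case time from the fitted tail.
q99 = genextreme.ppf(0.99, shape, loc=loc, scale=scale)
```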
The paper proposes σ-PCA, a unified neural model that can learn both linear and nonlinear PCA as single-layer autoencoders. The model lets nonlinear PCA learn not only the second rotation, which maximizes statistical independence, but also the first rotation, which reduces dimensionality and orders components by variance, thereby eliminating the subspace rotational indeterminacy.
This article devises a new Monte Carlo Tree Search algorithm, called Thompson Sampling Decision Trees (TSDT), that can produce optimal decision trees in an online setting, and provides strong convergence guarantees for this algorithm.
The 1 Nearest Neighbor (1NN) classifier can achieve 100% robust accuracy on both training and test sets under reasonable assumptions, outperforming state-of-the-art adversarial training methods.
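For reference, the 1NN classifier itself is a few lines: each test point inherits the label of its nearest training point. This is only the base classifier, not the paper's robustness analysis.

```python
import numpy as np

def nn1_predict(X_train, y_train, X_test):
    """Classify each test point by the label of its nearest training point (Euclidean)."""
    # Pairwise squared distances between test and training points.
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return y_train[d2.argmin(axis=1)]

# Toy example: two well-separated clusters.
X_train = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y_train = np.array([0, 0, 1, 1])
X_test = np.array([[0.05, 0.1], [4.9, 5.2]])
pred = nn1_predict(X_train, y_train, X_test)  # -> array([0, 1])
```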
Natural Learning (NL) is a novel prototype-based machine learning algorithm that takes the explainability and interpretability of classification models to an extreme: it discovers sparse prototypes that serve as human-friendly decision rules, enabling simple and intuitive explanations of its predictions.
The authors propose a lightweight inference scheme, designed specifically for deep neural networks trained with the Forward-Forward algorithm, that significantly reduces the computational cost of inference while maintaining comparable classification performance.
An adaptive algorithm named Xenovert can dynamically partition a continuous input space into multiple uniform intervals, effectively mapping a source distribution to a shifted target distribution. This enables downstream machine learning models to adapt to drastic distribution shifts without retraining.
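A simplified sketch of the underlying quantile-partitioning idea (not the published Xenovert algorithm, which adapts its intervals online): cut each distribution into intervals of equal probability mass, so a point's interval index is approximately invariant under a shift of the distribution.

```python
import numpy as np

def fit_quantile_partition(x, k):
    """Return the k-1 interior cut points splitting x into k equal-mass intervals."""
    return np.quantile(x, np.linspace(0, 1, k + 1)[1:-1])

def to_interval(cuts, x):
    """Map a value to the index of the interval it falls in."""
    return np.searchsorted(cuts, x)

rng = np.random.default_rng(1)
source = rng.normal(0.0, 1.0, 10_000)
shifted = source + 3.0  # a drastic mean shift of the input distribution

cuts_src = fit_quantile_partition(source, 8)
cuts_tgt = fit_quantile_partition(shifted, 8)

# The interval index is preserved across the shift, so a downstream model
# consuming indices instead of raw values needs no retraining.
idx_before = to_interval(cuts_src, 0.5)
idx_after = to_interval(cuts_tgt, 3.5)
```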
This paper proposes a new anomaly detection model that fuses dictionary learning with one-class support vector machines (OC-SVM) to improve unsupervised anomaly detection performance.
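A loose two-stage approximation of this idea, using off-the-shelf scikit-learn components rather than the paper's fused objective: learn a sparse dictionary representation of nominal data, then fit a one-class SVM on the resulting codes. All data and hyperparameters below are illustrative.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))            # nominal training data (placeholder)
X_test = np.vstack([rng.normal(size=(5, 16)),   # nominal test points
                    rng.normal(loc=6.0, size=(5, 16))])  # shifted, anomalous points

# Stage 1: learn a sparse dictionary code for the nominal data.
dico = MiniBatchDictionaryLearning(n_components=8, alpha=1.0, random_state=0)
Z_train = dico.fit_transform(X_train)

# Stage 2: fit a one-class SVM on the sparse codes.
ocsvm = OneClassSVM(nu=0.1, gamma="scale").fit(Z_train)
pred = ocsvm.predict(dico.transform(X_test))  # +1 = nominal, -1 = anomaly
```

In the paper's fused model the two stages share one objective; the pipeline above only illustrates the division of labor between the representation and the one-class boundary.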