Core Concepts
Gradient-based decision tree ensembles combine the flexibility of end-to-end gradient descent with the inductive bias of axis-aligned splits, and can outperform deep learning methods on tabular data.
Summary
1. Abstract
Introduces GRANDE (GRAdieNt-Based Decision Tree Ensembles), a novel approach for learning decision tree ensembles with gradient descent.
Combines axis-aligned splits with gradient-based optimization.
Outperforms existing methods on classification datasets.
2. Introduction
Highlights challenges of tabular data and the need for effective methods.
Discusses the advantages of end-to-end gradient-based training.
3. Data Extraction
"We conducted an extensive evaluation on a predefined benchmark with 19 classification datasets."
"Our method outperforms existing gradient-boosting and deep learning frameworks on most datasets."
4. Background
Describes GradTree, which reformulates decision trees as arithmetic functions built from addition and multiplication.
Introduces a dense decision tree representation that makes the trees amenable to gradient-based optimization.
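To make the "arithmetic function" idea concrete, here is a minimal NumPy sketch of a dense (complete) soft decision tree: every leaf's prediction is weighted by a product of soft routing probabilities, so the whole tree is just additions and multiplications. This is an illustrative approximation, not the paper's implementation; the parameter layout (one threshold and feature index per internal node, traversed level by level) is an assumption for the sketch.

```python
import numpy as np

def dense_tree_predict(X, thresholds, feature_idx, leaf_values, depth):
    """Soft prediction of a complete tree of the given depth.

    A complete tree has 2**depth - 1 internal nodes and 2**depth leaves.
    thresholds, feature_idx: one entry per internal node (assumed layout).
    leaf_values: one value per leaf.
    """
    n = X.shape[0]
    num_leaves = 2 ** depth
    out = np.zeros(n)
    for leaf in range(num_leaves):
        prob = np.ones(n)  # probability of reaching this leaf
        node = 0
        for level in range(depth):
            # bit of the leaf index decides left (0) / right (1) at this level
            go_right = (leaf >> (depth - 1 - level)) & 1
            z = X[:, feature_idx[node]] - thresholds[node]
            # softsign split, rescaled from (-1, 1) to (0, 1)
            p_right = 0.5 * (z / (1.0 + np.abs(z)) + 1.0)
            prob *= p_right if go_right else (1.0 - p_right)
            node = 2 * node + 1 + go_right
        # prediction = sum over leaves of (path probability * leaf value)
        out += prob * leaf_values[leaf]
    return out
```

Because the output is a smooth function of the thresholds and leaf values, gradients flow through every split, which is what enables end-to-end training of the tree parameters.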
5. GRANDE Approach
Extends GradTree from single trees to tree ensembles, using the softsign function as a differentiable split function.
Proposes instance-wise weighting of the ensemble's estimators, improving both performance and interpretability.
6. Experimental Evaluation
Demonstrates superior performance of GRANDE over XGBoost, CatBoost, and NODE on various datasets.
7. Related Work
Compares GRANDE with existing tree-based, DL, and hybrid methods in tabular data analysis.