FIMP-HGA improves efficiency by applying the KM-M algorithm in the matching stage, and improves solution quality in the partitioning stage through a hybrid genetic algorithm with an elite strategy.
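As a rough illustration of this matching-inside-a-genetic-loop structure (an assumed structure, not the authors' code), the sketch below evolves bipartitions with an elite-preserving GA and scores each individual by an optimal matching between its two halves, using scipy's Hungarian implementation as a stand-in for KM-M.

```python
# Minimal sketch (assumed structure, not FIMP-HGA itself): an elite-preserving
# genetic loop over bipartitions, where each individual's fitness is an optimal
# matching between its two halves, solved with scipy's Hungarian method
# (a stand-in for KM-M here).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 6
pair_cost = rng.random((2 * n, 2 * n))  # toy pairwise cost between items

def fitness(perm):
    # First half of the permutation is group A, second half group B;
    # the matching stage pairs them at minimum total cost.
    a, b = perm[:n], perm[n:]
    sub = pair_cost[np.ix_(a, b)]
    rows, cols = linear_sum_assignment(sub)
    return sub[rows, cols].sum()

def evolve(pop_size=20, elite=2, generations=100):
    pop = [rng.permutation(2 * n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        nxt = pop[:elite]  # elite strategy: best individuals survive unchanged
        while len(nxt) < pop_size:
            child = pop[rng.integers(0, pop_size // 2)].copy()
            i, j = rng.choice(2 * n, size=2, replace=False)
            child[i], child[j] = child[j], child[i]  # swap mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

best = evolve()
print(fitness(best))
```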
Tensorized Ant Colony Optimization (TensorACO) leverages GPU acceleration and tensor-based computational methods to significantly improve the performance of Ant Colony Optimization (ACO) in solving large-scale Traveling Salesman Problems (TSPs).
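The core tensorization idea can be sketched in a few lines (NumPy stands in for a GPU tensor library here; this is an illustration of the general technique, not TensorACO's implementation): pheromone and heuristic values live in dense matrices, so one construction step for all ants is a single batched operation.

```python
# Illustrative sketch of tensorized ACO (not TensorACO itself): transition
# probabilities for all ants are computed in one vectorized operation.
import numpy as np

rng = np.random.default_rng(0)
n, ants, alpha, beta, rho = 50, 128, 1.0, 2.0, 0.1

dist = rng.random((n, n)) + np.eye(n)    # toy TSP distances
eta = 1.0 / dist                         # heuristic desirability
tau = np.ones((n, n))                    # pheromone matrix

current = rng.integers(0, n, size=ants)  # each ant's current city
visited = np.zeros((ants, n), dtype=bool)
visited[np.arange(ants), current] = True

# One vectorized construction step for every ant at once.
weights = (tau[current] ** alpha) * (eta[current] ** beta)
weights[visited] = 0.0                   # mask already-visited cities
probs = weights / weights.sum(axis=1, keepdims=True)

# Sample the next city per ant from its probability row.
next_city = np.array([rng.choice(n, p=row) for row in probs])

# Vectorized evaporation plus reinforcement on the traversed edges.
tau *= (1.0 - rho)
np.add.at(tau, (current, next_city), 1.0 / dist[current, next_city])
```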
Gauge transformation (GT) is a simple yet effective technique that can be seamlessly integrated into reinforcement learning (RL) models to enable continuous exploration and improvement of solutions for combinatorial optimization problems (COPs).
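For intuition, here is one concrete realization of a gauge transformation, assuming an Ising-form COP (an assumption on our part; the paper's RL setting may differ): flipping a subset of spins together with the signs of their incident couplings leaves every configuration's energy unchanged, so the search can be re-gauged and continued without losing solution quality.

```python
# A gauge transformation on an Ising-form COP: flipping spin subset g and the
# signs of incident couplings preserves the energy of the (transformed)
# configuration exactly.
import numpy as np

rng = np.random.default_rng(0)
n = 10
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
s = rng.choice([-1, 1], size=n)        # a candidate solution

def energy(J, s):
    return -0.5 * s @ J @ s

g = rng.choice([-1, 1], size=n)        # gauge: which spins to flip
J_gauged = J * np.outer(g, g)          # J'_ij = g_i g_j J_ij
s_gauged = g * s                       # s'_i  = g_i s_i

# The transformed pair has exactly the same energy as the original.
assert np.isclose(energy(J, s), energy(J_gauged, s_gauged))
print(energy(J, s))
```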
The algorithm computes a k-edge-connected spanning subgraph whose cost is no greater than that of an optimal (k+10)-edge-connected spanning subgraph.
Simple randomized parameter choices and elementary greedy heuristics can outperform complex algorithms and costly parameter tuning for the target set selection problem.
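To make the "elementary greedy" flavor concrete, here is a minimal sketch (with assumed details: a simple threshold cascade and a highest-degree seeding rule, not necessarily the exact heuristic from the paper).

```python
# Minimal target set selection sketch: seed nodes greedily by degree until the
# threshold cascade activates the whole graph.
def activate(adj, thresholds, seeds):
    # Threshold cascade: a node activates once enough neighbors are active.
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in active and sum(u in active for u in adj[v]) >= thresholds[v]:
                active.add(v)
                changed = True
    return active

def greedy_target_set(adj, thresholds):
    seeds = set()
    # Greedily seed the highest-degree inactive node until the cascade covers the graph.
    while len(activate(adj, thresholds, seeds)) < len(adj):
        active = activate(adj, thresholds, seeds)
        v = max((u for u in adj if u not in active), key=lambda u: len(adj[u]))
        seeds.add(v)
    return seeds

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
thresholds = {v: max(1, len(adj[v]) // 2) for v in adj}
print(greedy_target_set(adj, thresholds))
```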
Every n-vertex triangulation has a connected dominating set of size at most 10n/21.
The proposed DR-ALNS method leverages Deep Reinforcement Learning to dynamically select operators, adjust destroy severity, and control the acceptance criterion within the Adaptive Large Neighborhood Search (ALNS) algorithm, leading to more effective solutions for combinatorial optimization problems.
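The control structure can be sketched as follows: an external policy chooses the destroy severity and the acceptance threshold each iteration (a single destroy operator is used for brevity). The random `policy` below is a placeholder for DR-ALNS's trained DRL agent, and the toy routing objective is only for illustration.

```python
# ALNS skeleton with policy-controlled severity and acceptance; `policy` stands
# in for the DRL agent in DR-ALNS.
import random

random.seed(0)
points = [(random.random(), random.random()) for _ in range(30)]

def tour_length(tour):
    return sum(((points[a][0]-points[b][0])**2 + (points[a][1]-points[b][1])**2) ** 0.5
               for a, b in zip(tour, tour[1:] + tour[:1]))

def destroy(tour, k):
    removed = random.sample(tour, k)
    return [c for c in tour if c not in removed], removed

def repair(partial, removed):
    for c in removed:  # greedy cheapest insertion
        best = min(range(len(partial) + 1),
                   key=lambda i: tour_length(partial[:i] + [c] + partial[i:]))
        partial = partial[:best] + [c] + partial[best:]
    return partial

def policy(state):
    # Placeholder: DR-ALNS maps `state` to these choices with a learned policy.
    return {"severity": random.randint(2, 6), "accept_worse_by": random.uniform(0, 0.05)}

cur = list(range(len(points)))
best = cur[:]
for it in range(100):
    action = policy({"iter": it, "cost": tour_length(cur)})
    partial, removed = destroy(cur, action["severity"])
    cand = repair(partial, removed)
    # Policy-controlled acceptance: take the candidate unless it is too much worse.
    if tour_length(cand) <= tour_length(cur) + action["accept_worse_by"]:
        cur = cand
    if tour_length(cur) < tour_length(best):
        best = cur[:]
print(tour_length(best))
```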
The core message of this paper is that decoupling the routing and purchasing decisions and leveraging deep reinforcement learning yields a novel approach that efficiently constructs high-quality solutions for traveling purchaser problems, significantly outperforming well-established heuristic methods.
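The decoupling idea can be illustrated with a toy example: once a route over markets is fixed, the purchasing subproblem separates per product (buy each product at the cheapest visited market), so a learned policy only needs to construct the route. The data and the random "policy" below are placeholders, not the paper's model.

```python
# Decoupled routing/purchasing for a toy traveling purchaser instance:
# purchasing is solved exactly for any fixed route.
import numpy as np

rng = np.random.default_rng(0)
n_markets, n_products = 6, 4
travel = rng.random((n_markets + 1, n_markets + 1))  # node 0 is the depot
price = rng.random((n_markets, n_products)) + 0.1    # market x product prices

def route_cost(route):
    path = [0] + route + [0]
    return sum(travel[a][b] for a, b in zip(path, path[1:]))

def purchase_cost(route):
    # Exact purchasing subproblem for a fixed route: cheapest visited market per product.
    visited = np.array(route) - 1
    return price[visited].min(axis=0).sum()

def total_cost(route):
    return route_cost(route) + purchase_cost(route)

# Toy "policy": evaluate a few random routes; a DRL constructor would replace this.
routes = [list(rng.permutation(np.arange(1, n_markets + 1))[:4]) for _ in range(20)]
best = min(routes, key=total_cost)
print(best, total_cost(best))
```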
This paper studies adversarial combinatorial bandits with switching costs, derives lower bounds on the minimax regret, and proposes algorithms that approximately match these bounds under both bandit feedback and semi-bandit feedback.
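For reference, one standard way to formalize regret with switching costs (our notation; the paper's exact setup may differ): the learner pays its combinatorial action's loss plus a unit cost whenever the action changes between rounds, measured against the best fixed action in hindsight.

```latex
\[
R_T \;=\; \mathbb{E}\!\left[\sum_{t=1}^{T} \langle \ell_t, a_t\rangle
\;+\; \sum_{t=2}^{T} \mathbf{1}\{a_t \neq a_{t-1}\}\right]
\;-\; \min_{a \in \mathcal{A}} \sum_{t=1}^{T} \langle \ell_t, a\rangle,
\]
where $\mathcal{A} \subseteq \{0,1\}^d$ is the combinatorial action set and
$\ell_t$ is the adversarial loss vector at round $t$.
```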
The authors propose a two-stage graph pointer network (GPN) model that can efficiently solve large-scale quadratic assignment problems (QAPs) using reinforcement learning.
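For context, here is the minimal QAP objective such a constructive model is trained against (toy data; the GPN architecture itself is not sketched here): given flow matrix F and distance matrix D, a permutation pi mapping facilities to locations costs sum over i, j of F[i,j] * D[pi(i), pi(j)].

```python
# Minimal QAP objective: a constructive model builds pi one assignment at a
# time and is trained with this cost as the reward signal.
import numpy as np

rng = np.random.default_rng(0)
n = 8
F = rng.integers(0, 10, size=(n, n))  # flows between facilities
D = rng.integers(0, 10, size=(n, n))  # distances between locations

def qap_cost(pi):
    # D[np.ix_(pi, pi)][i, j] == D[pi[i], pi[j]]
    return int((F * D[np.ix_(pi, pi)]).sum())

pi = rng.permutation(n)
print(pi, qap_cost(pi))
```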