Key Concept
This research proposes a novel Spatial-Temporal Relative Transformer Network (ST-RTR) for skeleton-based human action recognition, leveraging quantum-inspired computing principles to enhance performance and overcome limitations of existing methods like ST-GCN.
Statistics
On NTU RGB+D 60, the ST-RTR model improved cross-subject (CS) and cross-view (CV) accuracy by 2.11% and 1.45%, respectively.
On NTU RGB+D 120, it improved CS and CV by 1.25% and 1.05%, respectively.
On the UAV-Human dataset, accuracy improved by 2.54%.
Quotes
"This research presents a new mechanism, the Spatial-Temporal Relative Transformer (ST-RTR), to overcome the limitations of existing Graph Convolutional Networks (GCNs), specifically ST-GCNs, for skeleton-based HAR."
"The quantum ST-RTR utilizes a modified relative transformer module to address issues such as fixed human body graph topology, limited spatial and temporal convolution, and overlooking kinematic similarities between opposing body parts."