Learning and Interpreting Responsibility Allocation in Multi-Agent Interactions from Data Using Differentiable Optimization and Control Barrier Functions
This paper proposes a data-driven method for quantifying and interpreting responsibility allocation in multi-agent interactions. Using control barrier functions and differentiable optimization, the method learns how each agent prioritizes safety constraints from its observed willingness to deviate from its desired behavior.
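To make the core ingredient concrete, the sketch below shows a minimal control-barrier-function (CBF) safety filter in which a scalar parameter splits the safety burden between two agents. This is an illustrative toy, not the paper's actual formulation: the names `gamma` (an agent's allotted fraction of the barrier's total decay budget, with fractions across agents summing to one), `alpha` (the class-K gain), and the single-integrator distance barrier are all assumptions introduced here for illustration. In the paper's method such allocation parameters would be learned from data via differentiable optimization rather than fixed by hand.

```python
import numpy as np

def cbf_halfspace_project(u_des, a, b):
    """Closed-form solution of: min ||u - u_des||^2  s.t.  a @ u >= b.

    If u_des already satisfies the half-space constraint it is returned
    unchanged; otherwise it is projected onto the constraint boundary.
    """
    if a @ u_des >= b:
        return u_des.copy()
    return u_des + ((b - a @ u_des) / (a @ a)) * a

def responsible_control(p_self, p_other, u_des, gamma, alpha=1.0, d_min=1.0):
    """One agent's CBF-filtered velocity command (single-integrator toy model).

    Barrier: h(p) = ||p_self - p_other||^2 - d_min^2  (positive when safe).
    The full pairwise CBF condition  dh/dt >= -alpha * h  is split between
    the two agents; this agent enforces only its share:
        grad_h(p_self) @ u_self >= -gamma * alpha * h.
    A smaller `gamma` leaves this agent less of the decay budget, forcing a
    more conservative (more responsible) control. `gamma` is hypothetical
    here; the paper learns such allocations from data.
    """
    diff = p_self - p_other
    h = diff @ diff - d_min**2     # barrier value
    a = 2.0 * diff                 # gradient of h w.r.t. p_self
    b = -gamma * alpha * h         # this agent's share of the decay budget
    return cbf_halfspace_project(u_des, a, b)
```

For example, two agents on a line at distance 2 with `d_min = 1` give `h = 3`; an agent commanded straight at the other is projected onto its constraint boundary, and shrinking `gamma` yields a visibly more conservative filtered velocity.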