Central concepts
Reinforcement learning is used to learn anticipatory mesh refinement policies, improving both solution accuracy and computational efficiency.
Statistics
"The normalized error observation includes a user-defined parameter 𝛼, which can be considered as a form of a relative error threshold that will be used in the reward function to indicate which elements are to be refined or coarsened."
"For the thresholds in Eq. (20), the maximum error threshold 𝑒𝜏,max is computed identically to Eq. (16) while the minimum error threshold is computed as 𝑒𝜏,min = 𝑒𝛽 𝜏,max."
Quotes
"Most importantly, this would allow for longer time intervals between mesh adaptation, which is particularly beneficial for compute architectures where bandwidth is the bottleneck."
"The primary purpose of this work is to evaluate the ability of RL and the proposed observation and reward formulations as methods of achieving more efficient anticipatory mesh refinement policies for complex nonlinear systems of equations."