Key Concepts
The article provides tools to obtain tight quadratic error bounds for floating-point algorithms that evaluate the hypotenuse function, expressed as a function of the unit round-off.
Summary
The article focuses on obtaining tight relative error bounds for floating-point algorithms that compute the hypotenuse function. It introduces a computer algebra-based approach to automate the error analysis of such algorithms, which aims to capture the correlations between the errors made at each step.
The key highlights and insights are:
Floating-point arithmetic is inherently inexact, and the impact of individual rounding errors on the final result can be catastrophic. Obtaining tight error bounds is important, especially when using low-precision formats.
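As a concrete illustration (not taken from the article), the standard model of rounding states that each floating-point operation satisfies fl(a op b) = (a op b)(1 + d) with |d| <= u, where u is the unit round-off (u = 2^-53 for IEEE binary64). A minimal Python sketch checks this for a single addition, using exact rational arithmetic to avoid any further rounding:

```python
from fractions import Fraction

# Unit round-off for IEEE binary64 (double precision).
u = Fraction(1, 2**53)

a, b = 0.1, 0.2
exact = Fraction(a) + Fraction(b)  # exact sum of the two stored doubles
computed = Fraction(a + b)         # the correctly rounded floating-point sum

# The single rounded addition satisfies |computed/exact - 1| <= u.
rel = abs(computed - exact) / exact
assert rel <= u
```

In low-precision formats (e.g. binary16, where u = 2^-11) the same per-operation bound is orders of magnitude larger, which is why tight bounds matter most there.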
The classical approach of propagating individual error bounds often grossly overestimates the real maximum error, because it accounts neither for operations that happen to be exact (error-free) nor for the fact that the relative error bound of a rounding is sharp only when the result is slightly above a power of 2.
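To see the gap between a propagated bound and observed errors, consider the naive algorithm fl(sqrt(fl(x*x) + fl(y*y))): two squarings, one addition, and one square root each contribute a factor (1 + d_i) with |d_i| <= u, and propagating them gives a first-order bound of roughly 2u. A hedged Python experiment (illustrative only, not the article's methodology) compares that bound against errors observed on random inputs, measured in exact rational arithmetic:

```python
from fractions import Fraction
import math, random

u = Fraction(1, 2**53)           # unit round-off for binary64
classical_bound = 2 * u          # first-order propagated bound (illustrative)

def rel_err(x, y):
    # Computed value of the naive algorithm, captured exactly as a rational.
    computed = Fraction(math.sqrt(x * x + y * y))
    exact_sq = Fraction(x) ** 2 + Fraction(y) ** 2
    # Compare squares to avoid needing an exact square root:
    # |computed/sqrt(exact_sq) - 1| ~ |computed^2/exact_sq - 1| / 2 to first order.
    return abs(computed ** 2 / exact_sq - 1) / 2

random.seed(0)
worst = max(rel_err(random.uniform(1, 2), random.uniform(1, 2))
            for _ in range(10_000))
assert worst < classical_bound   # sampled errors stay below the 2u bound
```

Random sampling typically finds errors noticeably below 2u; establishing how far below, for all inputs, is exactly the kind of question the article's approach is designed to answer rigorously.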
The authors propose a computer algebra-based approach to obtain generic quadratic error bounds, which are expressed as a continuous function of the unit round-off. This approach aims to capture the correlations between the errors made at each step of the algorithm.
The authors illustrate their approach by analyzing several algorithms for computing the hypotenuse function, ranging from elementary to quite challenging. They show that their approach often yields tighter bounds than previous results.
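For context, two representative members of this algorithm family can be sketched in Python: the elementary naive formula, and a common scaled variant that divides by the larger magnitude to avoid spurious overflow and underflow. These are generic textbook versions, not necessarily the exact algorithms analyzed in the article:

```python
import math

def hypot_naive(x, y):
    # Elementary algorithm: sqrt(x*x + y*y).
    # Accurate in the middle of the exponent range, but x*x can overflow.
    return math.sqrt(x * x + y * y)

def hypot_scaled(x, y):
    # Scale by the larger magnitude so the intermediate ratio is in [0, 1].
    a, b = abs(x), abs(y)
    if a < b:
        a, b = b, a
    if a == 0.0:
        return 0.0
    r = b / a
    return a * math.sqrt(1.0 + r * r)

print(hypot_naive(1e200, 1e200))   # inf: x*x overflows
print(hypot_scaled(1e200, 1e200))  # finite result near 1.414e200
```

The scaled variant trades the overflow problem for extra rounded operations (a division and a multiplication), which is precisely why a tight, automated error analysis is useful for comparing such algorithms.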
The key technical aspects of the approach include: (i) a step-by-step analysis of the algorithm to construct a system of polynomial equations and inequalities, (ii) the use of polynomial optimization techniques to find the maximum relative error, and (iii) the exploitation of the triangular structure of the polynomial systems to speed up the optimization.
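The flavor of steps (i) and (ii) can be conveyed with a drastically simplified sketch. For the naive algorithm, introduce one variable d_i per rounding with |d_i| <= u; to first order the relative error is linear in the d_i, so its maximum over the box is attained at a corner and can be found by enumeration. This first-order, corner-enumeration model is only an illustration: the article's actual method keeps quadratic terms, builds a full system of polynomial equations and inequalities, and applies genuine polynomial optimization exploiting the system's triangular structure.

```python
from fractions import Fraction
from itertools import product

u = Fraction(1, 2**53)  # unit round-off, kept as an exact rational

# Step (i), simplified: for z = fl(sqrt(fl(x*x) + fl(y*y))), the first-order
# relative error is  rho ~ (w*d1 + (1-w)*d2 + d3)/2 + d4,
# where w = x^2/(x^2 + y^2) lies in [0, 1].
def rho_linear(w, d1, d2, d3, d4):
    return (w * d1 + (1 - w) * d2 + d3) / 2 + d4

# Step (ii), simplified: maximize over the box |d_i| <= u.  Since the
# objective is linear in each d_i, it suffices to check the 2^4 corners.
best = max(abs(rho_linear(Fraction(1, 2), *corner))
           for corner in product([-u, u], repeat=4))
assert best == 2 * u   # recovers the classical first-order bound of 2u
```

The interest of the article's polynomial (rather than linearized) formulation is that it captures correlations the linear model discards, such as the square-root rounding not attaining its worst case simultaneously with the other errors, which is what yields bounds tighter than 2u.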