Core Concepts
Extreme value theory provides an effective framework to model and predict the worst-case convergence times of machine learning algorithms during both the training and inference stages.
Summary
The paper leverages extreme value theory (EVT) to predict the worst-case convergence times (WCCT) of machine learning (ML) algorithms. This is an important non-functional property, as timing is critical for the availability and reliability of ML systems.
The key observations are:
- WCCTs represent the extreme tail of the execution-time distribution, making EVT a natural framework for modeling and analyzing them.
- For a set of linear ML training algorithms, EVT achieves better accuracy in predicting WCCTs compared to the Bayesian factor method.
- For larger ML training algorithms and deep neural network inference, EVT is scalable and accurately predicts WCCTs in 57% and 75% of cases, respectively.
- EVT extrapolations are more accurate over longer horizons (e.g., predicting WCCT up to 10K queries vs. 500 queries).
- EVT may be more useful for the inference stage compared to the training stage.
The paper first provides background on extreme value theory and how it can be applied to model the statistics of worst-case convergence times. It then presents experiments on both micro-benchmarks and realistic ML algorithms to evaluate the feasibility, scalability, and usefulness of the EVT-based approach.
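To make the EVT approach concrete, the following is a minimal sketch (not the paper's implementation) of the standard peaks-over-threshold technique: collect convergence times, fit a Generalized Pareto Distribution to exceedances above a high threshold, and extrapolate a return level that serves as a WCCT estimate over a longer horizon. The method-of-moments fit and the synthetic timing data are illustrative assumptions.

```python
import math
import random
import statistics

def fit_gpd_mom(exceedances):
    """Method-of-moments fit of a Generalized Pareto Distribution
    to threshold exceedances; returns (shape xi, scale sigma)."""
    m = statistics.mean(exceedances)
    v = statistics.variance(exceedances)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return xi, sigma

def return_level(u, xi, sigma, n_obs, n_exc, horizon):
    """Level exceeded on average once per `horizon` observations:
    an EVT-style WCCT estimate extrapolated beyond the sample."""
    zeta = n_exc / n_obs  # empirical rate of threshold exceedance
    if abs(xi) < 1e-9:    # xi -> 0 limit: exponential tail
        return u + sigma * math.log(horizon * zeta)
    return u + (sigma / xi) * ((horizon * zeta) ** xi - 1.0)

# Synthetic "convergence times" standing in for measured runs
# (an assumption; the paper measures real ML training/inference).
random.seed(0)
times = [random.expovariate(1.0) + 0.2 * random.expovariate(0.2)
         for _ in range(5000)]

u = sorted(times)[int(0.95 * len(times))]   # 95th-percentile threshold
exc = [t - u for t in times if t > u]       # peaks over threshold
xi, sigma = fit_gpd_mom(exc)
wcct_10k = return_level(u, xi, sigma, len(times), len(exc), 10_000)
print(f"threshold={u:.2f} xi={xi:.3f} sigma={sigma:.3f} "
      f"WCCT@10k~{wcct_10k:.2f}")
```

The extrapolated return level grows with the horizon, which mirrors the paper's observation that EVT predictions are evaluated at horizons (e.g., 10K queries) well beyond the number of observed runs.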
Statistics
Beyond the aggregate percentages cited above (accurate WCCT predictions in 57% of larger training cases and 75% of DNN inference cases), the key claims are presented qualitatively rather than with detailed numerical data.
Quotes
The paper does not contain any direct quotes that are crucial to the key arguments.