Core Concepts
This research aims to increase the compression ratio and efficiency of the lossless component of the JPEG-XL image compression standard through algorithmic refinements and the integration of novel techniques.
Abstract
This research focuses on improving the lossless compression capabilities of the JPEG-XL image compression standard. The study begins by introducing the fundamental concepts of image compression, with a focus on lossless techniques. It then provides an overview of the JPEG-XL standard and the current research in this area.
The main objectives of this research are:
To develop a comprehensive benchmark application for evaluating and comparing the lossless compression performance of various algorithms, including JPEG-XL (a sketch of such a harness follows this list).
To identify potential areas of improvement in the JPEG-XL lossless compression algorithm and implement modifications, such as using different prediction methods, to enhance the compression ratio.
To compare the performance of the modified JPEG-XL lossless compression algorithm against the original implementation using the developed benchmark.
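A minimal sketch of what such a benchmark's core loop might look like, assuming each codec is driven through its command-line encoder. The registry layout and helper names below are hypothetical illustrations rather than the thesis's actual tool; the only real interface used is cjxl's `-d 0` flag, which selects lossless (distance 0) encoding.

```python
import os
import subprocess
import tempfile

# Codec registry: name -> (output extension, command builder). Keeping the
# codec-specific details here is one way to get the modular, extensible
# design described in the methodology below; a new codec is supported by
# registering another entry.
CODECS = {
    "jpeg-xl": (".jxl", lambda src, dst: ["cjxl", "-d", "0", src, dst]),
}

def compressed_size(codec, src):
    """Compress `src` with the named codec and return the size in bytes."""
    ext, cmd = CODECS[codec]
    with tempfile.TemporaryDirectory() as tmp:
        dst = os.path.join(tmp, "out" + ext)
        subprocess.run(cmd(src, dst), check=True, capture_output=True)
        return os.path.getsize(dst)

def benchmark(images):
    """Print the compressed size of every image under every codec."""
    for img in images:
        for codec in CODECS:
            print(f"{img}\t{codec}\t{compressed_size(codec, img)} bytes")
```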
The research methodology involves iterative development and testing of the benchmark application and the modified JPEG-XL lossless compression algorithm. The benchmark application is designed to be modular and extensible, allowing for the inclusion of additional compression algorithms in the future. The modifications to the JPEG-XL algorithm focus on the prediction stage, where three different prediction methods (Gradient-Adjusted Predictor, Gradient Edge Detection, and a modified Median Edge Detection) are implemented and tested.
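As a point of reference, here is a sketch of two of these predictors in their textbook forms. It follows the published JPEG-LS (MED) and CALIC (GAP) definitions rather than the exact libjxl code, and the thesis's modified MED may deviate from the plain MED shown here; `w`, `n`, `nw`, `ne`, `ww` and `nn` denote the causal neighbours to the left, above, above-left, above-right, two left, and two above of the current pixel.

```python
def med(w, n, nw):
    """Median Edge Detector (JPEG-LS / LOCO-I)."""
    if nw >= max(w, n):
        return min(w, n)   # edge detected: take the smaller neighbour
    if nw <= min(w, n):
        return max(w, n)   # edge detected: take the larger neighbour
    return w + n - nw      # smooth region: planar prediction

def gap(w, n, nw, ne, ww, nn):
    """Gradient-Adjusted Predictor (CALIC), a common simplified form."""
    dh = abs(w - ww) + abs(n - nw) + abs(n - ne)  # horizontal activity
    dv = abs(w - nw) + abs(n - nn)                # vertical activity
    if dv - dh > 80:   # sharp horizontal edge: predict from the west pixel
        return w
    if dh - dv > 80:   # sharp vertical edge: predict from the north pixel
        return n
    pred = (w + n) / 2 + (ne - nw) / 4            # base blend
    if dv - dh > 32:                              # weaker edges: nudge the
        pred = (pred + w) / 2                     # blend toward W or N
    elif dv - dh > 8:
        pred = (3 * pred + w) / 4
    elif dh - dv > 32:
        pred = (pred + n) / 2
    elif dh - dv > 8:
        pred = (3 * pred + n) / 4
    return pred
```

The thresholds (80/32/8) are the classic CALIC values. GAP's decision rule also hints at the results reported below: at strong edges it falls back to a pure horizontal or vertical prediction, while in flat regions it blends neighbours, which matches the image profile (flat colour plus sharp edges) for which the thesis measures its largest gains.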
The results show that while the modified JPEG-XL algorithms do not outperform the original on average, they achieve significant improvements in compression ratio for a subset of images characterized by areas of smooth color and sharp edges. The Gradient-Adjusted Predictor is found to be the most effective of the three modified predictors in this scenario.
The discussion covers the threats to validity, implications, limitations, and generalizability of the research results. The study concludes with a summary of the key findings and suggestions for future work, including the optimization of the context model to better accommodate the new prediction methods.
Stats
Average compressed file size for the Kodak image dataset, per predictor:
Original MED: 460495.31 bytes
Gradient Edge Detection (GED): 462468.99 bytes
Gradient-Adjusted Predictor (GAP): 461038.46 bytes
Average decrease in compressed file size on the DIV2K and CLIC images containing sharp edges and flat areas, per predictor:
MED: 2383.60 bytes
GED: 1805.90 bytes
GAP: 2990.82 bytes
Quotes
"The gradient-adjusted prediction algorithm introduced in CALIC is demonstrated to outperform the median edge detection and gradient edge detection predictors when substituted for the current gradient predictor in JPEG XL."
"Overall, using the gradient-adjusted predictor has led to improvements for images which contain areas of flat colour along with areas of strong edges."