Core Concepts
The author developed a model using a NARX neural network to estimate audiovisual quality in videoconferencing services; it outperforms other methods in terms of mean squared error and correlation coefficient.
Summary
The paper develops a parametric model for estimating audiovisual quality in real-time videoconferencing services. It introduces a NARX (nonlinear autoregressive with exogenous inputs) recurrent neural network that predicts perceived quality from bitstream parameters. The study compares the proposed model with existing machine learning methods and shows superior performance. The research highlights the importance of considering QoS parameters for accurate quality estimation in audiovisual services. By using such learning algorithms, service providers can monitor user experience and adjust parameters in real time.
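The NARX idea described above can be sketched in a few lines: the network estimates quality y(t) from delayed exogenous inputs x (e.g. bitstream parameters) and delayed outputs y. This is a minimal illustration of the tapped-delay-line feature construction only; the delay depth, feature count, and toy data here are hypothetical and not taken from the paper.

```python
import numpy as np

def narx_features(x, y, delay):
    """Build the tapped-delay-line feature matrix a NARX model regresses on.
    x: (T, n_features) exogenous inputs (e.g. bitstream parameters)
    y: (T,) quality scores (past outputs fed back as inputs)
    Returns an array of shape (T - delay, delay * (n_features + 1))."""
    rows = []
    for t in range(delay, len(x)):
        past_x = x[t - delay:t].ravel()   # delayed exogenous inputs
        past_y = y[t - delay:t]           # delayed (fed-back) outputs
        rows.append(np.concatenate([past_x, past_y]))
    return np.array(rows)

# Toy example: one exogenous feature, a delay of 2 (hypothetical values)
x = np.arange(10, dtype=float).reshape(-1, 1)
y = np.arange(10, dtype=float)
F = narx_features(x, y, delay=2)
print(F.shape)  # (8, 4)
```

A nonlinear regressor (the recurrent network in the paper) is then trained to map each feature row to the next quality score.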
Statistics
Our model outperforms state-of-the-art methods with MSE=0.150 and R=0.931.
ITU-T P.1201 model evaluates audiovisual quality separately on a five-point MOS scale.
Goudarzi et al. achieved a Pearson correlation coefficient of 0.93 and an RMSE of 0.237 under various test conditions.
Demirbilek et al. extended Goudarzi's work for videoconferencing using decision trees, random forest, and MLP algorithms.
Nine important parameters were selected from the INRS Bitstream Audiovisual Dataset for performance evaluation.
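The two figures of merit quoted above (MSE and Pearson correlation coefficient) can be computed as follows. The MOS values below are illustrative placeholders, not data from the INRS Bitstream Audiovisual Dataset.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error between subjective and predicted scores."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean((y_true - y_pred) ** 2))

def pearson_r(y_true, y_pred):
    """Pearson correlation coefficient between the two score series."""
    return float(np.corrcoef(y_true, y_pred)[0, 1])

# Hypothetical five-point MOS values for illustration only
y_true = [3.0, 4.2, 2.5, 4.8, 3.6]   # subjective scores
y_pred = [3.2, 4.0, 2.7, 4.6, 3.5]   # model estimates
print(round(mse(y_true, y_pred), 3))  # 0.034
print(round(pearson_r(y_true, y_pred), 3))
```

Lower MSE and higher R both indicate closer agreement with subjective MOS ratings, which is how the comparisons above are scored.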
Quotes
"Services like videoconferencing are sensitive to network conditions."
"Our model uses NARX recurrent neural network to estimate perceived quality."
"The proposed model outperforms existing methods in terms of MSE and correlation coefficient."