The relation between MOS and pairwise comparisons and the importance of cross-content comparisons
Authors: Zerman, E., Hulusic, V., Valenzise, G., Mantiuk, R.K. and Dufaux, F.
Journal: IS&T International Symposium on Electronic Imaging Science and Technology
eISSN: 2470-1173
DOI: 10.2352/ISSN.2470-1173.2018.14.HVEI-517
Abstract: Subjective quality assessment is considered a reliable method for quality assessment of distorted stimuli for several multimedia applications. The experimental methods can be broadly categorized into those that rate and those that rank stimuli. Although ranking directly provides an order of stimuli rather than a continuous measure of quality, the experimental data can be converted using scaling methods into an interval scale, similar to that provided by rating methods. In this paper, we compare the results collected in a rating (mean opinion scores) experiment to the scaled results of a pairwise comparison experiment, the most common ranking method. We find a strong linear relationship between the results of both methods, which, however, differs between contents. To improve the relationship and unify the scale, we extend the experiment to include cross-content comparisons. We find that the cross-content comparisons not only reduce the confidence intervals of the pairwise comparison results, but also improve the relationship with mean opinion scores.
https://eprints.bournemouth.ac.uk/30365/
Source: Scopus
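The abstract's statement that pairwise comparison data "can be converted using scaling methods into an interval scale" refers to psychometric scaling such as Thurstone or Bradley-Terry models. The sketch below is a minimal, generic Thurstone Case V least-squares scaling in Python, using a made-up count matrix; it is not the authors' code, and the scaling procedure actually used in the paper may differ (for example, a maximum-likelihood formulation).

import numpy as np
from scipy.stats import norm

def thurstone_case_v(counts, prior=0.5):
    # counts[i, j] = number of observers who preferred condition i over condition j.
    C = np.asarray(counts, dtype=float)
    trials = C + C.T
    # Empirical probability that i is preferred over j; the small prior keeps
    # probabilities away from 0 and 1, where the inverse normal CDF diverges.
    p = (C + prior) / (trials + 2.0 * prior)
    np.fill_diagonal(p, 0.5)
    # Case V least-squares solution: the score of condition i is the mean of the
    # z-scores in its row (scores are defined only up to an additive constant).
    z = norm.ppf(p)
    scores = z.mean(axis=1)
    return scores - scores.min()  # anchor the arbitrary origin at the worst condition

# Hypothetical data: four distortion levels of one content, 20 observers per pair.
counts = np.array([[ 0, 17, 19, 20],
                   [ 3,  0, 15, 18],
                   [ 1,  5,  0, 14],
                   [ 0,  2,  6,  0]])
print(thurstone_case_v(counts))  # interval-scale quality scores in z-score units

The returned scores form the interval scale that the paper compares against mean opinion scores; only score differences are meaningful, so the zero point is chosen arbitrarily.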
The relation between MOS and pairwise comparisons and the importance of cross-content comparisons
Authors: Zerman, E., Hulusic, V., Valenzise, G., Mantiuk, R. and Dufaux, F.
Conference: Human Vision and Electronic Imaging, IS&T International Symposium on Electronic Imaging (EI 2018)
Dates: 29 January-1 February 2018
https://eprints.bournemouth.ac.uk/30365/
Source: Manual
The relation between MOS and pairwise comparisons and the importance of cross-content comparisons
Authors: Zerman, E., Hulusic, V., Valenzise, G., Mantiuk, R. and Dufaux, F.
Conference: Human Vision and Electronic Imaging: IS&T International Symposium on Electronic Imaging (EI 2018)
Publisher: Society for Imaging Science and Technology
ISSN: 2470-1173
https://eprints.bournemouth.ac.uk/30365/
http://www.imaging.org/site/IST/IST/Conferences/EI/EI_2018/Conference/C_HVEI.aspx
Source: BURO EPrints
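The "strong linear relationship" reported between scaled pairwise-comparison (PC) scores and MOS, and the observation that it "differs between contents", can be illustrated with a per-content versus pooled linear fit. The numbers below are invented for illustration only; without cross-content comparisons each content's PC scale is only relative, so each content can carry its own offset, which is what the paper's added cross-content pairs are meant to resolve.

import numpy as np
from scipy.stats import linregress

# Invented example: PC scale values (z-score/JND-like units) and MOS (0-100)
# for four distortion levels of two different contents.
pc_a = np.array([0.0, 0.9, 1.7, 2.8]); mos_a = np.array([22, 41, 58, 79])
pc_b = np.array([0.0, 1.1, 2.0, 3.1]); mos_b = np.array([35, 52, 71, 88])

for name, pc, mos in (("content A", pc_a, mos_a), ("content B", pc_b, mos_b)):
    fit = linregress(pc, mos)
    print(f"{name}: MOS ~ {fit.slope:.1f} * PC + {fit.intercept:.1f}  (r = {fit.rvalue:.3f})")

# Pooled fit over both contents: because each content's PC scale has its own
# arbitrary origin, the single pooled line fits less well (lower r) than the
# per-content lines; unifying the scale is what cross-content comparisons address.
pooled = linregress(np.concatenate([pc_a, pc_b]), np.concatenate([mos_a, mos_b]))
print(f"pooled: r = {pooled.rvalue:.3f}")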