The relation between MOS and pairwise comparisons and the importance of cross-content comparisons

Authors: Zerman, E., Hulusic, V., Valenzise, G., Mantiuk, R. and Dufaux, F.

http://eprints.bournemouth.ac.uk/30365/

Start date: 29 January 2018

Subjective quality assessment is considered a reliable method for assessing the quality of distorted stimuli in several multimedia applications. The experimental methods can be broadly categorized into those that rate stimuli and those that rank them. Although ranking directly provides an order of stimuli rather than a continuous measure of quality, the experimental data can be converted, using scaling methods, into an interval scale similar to that provided by rating methods. In this paper, we compare the results collected in a rating (mean opinion score) experiment to the scaled results of a pairwise comparison experiment, the most common ranking method. We find a strong linear relationship between the results of both methods, which, however, differs across contents. To improve the relationship and unify the scale, we extend the experiment to include cross-content comparisons. We find that cross-content comparisons not only reduce the confidence intervals of the pairwise comparison results but also improve the relationship with mean opinion scores.
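The scaling step described in the abstract (converting pairwise-comparison counts into an interval scale) can be sketched with a Thurstone Case V solution, one common choice for this conversion; the abstract does not name the exact scaling model, so the function below is an illustrative assumption, not the authors' implementation. The count matrix, clamping constant, and function name are all hypothetical.

```python
from statistics import NormalDist
import numpy as np

def thurstone_case_v(counts):
    """Scale a pairwise-comparison count matrix to an interval scale.

    counts[i][j] = number of times stimulus i was preferred over j.
    Returns one quality score per stimulus (zero-mean, in units of the
    standard deviation of the assumed observer noise).
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.shape[0]
    inv = NormalDist().inv_cdf
    z = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            total = counts[i, j] + counts[j, i]
            # Clamp empirical proportions away from 0 and 1 so the
            # inverse normal CDF stays finite for unanimous votes.
            p = min(max(counts[i, j] / total, 0.5 / total), 1 - 0.5 / total)
            z[i, j] = inv(p)
    # Case V solution: average the z-scores over all opponents.
    return z.sum(axis=1) / (n - 1)

# Hypothetical example: 3 stimuli, 10 comparisons per pair;
# stimulus 0 wins most often, so it gets the highest score.
counts = [[0, 8, 9],
          [2, 0, 7],
          [1, 3, 0]]
scores = thurstone_case_v(counts)
print(scores)  # scores decrease from stimulus 0 to stimulus 2
```

Because the z-score matrix is antisymmetric, the resulting scores sum to zero; cross-content comparisons, as studied in the paper, add off-diagonal blocks linking stimuli of different contents to the same count matrix.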

Journal: IS and T International Symposium on Electronic Imaging Science and Technology

eISSN: 2470-1173

DOI: 10.2352/ISSN.2470-1173.2018.14.HVEI-517

© 2018, Society for Imaging Science and Technology.