DOI: 10.1145/1943552.1943563

Quantifying QoS requirements of network services: a cheat-proof framework

Published: 23 February 2011

ABSTRACT

Despite all the efforts devoted to improving the QoS of networked multimedia services, the baseline for such improvements has yet to be defined. In other words, although it is well recognized that better network conditions generally yield better service quality, the exact minimum level of network QoS required to ensure satisfactory user experience remains an open question.

In this paper, we propose a general, cheat-proof framework that enables researchers to systematically quantify the minimum QoS required by real-time networked multimedia services. Our framework has two major features: 1) it measures the level of service quality that users find intolerable through intuitive responses, thereby reducing the burden on experiment participants; and 2) it is cheat-proof because it supports systematic verification of the participants' inputs. Through a pilot study involving 38 participants, we verify the efficacy of the framework by showing that even inexperienced participants can easily produce consistent judgments. In addition, through cross-application and cross-service comparative analysis, we demonstrate the usefulness of the derived QoS thresholds. Such knowledge will serve as an important reference for the evaluation of competing applications, application recommendation, network planning, and resource arbitration.
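The verification mechanism itself is detailed in the full paper; purely as an illustration of the general idea of systematically checking participants' inputs (and not as the authors' actual method), the Python sketch below presents each network condition to a participant twice and flags participants whose responses to the duplicated conditions disagree too often. The data layout, the function names, and the 0.8 agreement threshold are all hypothetical.

from typing import Dict, List, Tuple

# Hypothetical record: for each participant, a list of (condition_id, response)
# pairs in which every network condition appears twice, the second time disguised.
# response: True = participant judged the service quality intolerable.
Responses = Dict[str, List[Tuple[str, bool]]]

def consistency_rate(trials: List[Tuple[str, bool]]) -> float:
    """Fraction of duplicated conditions on which the two responses agree."""
    by_condition: Dict[str, List[bool]] = {}
    for condition_id, response in trials:
        by_condition.setdefault(condition_id, []).append(response)
    duplicated = [r for r in by_condition.values() if len(r) == 2]
    if not duplicated:
        return 0.0
    agreements = sum(1 for first, second in duplicated if first == second)
    return agreements / len(duplicated)

def flag_unreliable(responses: Responses, threshold: float = 0.8) -> List[str]:
    """Return participants whose duplicated-trial agreement falls below the threshold."""
    return [pid for pid, trials in responses.items()
            if consistency_rate(trials) < threshold]

# Example with made-up data: "p1" answers the duplicated 100 ms / 2% loss
# condition inconsistently, so the agreement rate is 0.5 and "p1" is flagged.
responses = {
    "p1": [("100ms_2loss", True), ("100ms_2loss", False),
           ("50ms_0loss", False), ("50ms_0loss", False)],
}
print(flag_unreliable(responses))  # ["p1"]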


Supplemental Material

110223_26192_09_acm.mp4 (MP4, 176.2 MB)



Published in

              MMSys '11: Proceedings of the second annual ACM conference on Multimedia systems
              February 2011
              294 pages
ISBN: 9781450305181
DOI: 10.1145/1943552

              Copyright © 2011 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

              Publisher

              Association for Computing Machinery

              New York, NY, United States

              Publication History

              • Published: 23 February 2011


              Qualifiers

              • research-article

              Acceptance Rates

Overall Acceptance Rate: 176 of 530 submissions, 33%
